Feb 17 15:00:07.163237 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 17 15:00:08.394225 master-0 kubenswrapper[4167]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 15:00:08.394225 master-0 kubenswrapper[4167]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 17 15:00:08.394225 master-0 kubenswrapper[4167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 15:00:08.394225 master-0 kubenswrapper[4167]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 15:00:08.394225 master-0 kubenswrapper[4167]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 17 15:00:08.394225 master-0 kubenswrapper[4167]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 15:00:08.397122 master-0 kubenswrapper[4167]: I0217 15:00:08.396914 4167 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 17 15:00:08.404671 master-0 kubenswrapper[4167]: W0217 15:00:08.404612 4167 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 15:00:08.404671 master-0 kubenswrapper[4167]: W0217 15:00:08.404646 4167 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 15:00:08.404671 master-0 kubenswrapper[4167]: W0217 15:00:08.404656 4167 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 15:00:08.404671 master-0 kubenswrapper[4167]: W0217 15:00:08.404665 4167 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 15:00:08.404671 master-0 kubenswrapper[4167]: W0217 15:00:08.404673 4167 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 15:00:08.404671 master-0 kubenswrapper[4167]: W0217 15:00:08.404682 4167 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404690 4167 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404698 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404706 4167 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404716 4167 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404727 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404736 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404744 4167 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404752 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404760 4167 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404768 4167 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404776 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404784 4167 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404793 4167 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404802 4167 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404831 4167 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404842 4167 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404856 4167 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404869 4167 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 15:00:08.405066 master-0 kubenswrapper[4167]: W0217 15:00:08.404882 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.404892 4167 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.404902 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.404910 4167 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.404918 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.404925 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.404934 4167 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.404945 4167 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.404955 4167 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.404964 4167 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.404973 4167 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.404982 4167 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.404991 4167 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.405004 4167 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.405012 4167 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.405021 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.405029 4167 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.405037 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.405045 4167 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.405053 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 15:00:08.406092 master-0 kubenswrapper[4167]: W0217 15:00:08.405061 4167 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405069 4167 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405076 4167 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405084 4167 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405092 4167 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405108 4167 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405117 4167 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405125 4167 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405133 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405141 4167 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405149 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405156 4167 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405164 4167 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405172 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405180 4167 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405187 4167 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405195 4167 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405203 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405211 4167 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405218 4167 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 15:00:08.407393 master-0 kubenswrapper[4167]: W0217 15:00:08.405225 4167 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: W0217 15:00:08.405233 4167 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: W0217 15:00:08.405241 4167 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: W0217 15:00:08.405249 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: W0217 15:00:08.405256 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: W0217 15:00:08.405264 4167 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: W0217 15:00:08.405274 4167 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: W0217 15:00:08.405286 4167 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406538 4167 flags.go:64] FLAG: --address="0.0.0.0"
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406571 4167 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406587 4167 flags.go:64] FLAG: --anonymous-auth="true"
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406599 4167 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406610 4167 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406621 4167 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406633 4167 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406644 4167 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406654 4167 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406663 4167 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406674 4167 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406684 4167 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406693 4167 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406705 4167 flags.go:64] FLAG: --cgroup-root=""
Feb 17 15:00:08.408679 master-0 kubenswrapper[4167]: I0217 15:00:08.406714 4167 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406724 4167 flags.go:64] FLAG: --client-ca-file=""
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406734 4167 flags.go:64] FLAG: --cloud-config=""
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406743 4167 flags.go:64] FLAG: --cloud-provider=""
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406752 4167 flags.go:64] FLAG: --cluster-dns="[]"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406765 4167 flags.go:64] FLAG: --cluster-domain=""
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406814 4167 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406828 4167 flags.go:64] FLAG: --config-dir=""
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406840 4167 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406853 4167 flags.go:64] FLAG: --container-log-max-files="5"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406867 4167 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406878 4167 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406890 4167 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406903 4167 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406912 4167 flags.go:64] FLAG: --contention-profiling="false"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406921 4167 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406930 4167 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406941 4167 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406951 4167 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406962 4167 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406972 4167 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406981 4167 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.406990 4167 flags.go:64] FLAG: --enable-load-reader="false"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.407000 4167 flags.go:64] FLAG: --enable-server="true"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.407010 4167 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 17 15:00:08.409928 master-0 kubenswrapper[4167]: I0217 15:00:08.407021 4167 flags.go:64] FLAG: --event-burst="100"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407031 4167 flags.go:64] FLAG: --event-qps="50"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407040 4167 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407049 4167 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407059 4167 flags.go:64] FLAG: --eviction-hard=""
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407070 4167 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407079 4167 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407088 4167 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407098 4167 flags.go:64] FLAG: --eviction-soft=""
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407108 4167 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407119 4167 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407129 4167 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407138 4167 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407147 4167 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407157 4167 flags.go:64] FLAG: --fail-swap-on="true"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407166 4167 flags.go:64] FLAG: --feature-gates=""
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407177 4167 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407186 4167 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407195 4167 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407205 4167 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407214 4167 flags.go:64] FLAG: --healthz-port="10248"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407224 4167 flags.go:64] FLAG: --help="false"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407234 4167 flags.go:64] FLAG: --hostname-override=""
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407243 4167 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407253 4167 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 17 15:00:08.411420 master-0 kubenswrapper[4167]: I0217 15:00:08.407263 4167 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407272 4167 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407281 4167 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407290 4167 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407299 4167 flags.go:64] FLAG: --image-service-endpoint=""
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407309 4167 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407318 4167 flags.go:64] FLAG: --kube-api-burst="100"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407328 4167 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407337 4167 flags.go:64] FLAG: --kube-api-qps="50"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407347 4167 flags.go:64] FLAG: --kube-reserved=""
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407356 4167 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407365 4167 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407375 4167 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407385 4167 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407394 4167 flags.go:64] FLAG: --lock-file=""
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407407 4167 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407416 4167 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407426 4167 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407440 4167 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407449 4167 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407489 4167 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407514 4167 flags.go:64] FLAG: --logging-format="text"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407525 4167 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407536 4167 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407545 4167 flags.go:64] FLAG: --manifest-url=""
Feb 17 15:00:08.412766 master-0 kubenswrapper[4167]: I0217 15:00:08.407554 4167 flags.go:64] FLAG: --manifest-url-header=""
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407568 4167 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407578 4167 flags.go:64] FLAG: --max-open-files="1000000"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407589 4167 flags.go:64] FLAG: --max-pods="110"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407599 4167 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407608 4167 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407620 4167 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407629 4167 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407640 4167 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407650 4167 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407660 4167 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407683 4167 flags.go:64] FLAG: --node-status-max-images="50"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407692 4167 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407701 4167 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407711 4167 flags.go:64] FLAG: --pod-cidr=""
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407720 4167 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407734 4167 flags.go:64] FLAG: --pod-manifest-path=""
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407743 4167 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407752 4167 flags.go:64] FLAG: --pods-per-core="0"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407762 4167 flags.go:64] FLAG: --port="10250"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407771 4167 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407781 4167 flags.go:64] FLAG: --provider-id=""
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407791 4167 flags.go:64] FLAG: --qos-reserved=""
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407802 4167 flags.go:64] FLAG: --read-only-port="10255"
Feb 17 15:00:08.414246 master-0 kubenswrapper[4167]: I0217 15:00:08.407814 4167 flags.go:64] FLAG: --register-node="true"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407826 4167 flags.go:64] FLAG: --register-schedulable="true"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407838 4167 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407856 4167 flags.go:64] FLAG: --registry-burst="10"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407865 4167 flags.go:64] FLAG: --registry-qps="5"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407875 4167 flags.go:64] FLAG: --reserved-cpus=""
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407885 4167 flags.go:64] FLAG: --reserved-memory=""
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407897 4167 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407907 4167 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407916 4167 flags.go:64] FLAG: --rotate-certificates="false"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407925 4167 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407934 4167 flags.go:64] FLAG: --runonce="false"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407944 4167 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407953 4167 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407963 4167 flags.go:64] FLAG: --seccomp-default="false"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407972 4167 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407983 4167 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.407993 4167 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.408002 4167 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.408012 4167 flags.go:64] FLAG: --storage-driver-password="root"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.408021 4167 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.408030 4167 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.408040 4167 flags.go:64] FLAG: --storage-driver-user="root"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.408049 4167 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.408059 4167 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 17 15:00:08.415491 master-0 kubenswrapper[4167]: I0217 15:00:08.408068 4167 flags.go:64] FLAG: --system-cgroups=""
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: I0217 15:00:08.408077 4167 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: I0217 15:00:08.408093 4167 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: I0217 15:00:08.408104 4167 flags.go:64] FLAG: --tls-cert-file=""
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: I0217 15:00:08.408113 4167 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: I0217 15:00:08.408126 4167 flags.go:64] FLAG: --tls-min-version=""
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: I0217 15:00:08.408135 4167 flags.go:64] FLAG: --tls-private-key-file=""
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: I0217 15:00:08.408144 4167 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: I0217 15:00:08.408153 4167 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: I0217 15:00:08.408163 4167 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: I0217 15:00:08.408173 4167 flags.go:64] FLAG: --v="2"
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: I0217 15:00:08.408185 4167 flags.go:64] FLAG: --version="false"
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: I0217 15:00:08.408196 4167 flags.go:64] FLAG: --vmodule=""
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: I0217 15:00:08.408207 4167 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: I0217 15:00:08.408217 4167 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: W0217 15:00:08.408414 4167 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: W0217 15:00:08.408425 4167 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: W0217 15:00:08.408434 4167 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: W0217 15:00:08.408442 4167 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: W0217 15:00:08.408452 4167 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: W0217 15:00:08.408490 4167 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: W0217 15:00:08.408499 4167 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 15:00:08.416696 master-0 kubenswrapper[4167]: W0217 15:00:08.408508 4167 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408520 4167 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408532 4167 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408542 4167 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408550 4167 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408559 4167 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408567 4167 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408576 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408584 4167 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408592 4167 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408600 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408607 4167 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408619 4167 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408629 4167 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408638 4167 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408646 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408654 4167 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408662 4167 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408671 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 15:00:08.418094 master-0 kubenswrapper[4167]: W0217 15:00:08.408680 4167 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408688 4167 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408696 4167 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408703 4167 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408712 4167 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408719 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408727 4167 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408735 4167 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408743 4167 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408752 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408760 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408768 4167 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408776 4167 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408784 4167 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408796 4167 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408806 4167 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408816 4167 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408827 4167 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408837 4167 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408847 4167 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408857 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 15:00:08.420040 master-0 kubenswrapper[4167]: W0217 15:00:08.408865 4167 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408873 4167 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408881 4167 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408888 4167 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408897 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408905 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408913 4167 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408921 4167 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408930 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408940 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408948 4167 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408956 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408964 4167 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408972 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408980 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408988 4167 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.408996 4167 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.409004 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.409012 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 15:00:08.420724 master-0 kubenswrapper[4167]: W0217 15:00:08.409022 4167 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 15:00:08.421339 master-0 kubenswrapper[4167]: W0217 15:00:08.409032 4167 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 15:00:08.421339 master-0 kubenswrapper[4167]: W0217 15:00:08.409040 4167 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 15:00:08.421339 master-0 kubenswrapper[4167]: W0217 15:00:08.409051 4167 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 15:00:08.421339 master-0 kubenswrapper[4167]: W0217 15:00:08.409061 4167 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 15:00:08.421339 master-0 kubenswrapper[4167]: W0217 15:00:08.409071 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 15:00:08.421339 master-0 kubenswrapper[4167]: I0217 15:00:08.409096 4167 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 15:00:08.423264 master-0 kubenswrapper[4167]: I0217 15:00:08.423203 4167 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 17 15:00:08.423321 master-0 kubenswrapper[4167]: I0217 15:00:08.423263 4167 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 17 15:00:08.423510 master-0 kubenswrapper[4167]: W0217 15:00:08.423443 4167 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 15:00:08.423555 master-0 kubenswrapper[4167]: W0217 15:00:08.423511 4167 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 15:00:08.423555 master-0 kubenswrapper[4167]: W0217 15:00:08.423527 4167 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 15:00:08.423555 master-0 kubenswrapper[4167]: W0217 15:00:08.423544 4167 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 15:00:08.423676 master-0 kubenswrapper[4167]: W0217 15:00:08.423556 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 15:00:08.423676 master-0 kubenswrapper[4167]: W0217 15:00:08.423567 4167 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 15:00:08.423676 master-0 kubenswrapper[4167]: W0217 15:00:08.423578 4167 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 15:00:08.423676 master-0 kubenswrapper[4167]: W0217 15:00:08.423588 4167 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 15:00:08.423676 master-0 kubenswrapper[4167]: W0217 15:00:08.423598 4167 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 15:00:08.423676 master-0 kubenswrapper[4167]: W0217 15:00:08.423610 4167 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 15:00:08.423676 master-0 kubenswrapper[4167]: W0217 15:00:08.423625 4167 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 15:00:08.423676 master-0 kubenswrapper[4167]: W0217 15:00:08.423642 4167 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 15:00:08.423676 master-0 kubenswrapper[4167]: W0217 15:00:08.423653 4167 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 15:00:08.423676 master-0 kubenswrapper[4167]: W0217 15:00:08.423663 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 15:00:08.423676 master-0 kubenswrapper[4167]: W0217 15:00:08.423674 4167 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423685 4167 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423695 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423707 4167 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423717 4167 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423728 4167 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423738 4167 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423750 4167 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423760 4167 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423770 4167 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423784 4167 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423794 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423804 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423814 4167 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423824 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423836 4167 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423846 4167 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423856 4167 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423867 4167 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423877 4167 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 15:00:08.423996 master-0 kubenswrapper[4167]: W0217 15:00:08.423887 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.423902 4167 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.423916 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.423926 4167 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.423937 4167 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.423947 4167 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.423962 4167 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.423976 4167 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.423987 4167 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.423997 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.424007 4167 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.424018 4167 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.424028 4167 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.424039 4167 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.424051 4167 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.424062 4167 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.424072 4167 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.424082 4167 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.424093 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 15:00:08.424713 master-0 kubenswrapper[4167]: W0217 15:00:08.424103 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424113 4167 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424123 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424133 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424143 4167 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424156 4167 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424167 4167 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424178 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424188 4167 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424198 4167 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424208 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424218 4167 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424229 4167 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424239 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424250 4167 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424263 4167 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424274 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424286 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 15:00:08.425426 master-0 kubenswrapper[4167]: W0217 15:00:08.424298 4167 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: I0217 15:00:08.424315 4167 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: W0217 15:00:08.424716 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: W0217 15:00:08.424743 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: W0217 15:00:08.424757 4167 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: W0217 15:00:08.424771 4167 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: W0217 15:00:08.424783 4167 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: W0217 15:00:08.424794 4167 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: W0217 15:00:08.424806 4167 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: W0217 15:00:08.424817 4167 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: W0217 15:00:08.424828 4167 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: W0217 15:00:08.424838 4167 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: W0217 15:00:08.424848 4167 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: W0217 15:00:08.424858 4167 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: W0217 15:00:08.424868 4167 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 15:00:08.426011 master-0 kubenswrapper[4167]: W0217 15:00:08.424881 4167 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.424891 4167 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.424901 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.424909 4167 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.424917 4167 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.424924 4167 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.424933 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.424941 4167 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.424949 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.424957 4167 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.424967 4167 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.424980 4167 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.424990 4167 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.425001 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.425012 4167 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.425023 4167 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.425035 4167 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.425046 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.425055 4167 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.425065 4167 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 15:00:08.426510 master-0 kubenswrapper[4167]: W0217 15:00:08.425075 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425090 4167 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425105 4167 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425116 4167 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425127 4167 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425140 4167 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425153 4167 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425167 4167 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425181 4167 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425195 4167 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425208 4167 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425220 4167 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425231 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425242 4167 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425252 4167 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425265 4167 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425277 4167 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425289 4167 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 15:00:08.427220 master-0 kubenswrapper[4167]: W0217 15:00:08.425299 4167 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425310 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425320 4167 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425330 4167 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425341 4167 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425351 4167 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425360 4167 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425368 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425376 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425385 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425394 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425405 4167 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425415 4167 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425426 4167 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425436 4167 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425448 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425492 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425503 4167 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425513 4167 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425524 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 15:00:08.427749 master-0 kubenswrapper[4167]: W0217 15:00:08.425535 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 17 15:00:08.428329 master-0 kubenswrapper[4167]: I0217 15:00:08.425551 4167 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 15:00:08.428329 master-0 kubenswrapper[4167]: I0217 15:00:08.425888 4167 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 17 15:00:08.429854 master-0 kubenswrapper[4167]: I0217 15:00:08.429811 4167 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 17 15:00:08.436869 master-0 kubenswrapper[4167]: I0217 15:00:08.436795 4167 server.go:997] "Starting client certificate rotation"
Feb 17 15:00:08.436869 master-0 kubenswrapper[4167]: I0217 15:00:08.436860 4167 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 17 15:00:08.437149 master-0 kubenswrapper[4167]: I0217 15:00:08.437080 4167 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 17 15:00:08.484645 master-0 kubenswrapper[4167]: I0217 15:00:08.484528 4167 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 15:00:08.493248 master-0 kubenswrapper[4167]: E0217 15:00:08.493166 4167 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 17 15:00:08.497200 master-0 kubenswrapper[4167]: I0217 15:00:08.497133 4167 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 15:00:08.531130 master-0 kubenswrapper[4167]: I0217 15:00:08.531037 4167 log.go:25] "Validated CRI v1 runtime API"
Feb 17 15:00:08.546498 master-0 kubenswrapper[4167]: I0217 15:00:08.546377 4167 log.go:25] "Validated CRI v1 image API"
Feb 17 15:00:08.548898 master-0 kubenswrapper[4167]: I0217 15:00:08.548814 4167 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 17 15:00:08.556573 master-0 kubenswrapper[4167]: I0217 15:00:08.556495 4167 fs.go:135] Filesystem UUIDs:
map[4e612f26-a2b1-4cb3-97c9-965b3561529c:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 17 15:00:08.556573 master-0 kubenswrapper[4167]: I0217 15:00:08.556545 4167 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Feb 17 15:00:08.594709 master-0 kubenswrapper[4167]: I0217 15:00:08.593962 4167 manager.go:217] Machine: {Timestamp:2026-02-17 15:00:08.590713961 +0000 UTC m=+1.125378843 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ff628177d0ed41fb9732e0b0efb95e0a SystemUUID:ff628177-d0ed-41fb-9732-e0b0efb95e0a BootID:1c90f5ae-c817-4d5a-b4dd-067c150502f0 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 
Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:79:b8:2d Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:97:d0:9b Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:fa:aa:43:f9:eb:48 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} 
{Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 
Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 17 15:00:08.594709 master-0 kubenswrapper[4167]: I0217 15:00:08.594597 4167 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 17 15:00:08.595149 master-0 kubenswrapper[4167]: I0217 15:00:08.594874 4167 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 17 15:00:08.595631 master-0 kubenswrapper[4167]: I0217 15:00:08.595585 4167 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 17 15:00:08.596026 master-0 kubenswrapper[4167]: I0217 15:00:08.595960 4167 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 17 15:00:08.596363 master-0 kubenswrapper[4167]: I0217 15:00:08.596014 4167 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 17 15:00:08.596440 master-0 kubenswrapper[4167]: I0217 15:00:08.596383 4167 topology_manager.go:138] "Creating topology manager with none policy" Feb 17 15:00:08.596440 master-0 kubenswrapper[4167]: I0217 15:00:08.596407 4167 container_manager_linux.go:303] "Creating device plugin manager" Feb 17 15:00:08.597123 master-0 kubenswrapper[4167]: I0217 15:00:08.597075 4167 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 15:00:08.597587 master-0 kubenswrapper[4167]: I0217 15:00:08.597540 4167 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 15:00:08.598430 master-0 kubenswrapper[4167]: I0217 15:00:08.598382 4167 state_mem.go:36] "Initialized new in-memory state store" Feb 17 15:00:08.598603 master-0 kubenswrapper[4167]: I0217 15:00:08.598563 4167 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 17 15:00:08.603512 master-0 kubenswrapper[4167]: I0217 15:00:08.603366 4167 kubelet.go:418] "Attempting to sync node with API server" Feb 17 15:00:08.603512 master-0 kubenswrapper[4167]: I0217 15:00:08.603418 4167 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 17 15:00:08.603765 master-0 kubenswrapper[4167]: I0217 15:00:08.603543 4167 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 17 15:00:08.603765 master-0 kubenswrapper[4167]: I0217 15:00:08.603570 4167 kubelet.go:324] "Adding apiserver pod source" Feb 17 15:00:08.603765 master-0 kubenswrapper[4167]: I0217 15:00:08.603598 4167 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 17 15:00:08.612283 master-0 kubenswrapper[4167]: W0217 15:00:08.612144 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:08.612283 master-0 kubenswrapper[4167]: W0217 15:00:08.612213 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 
15:00:08.612606 master-0 kubenswrapper[4167]: E0217 15:00:08.612353 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:00:08.612606 master-0 kubenswrapper[4167]: E0217 15:00:08.612271 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:00:08.615737 master-0 kubenswrapper[4167]: I0217 15:00:08.615670 4167 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1" Feb 17 15:00:08.617788 master-0 kubenswrapper[4167]: I0217 15:00:08.617753 4167 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 17 15:00:08.618210 master-0 kubenswrapper[4167]: I0217 15:00:08.618176 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 17 15:00:08.618262 master-0 kubenswrapper[4167]: I0217 15:00:08.618215 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 17 15:00:08.618262 master-0 kubenswrapper[4167]: I0217 15:00:08.618232 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 17 15:00:08.618262 master-0 kubenswrapper[4167]: I0217 15:00:08.618246 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 17 15:00:08.618337 master-0 kubenswrapper[4167]: I0217 15:00:08.618259 4167 plugins.go:603] "Loaded volume 
plugin" pluginName="kubernetes.io/nfs" Feb 17 15:00:08.618337 master-0 kubenswrapper[4167]: I0217 15:00:08.618291 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 17 15:00:08.618337 master-0 kubenswrapper[4167]: I0217 15:00:08.618308 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 17 15:00:08.618337 master-0 kubenswrapper[4167]: I0217 15:00:08.618325 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 17 15:00:08.618440 master-0 kubenswrapper[4167]: I0217 15:00:08.618367 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 17 15:00:08.618440 master-0 kubenswrapper[4167]: I0217 15:00:08.618385 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 17 15:00:08.618440 master-0 kubenswrapper[4167]: I0217 15:00:08.618406 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 17 15:00:08.618440 master-0 kubenswrapper[4167]: I0217 15:00:08.618429 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 17 15:00:08.628607 master-0 kubenswrapper[4167]: I0217 15:00:08.628541 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 17 15:00:08.629365 master-0 kubenswrapper[4167]: I0217 15:00:08.629328 4167 server.go:1280] "Started kubelet" Feb 17 15:00:08.629466 master-0 kubenswrapper[4167]: I0217 15:00:08.629399 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:08.629763 master-0 kubenswrapper[4167]: I0217 15:00:08.629627 4167 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 17 15:00:08.629829 master-0 kubenswrapper[4167]: I0217 15:00:08.629806 4167 server_v1.go:47] 
"podresources" method="list" useActivePods=true Feb 17 15:00:08.629957 master-0 kubenswrapper[4167]: I0217 15:00:08.629801 4167 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 17 15:00:08.630652 master-0 kubenswrapper[4167]: I0217 15:00:08.630604 4167 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 17 15:00:08.630984 master-0 systemd[1]: Started Kubernetes Kubelet. Feb 17 15:00:08.632852 master-0 kubenswrapper[4167]: I0217 15:00:08.632740 4167 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 17 15:00:08.632852 master-0 kubenswrapper[4167]: I0217 15:00:08.632774 4167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 17 15:00:08.633144 master-0 kubenswrapper[4167]: E0217 15:00:08.633024 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:08.633436 master-0 kubenswrapper[4167]: I0217 15:00:08.633404 4167 server.go:449] "Adding debug handlers to kubelet server" Feb 17 15:00:08.633723 master-0 kubenswrapper[4167]: I0217 15:00:08.633701 4167 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 17 15:00:08.633723 master-0 kubenswrapper[4167]: I0217 15:00:08.633721 4167 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 17 15:00:08.633833 master-0 kubenswrapper[4167]: I0217 15:00:08.633815 4167 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 17 15:00:08.633953 master-0 kubenswrapper[4167]: I0217 15:00:08.633928 4167 reconstruct.go:97] "Volume reconstruction finished" Feb 17 15:00:08.633989 master-0 kubenswrapper[4167]: I0217 15:00:08.633954 4167 reconciler.go:26] "Reconciler: start to sync state" Feb 17 15:00:08.634109 master-0 kubenswrapper[4167]: E0217 15:00:08.634078 4167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 17 15:00:08.634450 master-0 kubenswrapper[4167]: W0217 15:00:08.634348 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:08.635247 master-0 kubenswrapper[4167]: E0217 15:00:08.634616 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:00:08.642510 master-0 kubenswrapper[4167]: E0217 15:00:08.634324 4167 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189510b778a4f402 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.629294082 +0000 UTC m=+1.163958904,LastTimestamp:2026-02-17 15:00:08.629294082 +0000 UTC m=+1.163958904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:08.647008 master-0 kubenswrapper[4167]: I0217 15:00:08.646869 4167 factory.go:219] Registration of the containerd container 
factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 17 15:00:08.647008 master-0 kubenswrapper[4167]: I0217 15:00:08.646918 4167 factory.go:55] Registering systemd factory Feb 17 15:00:08.647008 master-0 kubenswrapper[4167]: I0217 15:00:08.646935 4167 factory.go:221] Registration of the systemd container factory successfully Feb 17 15:00:08.647345 master-0 kubenswrapper[4167]: E0217 15:00:08.647158 4167 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 17 15:00:08.647520 master-0 kubenswrapper[4167]: I0217 15:00:08.647489 4167 factory.go:153] Registering CRI-O factory Feb 17 15:00:08.647520 master-0 kubenswrapper[4167]: I0217 15:00:08.647511 4167 factory.go:221] Registration of the crio container factory successfully Feb 17 15:00:08.647619 master-0 kubenswrapper[4167]: I0217 15:00:08.647550 4167 factory.go:103] Registering Raw factory Feb 17 15:00:08.647619 master-0 kubenswrapper[4167]: I0217 15:00:08.647570 4167 manager.go:1196] Started watching for new ooms in manager Feb 17 15:00:08.648629 master-0 kubenswrapper[4167]: I0217 15:00:08.648605 4167 manager.go:319] Starting recovery of all containers Feb 17 15:00:08.663785 master-0 kubenswrapper[4167]: I0217 15:00:08.663742 4167 manager.go:324] Recovery completed Feb 17 15:00:08.672289 master-0 kubenswrapper[4167]: I0217 15:00:08.672249 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:00:08.674102 master-0 kubenswrapper[4167]: I0217 15:00:08.674019 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:00:08.674102 master-0 kubenswrapper[4167]: I0217 15:00:08.674111 4167 kubelet_node_status.go:724] "Recording event message for 
node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:00:08.674230 master-0 kubenswrapper[4167]: I0217 15:00:08.674126 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:00:08.675540 master-0 kubenswrapper[4167]: I0217 15:00:08.675497 4167 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 17 15:00:08.675540 master-0 kubenswrapper[4167]: I0217 15:00:08.675521 4167 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 17 15:00:08.675649 master-0 kubenswrapper[4167]: I0217 15:00:08.675546 4167 state_mem.go:36] "Initialized new in-memory state store" Feb 17 15:00:08.733881 master-0 kubenswrapper[4167]: E0217 15:00:08.733791 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:08.735505 master-0 kubenswrapper[4167]: I0217 15:00:08.735427 4167 policy_none.go:49] "None policy: Start" Feb 17 15:00:08.736917 master-0 kubenswrapper[4167]: I0217 15:00:08.736889 4167 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 17 15:00:08.736966 master-0 kubenswrapper[4167]: I0217 15:00:08.736928 4167 state_mem.go:35] "Initializing new in-memory state store" Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: E0217 15:00:08.833990 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: E0217 15:00:08.835445 4167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.854141 4167 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.856721 4167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.856773 4167 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.856806 4167 kubelet.go:2335] "Starting kubelet main sync loop" Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: E0217 15:00:08.856869 4167 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: W0217 15:00:08.857954 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: E0217 15:00:08.858028 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.863976 4167 manager.go:334] "Starting Device Plugin manager" Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.864085 4167 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.864115 4167 server.go:79] "Starting device plugin registration server" Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 
15:00:08.865246 4167 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.865276 4167 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.865537 4167 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.865679 4167 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.865688 4167 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: E0217 15:00:08.866777 4167 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.958020 4167 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.958266 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.959982 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.960039 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.960057 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.960226 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.960810 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.960888 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.961353 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.961421 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.961439 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.961640 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.961719 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.961744 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.961757 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.961792 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.961808 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.962779 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.962814 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.962831 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.962842 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.962869 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:09.003815 master-0 kubenswrapper[4167]: I0217 15:00:08.962885 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.963000 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.963155 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.963197 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.963991 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.964032 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.964053 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.964091 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.964124 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.964144 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.964196 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.964352 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.964394 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.965062 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.965095 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.965113 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.965063 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.965198 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.965217 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.965325 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.965364 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.965389 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.966292 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.966322 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.966338 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.966366 4167 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.966512 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.966559 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: I0217 15:00:08.966578 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:09.005405 master-0 kubenswrapper[4167]: E0217 15:00:08.967557 4167 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 17 15:00:09.035804 master-0 kubenswrapper[4167]: I0217 15:00:09.035768 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:00:09.035854 master-0 kubenswrapper[4167]: I0217 15:00:09.035813 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.035854 master-0 kubenswrapper[4167]: I0217 15:00:09.035833 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.035916 master-0 kubenswrapper[4167]: I0217 15:00:09.035861 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 17 15:00:09.035996 master-0 kubenswrapper[4167]: I0217 15:00:09.035944 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 17 15:00:09.036066 master-0 kubenswrapper[4167]: I0217 15:00:09.036036 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 17 15:00:09.036101 master-0 kubenswrapper[4167]: I0217 15:00:09.036084 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:00:09.036214 master-0 kubenswrapper[4167]: I0217 15:00:09.036163 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.036255 master-0 kubenswrapper[4167]: I0217 15:00:09.036228 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.036285 master-0 kubenswrapper[4167]: I0217 15:00:09.036258 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.036316 master-0 kubenswrapper[4167]: I0217 15:00:09.036292 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.036344 master-0 kubenswrapper[4167]: I0217 15:00:09.036324 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 17 15:00:09.036384 master-0 kubenswrapper[4167]: I0217 15:00:09.036372 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.036432 master-0 kubenswrapper[4167]: I0217 15:00:09.036397 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.036432 master-0 kubenswrapper[4167]: I0217 15:00:09.036421 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.036541 master-0 kubenswrapper[4167]: I0217 15:00:09.036446 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.036541 master-0 kubenswrapper[4167]: I0217 15:00:09.036498 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.137077 master-0 kubenswrapper[4167]: I0217 15:00:09.136971 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:00:09.137077 master-0 kubenswrapper[4167]: I0217 15:00:09.137065 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.137294 master-0 kubenswrapper[4167]: I0217 15:00:09.137249 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:00:09.137412 master-0 kubenswrapper[4167]: I0217 15:00:09.137294 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.137412 master-0 kubenswrapper[4167]: I0217 15:00:09.137351 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.137412 master-0 kubenswrapper[4167]: I0217 15:00:09.137352 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.137781 master-0 kubenswrapper[4167]: I0217 15:00:09.137504 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.137781 master-0 kubenswrapper[4167]: I0217 15:00:09.137541 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.137781 master-0 kubenswrapper[4167]: I0217 15:00:09.137599 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.137781 master-0 kubenswrapper[4167]: I0217 15:00:09.137635 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 17 15:00:09.137781 master-0 kubenswrapper[4167]: I0217 15:00:09.137662 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.137781 master-0 kubenswrapper[4167]: I0217 15:00:09.137667 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 17 15:00:09.137781 master-0 kubenswrapper[4167]: I0217 15:00:09.137739 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 17 15:00:09.137781 master-0 kubenswrapper[4167]: I0217 15:00:09.137742 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 17 15:00:09.137781 master-0 kubenswrapper[4167]: I0217 15:00:09.137759 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 17 15:00:09.137781 master-0 kubenswrapper[4167]: I0217 15:00:09.137781 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:00:09.137781 master-0 kubenswrapper[4167]: I0217 15:00:09.137777 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.137820 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.137856 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.137868 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.137882 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.137923 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.137937 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.137985 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.138125 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.138241 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.138305 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.138306 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.138347 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.138396 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.138406 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.138441 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.138505 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.138696 master-0 kubenswrapper[4167]: I0217 15:00:09.138555 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.168283 master-0 kubenswrapper[4167]: I0217 15:00:09.168186 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:09.170582 master-0 kubenswrapper[4167]: I0217 15:00:09.170519 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:09.170582 master-0 kubenswrapper[4167]: I0217 15:00:09.170583 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:09.170831 master-0 kubenswrapper[4167]: I0217 15:00:09.170609 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:09.170831 master-0 kubenswrapper[4167]: I0217 15:00:09.170699 4167 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 17 15:00:09.172395 master-0 kubenswrapper[4167]: E0217 15:00:09.172322 4167 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 17 15:00:09.236891 master-0 kubenswrapper[4167]: E0217 15:00:09.236809 4167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Feb 17 15:00:09.294262 master-0 kubenswrapper[4167]: I0217 15:00:09.294101 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:09.327008 master-0 kubenswrapper[4167]: I0217 15:00:09.326924 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 17 15:00:09.342070 master-0 kubenswrapper[4167]: I0217 15:00:09.341996 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 17 15:00:09.372150 master-0 kubenswrapper[4167]: I0217 15:00:09.372024 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:00:09.380295 master-0 kubenswrapper[4167]: I0217 15:00:09.380140 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:09.572069 master-0 kubenswrapper[4167]: W0217 15:00:09.571741 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 17 15:00:09.572069 master-0 kubenswrapper[4167]: E0217 15:00:09.571847 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 17 15:00:09.573502 master-0 kubenswrapper[4167]: I0217 15:00:09.572711 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:09.574051 master-0 kubenswrapper[4167]: I0217 15:00:09.573997 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:09.574300 master-0 kubenswrapper[4167]: I0217 15:00:09.574056 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:09.574300 master-0 kubenswrapper[4167]: I0217 15:00:09.574074 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:09.574300 master-0 kubenswrapper[4167]: I0217 15:00:09.574131 4167 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 17 15:00:09.574908 master-0 kubenswrapper[4167]: E0217 15:00:09.574850 4167 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 17 15:00:09.632060 master-0 kubenswrapper[4167]: I0217 15:00:09.631937 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 17 15:00:09.787343 master-0 kubenswrapper[4167]: W0217 15:00:09.787211 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 17 15:00:09.787601 master-0 kubenswrapper[4167]: E0217 15:00:09.787342 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 17 15:00:09.840725 master-0 kubenswrapper[4167]: W0217 15:00:09.840193 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 17 15:00:09.840725 master-0 kubenswrapper[4167]: E0217 15:00:09.840354 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 17 15:00:10.038189 master-0 kubenswrapper[4167]: E0217 15:00:10.038077 4167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Feb 17 15:00:10.273412 master-0 kubenswrapper[4167]: W0217 15:00:10.273250 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 17 15:00:10.273412 master-0 kubenswrapper[4167]: E0217 15:00:10.273376 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 17 15:00:10.376019 master-0 kubenswrapper[4167]: I0217 15:00:10.375887 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:10.377494 master-0 kubenswrapper[4167]: I0217 15:00:10.377416 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:10.377593 master-0 kubenswrapper[4167]: I0217 15:00:10.377521 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:10.377593 master-0 kubenswrapper[4167]: I0217 15:00:10.377549 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:10.377693 master-0 kubenswrapper[4167]: I0217 15:00:10.377644 4167 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 17 15:00:10.379780 master-0 kubenswrapper[4167]: E0217
15:00:10.379699 4167 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 17 15:00:10.396610 master-0 kubenswrapper[4167]: W0217 15:00:10.395574 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3322fd3717f4aec0d8f54ec7862c07e.slice/crio-77a5b96685468a1686135c8d7d6d053d9bc8223dda29da38cb0e4b9ffeb56e90 WatchSource:0}: Error finding container 77a5b96685468a1686135c8d7d6d053d9bc8223dda29da38cb0e4b9ffeb56e90: Status 404 returned error can't find the container with id 77a5b96685468a1686135c8d7d6d053d9bc8223dda29da38cb0e4b9ffeb56e90 Feb 17 15:00:10.420346 master-0 kubenswrapper[4167]: I0217 15:00:10.420064 4167 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 15:00:10.478351 master-0 kubenswrapper[4167]: W0217 15:00:10.478264 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d1e91e5a1fed5cf7076a92d2830d36f.slice/crio-79ea29fc08e254fc3e14a364622e4facf6b96ac258189e8fa32888318e699341 WatchSource:0}: Error finding container 79ea29fc08e254fc3e14a364622e4facf6b96ac258189e8fa32888318e699341: Status 404 returned error can't find the container with id 79ea29fc08e254fc3e14a364622e4facf6b96ac258189e8fa32888318e699341 Feb 17 15:00:10.526999 master-0 kubenswrapper[4167]: I0217 15:00:10.526827 4167 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 15:00:10.528567 master-0 kubenswrapper[4167]: E0217 15:00:10.528494 4167 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:00:10.536764 master-0 kubenswrapper[4167]: W0217 15:00:10.536705 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod400a178a4d5e9a88ba5bbbd1da2ad15e.slice/crio-6a5d363cdb7b8bdbfad3ed76750d978c8f44d1960c0e0c7352027f659a456edd WatchSource:0}: Error finding container 6a5d363cdb7b8bdbfad3ed76750d978c8f44d1960c0e0c7352027f659a456edd: Status 404 returned error can't find the container with id 6a5d363cdb7b8bdbfad3ed76750d978c8f44d1960c0e0c7352027f659a456edd Feb 17 15:00:10.632217 master-0 kubenswrapper[4167]: I0217 15:00:10.632110 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:10.650755 master-0 kubenswrapper[4167]: W0217 15:00:10.650705 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9460ca0802075a8a6a10d7b3e6052c4d.slice/crio-4bb1dadfa9fa746e498f74fe7c1710620a7f822dde2a54f2002cb48a072a2427 WatchSource:0}: Error finding container 4bb1dadfa9fa746e498f74fe7c1710620a7f822dde2a54f2002cb48a072a2427: Status 404 returned error can't find the container with id 4bb1dadfa9fa746e498f74fe7c1710620a7f822dde2a54f2002cb48a072a2427 Feb 17 15:00:10.864513 master-0 kubenswrapper[4167]: I0217 15:00:10.864397 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"933619de776a30ee8db83753fa79bb4994c3f6de2f880c843e582119c60f8f70"} Feb 17 15:00:10.865242 master-0 kubenswrapper[4167]: I0217 
15:00:10.865209 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"77a5b96685468a1686135c8d7d6d053d9bc8223dda29da38cb0e4b9ffeb56e90"} Feb 17 15:00:10.866358 master-0 kubenswrapper[4167]: I0217 15:00:10.866269 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"4bb1dadfa9fa746e498f74fe7c1710620a7f822dde2a54f2002cb48a072a2427"} Feb 17 15:00:10.867207 master-0 kubenswrapper[4167]: I0217 15:00:10.867169 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"6a5d363cdb7b8bdbfad3ed76750d978c8f44d1960c0e0c7352027f659a456edd"} Feb 17 15:00:10.868266 master-0 kubenswrapper[4167]: I0217 15:00:10.868227 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"79ea29fc08e254fc3e14a364622e4facf6b96ac258189e8fa32888318e699341"} Feb 17 15:00:11.630510 master-0 kubenswrapper[4167]: I0217 15:00:11.630429 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:11.640325 master-0 kubenswrapper[4167]: E0217 15:00:11.640226 4167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 17 15:00:11.724838 master-0 
kubenswrapper[4167]: W0217 15:00:11.724748 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:11.724838 master-0 kubenswrapper[4167]: E0217 15:00:11.724821 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:00:11.736668 master-0 kubenswrapper[4167]: W0217 15:00:11.736606 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:11.736668 master-0 kubenswrapper[4167]: E0217 15:00:11.736669 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:00:11.980522 master-0 kubenswrapper[4167]: I0217 15:00:11.980389 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:00:11.981693 master-0 kubenswrapper[4167]: I0217 15:00:11.981658 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:00:11.981760 master-0 kubenswrapper[4167]: I0217 15:00:11.981717 4167 kubelet_node_status.go:724] "Recording 
event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:00:11.981760 master-0 kubenswrapper[4167]: I0217 15:00:11.981730 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:00:11.981845 master-0 kubenswrapper[4167]: I0217 15:00:11.981784 4167 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 17 15:00:11.982893 master-0 kubenswrapper[4167]: E0217 15:00:11.982827 4167 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 17 15:00:12.380448 master-0 kubenswrapper[4167]: W0217 15:00:12.380345 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:12.380448 master-0 kubenswrapper[4167]: E0217 15:00:12.380443 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:00:12.427715 master-0 kubenswrapper[4167]: W0217 15:00:12.427624 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:12.427715 master-0 kubenswrapper[4167]: E0217 15:00:12.427673 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:00:12.630883 master-0 kubenswrapper[4167]: I0217 15:00:12.630684 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:13.630754 master-0 kubenswrapper[4167]: I0217 15:00:13.630700 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:13.875936 master-0 kubenswrapper[4167]: I0217 15:00:13.875869 4167 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="b7bba1848d8e5849cd7385799efab8edc5b4febf88a3e8ee8efae1fdf0ca6b20" exitCode=0 Feb 17 15:00:13.875936 master-0 kubenswrapper[4167]: I0217 15:00:13.875921 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"b7bba1848d8e5849cd7385799efab8edc5b4febf88a3e8ee8efae1fdf0ca6b20"} Feb 17 15:00:13.876172 master-0 kubenswrapper[4167]: I0217 15:00:13.876009 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:00:13.876798 master-0 kubenswrapper[4167]: I0217 15:00:13.876771 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:00:13.876798 master-0 kubenswrapper[4167]: I0217 15:00:13.876798 4167 kubelet_node_status.go:724] "Recording event 
message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:00:13.876869 master-0 kubenswrapper[4167]: I0217 15:00:13.876806 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:00:14.631285 master-0 kubenswrapper[4167]: I0217 15:00:14.631201 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:14.757063 master-0 kubenswrapper[4167]: I0217 15:00:14.756913 4167 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 15:00:14.758312 master-0 kubenswrapper[4167]: E0217 15:00:14.758276 4167 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:00:14.842296 master-0 kubenswrapper[4167]: E0217 15:00:14.842212 4167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 17 15:00:14.879735 master-0 kubenswrapper[4167]: I0217 15:00:14.879684 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/0.log" Feb 17 15:00:14.880106 master-0 kubenswrapper[4167]: I0217 15:00:14.880067 4167 generic.go:334] "Generic (PLEG): container finished" 
podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="b49e21b31379779c4bfb8986130508b5a67496d0b6ff1b3b5d75a48b6743c94f" exitCode=1 Feb 17 15:00:14.880106 master-0 kubenswrapper[4167]: I0217 15:00:14.880100 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"b49e21b31379779c4bfb8986130508b5a67496d0b6ff1b3b5d75a48b6743c94f"} Feb 17 15:00:14.880184 master-0 kubenswrapper[4167]: I0217 15:00:14.880161 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:00:14.881109 master-0 kubenswrapper[4167]: I0217 15:00:14.881083 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:00:14.881194 master-0 kubenswrapper[4167]: I0217 15:00:14.881121 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:00:14.881194 master-0 kubenswrapper[4167]: I0217 15:00:14.881131 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:00:14.881498 master-0 kubenswrapper[4167]: I0217 15:00:14.881377 4167 scope.go:117] "RemoveContainer" containerID="b49e21b31379779c4bfb8986130508b5a67496d0b6ff1b3b5d75a48b6743c94f" Feb 17 15:00:15.183828 master-0 kubenswrapper[4167]: I0217 15:00:15.183650 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:00:15.185145 master-0 kubenswrapper[4167]: I0217 15:00:15.185114 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:00:15.185227 master-0 kubenswrapper[4167]: I0217 15:00:15.185161 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:00:15.185227 master-0 
kubenswrapper[4167]: I0217 15:00:15.185174 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:00:15.185227 master-0 kubenswrapper[4167]: I0217 15:00:15.185215 4167 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 17 15:00:15.186571 master-0 kubenswrapper[4167]: E0217 15:00:15.186442 4167 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 17 15:00:15.630756 master-0 kubenswrapper[4167]: I0217 15:00:15.630707 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:15.780214 master-0 kubenswrapper[4167]: W0217 15:00:15.780010 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:15.780214 master-0 kubenswrapper[4167]: E0217 15:00:15.780132 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:00:15.884669 master-0 kubenswrapper[4167]: I0217 15:00:15.884206 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/1.log" Feb 17 15:00:15.885056 master-0 
kubenswrapper[4167]: I0217 15:00:15.885021 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/0.log" Feb 17 15:00:15.885912 master-0 kubenswrapper[4167]: I0217 15:00:15.885860 4167 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="6c8e5886b9eb6a295069ba57584ee93950597fa9c820f8edefcae986e6c1a551" exitCode=1 Feb 17 15:00:15.885982 master-0 kubenswrapper[4167]: I0217 15:00:15.885940 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:00:15.886025 master-0 kubenswrapper[4167]: I0217 15:00:15.885924 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"6c8e5886b9eb6a295069ba57584ee93950597fa9c820f8edefcae986e6c1a551"} Feb 17 15:00:15.886088 master-0 kubenswrapper[4167]: I0217 15:00:15.886063 4167 scope.go:117] "RemoveContainer" containerID="b49e21b31379779c4bfb8986130508b5a67496d0b6ff1b3b5d75a48b6743c94f" Feb 17 15:00:15.886688 master-0 kubenswrapper[4167]: I0217 15:00:15.886663 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:00:15.886737 master-0 kubenswrapper[4167]: I0217 15:00:15.886697 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:00:15.886737 master-0 kubenswrapper[4167]: I0217 15:00:15.886711 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:00:15.887069 master-0 kubenswrapper[4167]: I0217 15:00:15.887044 4167 scope.go:117] "RemoveContainer" containerID="6c8e5886b9eb6a295069ba57584ee93950597fa9c820f8edefcae986e6c1a551" Feb 17 15:00:15.887276 master-0 
kubenswrapper[4167]: E0217 15:00:15.887233 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e" Feb 17 15:00:16.176042 master-0 kubenswrapper[4167]: W0217 15:00:16.175853 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:16.176042 master-0 kubenswrapper[4167]: E0217 15:00:16.175934 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:00:16.176042 master-0 kubenswrapper[4167]: E0217 15:00:16.175871 4167 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189510b778a4f402 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.629294082 +0000 UTC m=+1.163958904,LastTimestamp:2026-02-17 15:00:08.629294082 +0000 UTC 
m=+1.163958904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:16.631353 master-0 kubenswrapper[4167]: I0217 15:00:16.631272 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:16.917747 master-0 kubenswrapper[4167]: W0217 15:00:16.917597 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:16.917747 master-0 kubenswrapper[4167]: E0217 15:00:16.917690 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:00:16.918300 master-0 kubenswrapper[4167]: I0217 15:00:16.917859 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:00:16.919163 master-0 kubenswrapper[4167]: I0217 15:00:16.919065 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:00:16.919163 master-0 kubenswrapper[4167]: I0217 15:00:16.919106 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:00:16.919163 master-0 kubenswrapper[4167]: I0217 15:00:16.919117 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Feb 17 15:00:16.919479 master-0 kubenswrapper[4167]: I0217 15:00:16.919435 4167 scope.go:117] "RemoveContainer" containerID="6c8e5886b9eb6a295069ba57584ee93950597fa9c820f8edefcae986e6c1a551" Feb 17 15:00:16.919729 master-0 kubenswrapper[4167]: E0217 15:00:16.919670 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e" Feb 17 15:00:17.266442 master-0 kubenswrapper[4167]: W0217 15:00:17.266236 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:17.266442 master-0 kubenswrapper[4167]: E0217 15:00:17.266329 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:00:17.630744 master-0 kubenswrapper[4167]: I0217 15:00:17.630676 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 17 15:00:18.631108 master-0 kubenswrapper[4167]: I0217 15:00:18.631033 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 17 15:00:18.867001 master-0 kubenswrapper[4167]: E0217 15:00:18.866904 4167 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 17 15:00:19.631916 master-0 kubenswrapper[4167]: I0217 15:00:19.631715 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 17 15:00:19.926786 master-0 kubenswrapper[4167]: I0217 15:00:19.926579 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/1.log"
Feb 17 15:00:20.631112 master-0 kubenswrapper[4167]: I0217 15:00:20.631051 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 17 15:00:20.932027 master-0 kubenswrapper[4167]: I0217 15:00:20.931936 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"4944adde3c461c436bd108e43bf28aecebbade517fd0bca757eeee8a5f2db7dc"}
Feb 17 15:00:20.932027 master-0 kubenswrapper[4167]: I0217 15:00:20.931977 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:20.933158 master-0 kubenswrapper[4167]: I0217 15:00:20.933120 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:20.933158 master-0 kubenswrapper[4167]: I0217 15:00:20.933158 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:20.933242 master-0 kubenswrapper[4167]: I0217 15:00:20.933172 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:20.935940 master-0 kubenswrapper[4167]: I0217 15:00:20.935881 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"8105fa4b966940334c286ed94a1f0129c72a04a09b1bf683900cc1744fb06fec"}
Feb 17 15:00:20.936113 master-0 kubenswrapper[4167]: I0217 15:00:20.936034 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:20.936185 master-0 kubenswrapper[4167]: I0217 15:00:20.935930 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"4d0630e2330edb92a7d17fc9b9a41a0b13733df95ae437b7fe0b5957cb60ed7a"}
Feb 17 15:00:20.938479 master-0 kubenswrapper[4167]: I0217 15:00:20.936819 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:20.938479 master-0 kubenswrapper[4167]: I0217 15:00:20.936877 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:20.938479 master-0 kubenswrapper[4167]: I0217 15:00:20.936903 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:20.941247 master-0 kubenswrapper[4167]: I0217 15:00:20.941046 4167 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="127e7d6cc6eb018b1d6cae8de4b39737caa9da91bed2d8e85c54fc82de9aac1a" exitCode=0
Feb 17 15:00:20.941247 master-0 kubenswrapper[4167]: I0217 15:00:20.941138 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerDied","Data":"127e7d6cc6eb018b1d6cae8de4b39737caa9da91bed2d8e85c54fc82de9aac1a"}
Feb 17 15:00:20.941359 master-0 kubenswrapper[4167]: I0217 15:00:20.941242 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:20.942554 master-0 kubenswrapper[4167]: I0217 15:00:20.942382 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:20.942554 master-0 kubenswrapper[4167]: I0217 15:00:20.942445 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:20.942554 master-0 kubenswrapper[4167]: I0217 15:00:20.942473 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:20.945502 master-0 kubenswrapper[4167]: I0217 15:00:20.943514 4167 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="65c55fab648b7cfa009d957ded77827dafa84ec5b9a039dcd2a3ab2e04462ef9" exitCode=1
Feb 17 15:00:20.945502 master-0 kubenswrapper[4167]: I0217 15:00:20.943551 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"65c55fab648b7cfa009d957ded77827dafa84ec5b9a039dcd2a3ab2e04462ef9"}
Feb 17 15:00:20.945502 master-0 kubenswrapper[4167]: I0217 15:00:20.945238 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:20.945940 master-0 kubenswrapper[4167]: I0217 15:00:20.945916 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:20.945940 master-0 kubenswrapper[4167]: I0217 15:00:20.945938 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:20.946023 master-0 kubenswrapper[4167]: I0217 15:00:20.945946 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:21.243586 master-0 kubenswrapper[4167]: E0217 15:00:21.243492 4167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s"
Feb 17 15:00:21.587640 master-0 kubenswrapper[4167]: I0217 15:00:21.587380 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:21.589218 master-0 kubenswrapper[4167]: I0217 15:00:21.589124 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:21.589361 master-0 kubenswrapper[4167]: I0217 15:00:21.589237 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:21.589361 master-0 kubenswrapper[4167]: I0217 15:00:21.589257 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:21.589361 master-0 kubenswrapper[4167]: I0217 15:00:21.589349 4167 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 17 15:00:21.953498 master-0 kubenswrapper[4167]: I0217 15:00:21.953393 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"2609f5414599cc846c5bc59d12f88634dafa03f2f1a0b4805e5779131227e7b6"}
Feb 17 15:00:21.953498 master-0 kubenswrapper[4167]: I0217 15:00:21.953444 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:21.953498 master-0 kubenswrapper[4167]: I0217 15:00:21.953476 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:21.954542 master-0 kubenswrapper[4167]: I0217 15:00:21.954516 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:21.954585 master-0 kubenswrapper[4167]: I0217 15:00:21.954558 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:21.954585 master-0 kubenswrapper[4167]: I0217 15:00:21.954574 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:21.954734 master-0 kubenswrapper[4167]: I0217 15:00:21.954696 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:21.954734 master-0 kubenswrapper[4167]: I0217 15:00:21.954733 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:21.954797 master-0 kubenswrapper[4167]: I0217 15:00:21.954747 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:22.715815 master-0 kubenswrapper[4167]: E0217 15:00:22.713438 4167 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 17 15:00:22.715815 master-0 kubenswrapper[4167]: I0217 15:00:22.713639 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:23.271851 master-0 kubenswrapper[4167]: I0217 15:00:23.271693 4167 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 17 15:00:23.289338 master-0 kubenswrapper[4167]: I0217 15:00:23.289283 4167 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 17 15:00:23.649952 master-0 kubenswrapper[4167]: I0217 15:00:23.649903 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:23.791612 master-0 kubenswrapper[4167]: W0217 15:00:23.791571 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 17 15:00:23.791612 master-0 kubenswrapper[4167]: E0217 15:00:23.791617 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 17 15:00:23.960089 master-0 kubenswrapper[4167]: I0217 15:00:23.959264 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"8e4f485693ac9a91f7bc7a84cdde902f639454acfd53f8608408575f632d2ecf"}
Feb 17 15:00:23.960089 master-0 kubenswrapper[4167]: I0217 15:00:23.959356 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:23.960376 master-0 kubenswrapper[4167]: I0217 15:00:23.960336 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:23.960448 master-0 kubenswrapper[4167]: I0217 15:00:23.960384 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:23.960448 master-0 kubenswrapper[4167]: I0217 15:00:23.960397 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:23.960760 master-0 kubenswrapper[4167]: I0217 15:00:23.960730 4167 scope.go:117] "RemoveContainer" containerID="65c55fab648b7cfa009d957ded77827dafa84ec5b9a039dcd2a3ab2e04462ef9"
Feb 17 15:00:24.399296 master-0 kubenswrapper[4167]: I0217 15:00:24.399237 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:24.408143 master-0 kubenswrapper[4167]: I0217 15:00:24.408082 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:24.634607 master-0 kubenswrapper[4167]: I0217 15:00:24.634518 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:24.964713 master-0 kubenswrapper[4167]: I0217 15:00:24.964517 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb"}
Feb 17 15:00:24.964713 master-0 kubenswrapper[4167]: I0217 15:00:24.964598 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:24.964713 master-0 kubenswrapper[4167]: I0217 15:00:24.964563 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:24.966371 master-0 kubenswrapper[4167]: I0217 15:00:24.966331 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:24.966442 master-0 kubenswrapper[4167]: I0217 15:00:24.966375 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:24.966442 master-0 kubenswrapper[4167]: I0217 15:00:24.966392 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:24.968156 master-0 kubenswrapper[4167]: I0217 15:00:24.968126 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"e54ca6ceeabac12699eb8a3fc41f19416c7ec8d207ac963a337daa3c35a8bc0b"}
Feb 17 15:00:24.968245 master-0 kubenswrapper[4167]: I0217 15:00:24.968212 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:24.969054 master-0 kubenswrapper[4167]: I0217 15:00:24.969020 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:24.969054 master-0 kubenswrapper[4167]: I0217 15:00:24.969052 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:24.969155 master-0 kubenswrapper[4167]: I0217 15:00:24.969065 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:25.193971 master-0 kubenswrapper[4167]: I0217 15:00:25.193900 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:25.198958 master-0 kubenswrapper[4167]: I0217 15:00:25.198919 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:25.637896 master-0 kubenswrapper[4167]: I0217 15:00:25.637817 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:25.971793 master-0 kubenswrapper[4167]: I0217 15:00:25.971617 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:25.972015 master-0 kubenswrapper[4167]: I0217 15:00:25.971813 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:25.972015 master-0 kubenswrapper[4167]: I0217 15:00:25.971617 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:25.973061 master-0 kubenswrapper[4167]: I0217 15:00:25.972865 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:25.973139 master-0 kubenswrapper[4167]: I0217 15:00:25.973123 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:25.973188 master-0 kubenswrapper[4167]: I0217 15:00:25.972971 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:25.973219 master-0 kubenswrapper[4167]: I0217 15:00:25.973188 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:25.973219 master-0 kubenswrapper[4167]: I0217 15:00:25.973206 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:25.973725 master-0 kubenswrapper[4167]: I0217 15:00:25.973678 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:25.978766 master-0 kubenswrapper[4167]: I0217 15:00:25.978717 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:00:26.131526 master-0 kubenswrapper[4167]: W0217 15:00:26.131400 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 17 15:00:26.131526 master-0 kubenswrapper[4167]: E0217 15:00:26.131503 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 17 15:00:26.175381 master-0 kubenswrapper[4167]: I0217 15:00:26.175239 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:26.182182 master-0 kubenswrapper[4167]: E0217 15:00:26.182049 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b778a4f402 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.629294082 +0000 UTC m=+1.163958904,LastTimestamp:2026-02-17 15:00:08.629294082 +0000 UTC m=+1.163958904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.186572 master-0 kubenswrapper[4167]: E0217 15:00:26.186345 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b507e60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674090592 +0000 UTC m=+1.208755404,LastTimestamp:2026-02-17 15:00:08.674090592 +0000 UTC m=+1.208755404,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.193898 master-0 kubenswrapper[4167]: E0217 15:00:26.193669 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b50f409 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674120713 +0000 UTC m=+1.208785525,LastTimestamp:2026-02-17 15:00:08.674120713 +0000 UTC m=+1.208785525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.199568 master-0 kubenswrapper[4167]: E0217 15:00:26.199338 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b512c21 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674135073 +0000 UTC m=+1.208799885,LastTimestamp:2026-02-17 15:00:08.674135073 +0000 UTC m=+1.208799885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.205363 master-0 kubenswrapper[4167]: E0217 15:00:26.205164 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b786cd5bdf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.866823135 +0000 UTC m=+1.401487937,LastTimestamp:2026-02-17 15:00:08.866823135 +0000 UTC m=+1.401487937,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.215604 master-0 kubenswrapper[4167]: E0217 15:00:26.215330 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b507e60\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b507e60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674090592 +0000 UTC m=+1.208755404,LastTimestamp:2026-02-17 15:00:08.960013955 +0000 UTC m=+1.494678797,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.221559 master-0 kubenswrapper[4167]: E0217 15:00:26.221390 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b50f409\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b50f409 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674120713 +0000 UTC m=+1.208785525,LastTimestamp:2026-02-17 15:00:08.960049946 +0000 UTC m=+1.494714778,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.225808 master-0 kubenswrapper[4167]: E0217 15:00:26.225672 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b512c21\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b512c21 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674135073 +0000 UTC m=+1.208799885,LastTimestamp:2026-02-17 15:00:08.960067136 +0000 UTC m=+1.494731968,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.230183 master-0 kubenswrapper[4167]: E0217 15:00:26.230079 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b507e60\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b507e60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674090592 +0000 UTC m=+1.208755404,LastTimestamp:2026-02-17 15:00:08.961396146 +0000 UTC m=+1.496060988,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.233979 master-0 kubenswrapper[4167]: E0217 15:00:26.233894 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b50f409\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b50f409 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674120713 +0000 UTC m=+1.208785525,LastTimestamp:2026-02-17 15:00:08.961432667 +0000 UTC m=+1.496097509,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.238406 master-0 kubenswrapper[4167]: E0217 15:00:26.238232 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b512c21\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b512c21 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674135073 +0000 UTC m=+1.208799885,LastTimestamp:2026-02-17 15:00:08.961449387 +0000 UTC m=+1.496114229,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.245383 master-0 kubenswrapper[4167]: E0217 15:00:26.245273 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b507e60\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b507e60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674090592 +0000 UTC m=+1.208755404,LastTimestamp:2026-02-17 15:00:08.961782595 +0000 UTC m=+1.496447427,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.249703 master-0 kubenswrapper[4167]: E0217 15:00:26.249617 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b50f409\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b50f409 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674120713 +0000 UTC m=+1.208785525,LastTimestamp:2026-02-17 15:00:08.961802515 +0000 UTC m=+1.496467347,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.254172 master-0 kubenswrapper[4167]: E0217 15:00:26.253990 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b512c21\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b512c21 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674135073 +0000 UTC m=+1.208799885,LastTimestamp:2026-02-17 15:00:08.961817726 +0000 UTC m=+1.496482558,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.259722 master-0 kubenswrapper[4167]: E0217 15:00:26.259559 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b507e60\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b507e60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674090592 +0000 UTC m=+1.208755404,LastTimestamp:2026-02-17 15:00:08.962803707 +0000 UTC m=+1.497468539,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.264566 master-0 kubenswrapper[4167]: E0217 15:00:26.264420 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b50f409\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b50f409 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674120713 +0000 UTC m=+1.208785525,LastTimestamp:2026-02-17 15:00:08.962824408 +0000 UTC m=+1.497489250,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.269179 master-0 kubenswrapper[4167]: E0217 15:00:26.269044 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b512c21\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b512c21 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674135073 +0000 UTC m=+1.208799885,LastTimestamp:2026-02-17 15:00:08.962840968 +0000 UTC m=+1.497505810,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.277297 master-0 kubenswrapper[4167]: E0217 15:00:26.277126 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b507e60\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b507e60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674090592 +0000 UTC m=+1.208755404,LastTimestamp:2026-02-17 15:00:08.962861168 +0000 UTC m=+1.497526000,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.282114 master-0 kubenswrapper[4167]: E0217 15:00:26.281913 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b50f409\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b50f409 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674120713 +0000 UTC m=+1.208785525,LastTimestamp:2026-02-17 15:00:08.962879789 +0000 UTC m=+1.497544621,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.286115 master-0 kubenswrapper[4167]: E0217 15:00:26.286019 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b512c21\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b512c21 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674135073 +0000 UTC m=+1.208799885,LastTimestamp:2026-02-17 15:00:08.962894649 +0000 UTC m=+1.497559491,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.290972 master-0 kubenswrapper[4167]: E0217 15:00:26.290799 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b507e60\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b507e60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674090592 +0000 UTC m=+1.208755404,LastTimestamp:2026-02-17 15:00:08.964010244 +0000 UTC m=+1.498675086,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.295364 master-0 kubenswrapper[4167]: E0217 15:00:26.295222 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b50f409\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b50f409 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674120713 +0000 UTC m=+1.208785525,LastTimestamp:2026-02-17 15:00:08.964043475 +0000 UTC m=+1.498708317,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:26.299492 master-0 kubenswrapper[4167]: E0217 15:00:26.299360 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b512c21\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b512c21 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674135073 +0000 UTC m=+1.208799885,LastTimestamp:2026-02-17 15:00:08.964063875 +0000 UTC m=+1.498728717,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.304340 master-0 kubenswrapper[4167]: E0217 15:00:26.304215 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b507e60\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b507e60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674090592 +0000 UTC m=+1.208755404,LastTimestamp:2026-02-17 15:00:08.964114346 +0000 UTC m=+1.498779188,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.309809 master-0 kubenswrapper[4167]: E0217 15:00:26.309536 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189510b77b50f409\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189510b77b50f409 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:08.674120713 +0000 UTC m=+1.208785525,LastTimestamp:2026-02-17 15:00:08.964134967 +0000 UTC m=+1.498799799,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.316875 master-0 kubenswrapper[4167]: E0217 15:00:26.316698 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b7e36098cb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:10.419976395 +0000 UTC m=+2.954641237,LastTimestamp:2026-02-17 15:00:10.419976395 +0000 UTC m=+2.954641237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.321727 master-0 kubenswrapper[4167]: E0217 15:00:26.321592 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189510b7e4ef755d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:10.446116189 +0000 UTC m=+2.980781031,LastTimestamp:2026-02-17 15:00:10.446116189 +0000 UTC m=+2.980781031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.325648 master-0 kubenswrapper[4167]: E0217 15:00:26.325532 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189510b7e7102bae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:10.481814446 +0000 UTC m=+3.016479258,LastTimestamp:2026-02-17 15:00:10.481814446 +0000 UTC m=+3.016479258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.329644 master-0 kubenswrapper[4167]: E0217 15:00:26.329511 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189510b7ea869a63 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:10.539907683 +0000 UTC m=+3.074572485,LastTimestamp:2026-02-17 15:00:10.539907683 +0000 UTC m=+3.074572485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.334302 master-0 kubenswrapper[4167]: E0217 15:00:26.334208 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189510b7f1405c4d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:10.652744781 +0000 UTC m=+3.187409623,LastTimestamp:2026-02-17 15:00:10.652744781 +0000 UTC m=+3.187409623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.340488 master-0 kubenswrapper[4167]: E0217 15:00:26.340379 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b8982b683f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" in 3.033s (3.033s including waiting). 
Image size: 459915626 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:13.453166655 +0000 UTC m=+5.987831457,LastTimestamp:2026-02-17 15:00:13.453166655 +0000 UTC m=+5.987831457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.343946 master-0 kubenswrapper[4167]: E0217 15:00:26.343821 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b8a59b1a67 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:13.678590567 +0000 UTC m=+6.213255369,LastTimestamp:2026-02-17 15:00:13.678590567 +0000 UTC m=+6.213255369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.347764 master-0 kubenswrapper[4167]: E0217 15:00:26.347665 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b8aa25833f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:13.754770239 +0000 UTC m=+6.289435041,LastTimestamp:2026-02-17 15:00:13.754770239 +0000 UTC m=+6.289435041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.352471 master-0 kubenswrapper[4167]: E0217 15:00:26.352274 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b8b2cbcd49 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:13.899885897 +0000 UTC m=+6.434550699,LastTimestamp:2026-02-17 15:00:13.899885897 +0000 UTC m=+6.434550699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.357565 master-0 kubenswrapper[4167]: E0217 15:00:26.357413 4167 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b8bdf528b0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:14.087145648 +0000 UTC m=+6.621810450,LastTimestamp:2026-02-17 15:00:14.087145648 +0000 UTC m=+6.621810450,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.362780 master-0 kubenswrapper[4167]: E0217 15:00:26.362605 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b8bea235a3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:14.098486691 +0000 UTC m=+6.633151493,LastTimestamp:2026-02-17 15:00:14.098486691 +0000 UTC m=+6.633151493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.368093 master-0 kubenswrapper[4167]: E0217 15:00:26.367910 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189510b8b2cbcd49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b8b2cbcd49 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:13.899885897 +0000 UTC m=+6.434550699,LastTimestamp:2026-02-17 15:00:14.884278083 +0000 UTC m=+7.418942885,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.372751 master-0 kubenswrapper[4167]: E0217 15:00:26.372649 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189510b8bdf528b0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b8bdf528b0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:14.087145648 +0000 UTC m=+6.621810450,LastTimestamp:2026-02-17 15:00:15.050903072 +0000 UTC m=+7.585567874,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.377094 master-0 kubenswrapper[4167]: E0217 15:00:26.376918 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189510b8bea235a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b8bea235a3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:14.098486691 +0000 UTC m=+6.633151493,LastTimestamp:2026-02-17 15:00:15.063763279 +0000 UTC m=+7.598428081,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.381577 master-0 kubenswrapper[4167]: E0217 15:00:26.381440 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b9293f73f2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:15.88717669 +0000 UTC m=+8.421841492,LastTimestamp:2026-02-17 15:00:15.88717669 +0000 UTC m=+8.421841492,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.385696 master-0 kubenswrapper[4167]: E0217 15:00:26.385621 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189510b9293f73f2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b9293f73f2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod 
kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:15.88717669 +0000 UTC m=+8.421841492,LastTimestamp:2026-02-17 15:00:16.919588307 +0000 UTC m=+9.454253109,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.389670 master-0 kubenswrapper[4167]: E0217 15:00:26.389496 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189510ba143da374 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\" in 9.289s (9.289s including waiting). 
Image size: 524042902 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:19.82970354 +0000 UTC m=+12.364368342,LastTimestamp:2026-02-17 15:00:19.82970354 +0000 UTC m=+12.364368342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.393957 master-0 kubenswrapper[4167]: E0217 15:00:26.393848 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189510ba1b1bf800 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" in 9.498s (9.498s including waiting). 
Image size: 938665460 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:19.944937472 +0000 UTC m=+12.479602274,LastTimestamp:2026-02-17 15:00:19.944937472 +0000 UTC m=+12.479602274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.397915 master-0 kubenswrapper[4167]: E0217 15:00:26.397823 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189510ba1e5ec74c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" in 9.517s (9.517s including waiting). 
Image size: 938665460 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:19.999647564 +0000 UTC m=+12.534312386,LastTimestamp:2026-02-17 15:00:19.999647564 +0000 UTC m=+12.534312386,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.401731 master-0 kubenswrapper[4167]: E0217 15:00:26.401582 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189510ba22f40a48 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.07653844 +0000 UTC m=+12.611203262,LastTimestamp:2026-02-17 15:00:20.07653844 +0000 UTC m=+12.611203262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.405400 master-0 kubenswrapper[4167]: E0217 15:00:26.405297 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189510ba25746ac1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.118506177 +0000 UTC m=+12.653170979,LastTimestamp:2026-02-17 15:00:20.118506177 +0000 UTC m=+12.653170979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.409742 master-0 kubenswrapper[4167]: E0217 15:00:26.409629 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189510ba25f446c6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.126885574 +0000 UTC m=+12.661550376,LastTimestamp:2026-02-17 15:00:20.126885574 +0000 UTC m=+12.661550376,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.413048 master-0 kubenswrapper[4167]: E0217 15:00:26.412912 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189510ba285762da kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" in 9.514s (9.514s including waiting). Image size: 938665460 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.166935258 +0000 UTC m=+12.701600090,LastTimestamp:2026-02-17 15:00:20.166935258 +0000 UTC m=+12.701600090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.418980 master-0 kubenswrapper[4167]: E0217 15:00:26.418821 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189510ba28d540e6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.175184102 +0000 UTC m=+12.709848934,LastTimestamp:2026-02-17 15:00:20.175184102 +0000 UTC m=+12.709848934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.423521 master-0 kubenswrapper[4167]: E0217 15:00:26.423329 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189510ba290aa05e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.17868195 +0000 UTC m=+12.713346752,LastTimestamp:2026-02-17 15:00:20.17868195 +0000 UTC m=+12.713346752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.427300 master-0 kubenswrapper[4167]: E0217 15:00:26.427176 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189510ba2cd431c6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.242223558 +0000 UTC 
m=+12.776888360,LastTimestamp:2026-02-17 15:00:20.242223558 +0000 UTC m=+12.776888360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.431104 master-0 kubenswrapper[4167]: E0217 15:00:26.430978 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189510ba2cf1145b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.244116571 +0000 UTC m=+12.778781373,LastTimestamp:2026-02-17 15:00:20.244116571 +0000 UTC m=+12.778781373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.434744 master-0 kubenswrapper[4167]: E0217 15:00:26.434597 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189510ba2da03c39 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.255595577 +0000 UTC m=+12.790260379,LastTimestamp:2026-02-17 15:00:20.255595577 +0000 UTC m=+12.790260379,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.438301 master-0 kubenswrapper[4167]: E0217 15:00:26.438192 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189510ba34ea193d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.377876797 +0000 UTC m=+12.912541609,LastTimestamp:2026-02-17 15:00:20.377876797 +0000 UTC m=+12.912541609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.442438 master-0 kubenswrapper[4167]: E0217 15:00:26.442345 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189510ba35596619 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.385170969 +0000 UTC m=+12.919835771,LastTimestamp:2026-02-17 15:00:20.385170969 +0000 UTC m=+12.919835771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.446068 master-0 kubenswrapper[4167]: E0217 15:00:26.445942 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189510ba377580dc kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.42056726 +0000 UTC m=+12.955232082,LastTimestamp:2026-02-17 15:00:20.42056726 +0000 UTC m=+12.955232082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.450626 master-0 kubenswrapper[4167]: E0217 15:00:26.450558 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-master-0-master-0.189510ba387850cb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.437528779 +0000 UTC m=+12.972193591,LastTimestamp:2026-02-17 15:00:20.437528779 +0000 UTC m=+12.972193591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.454528 master-0 kubenswrapper[4167]: E0217 15:00:26.454426 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189510ba56ba5d96 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.94517391 +0000 UTC m=+13.479838712,LastTimestamp:2026-02-17 15:00:20.94517391 +0000 UTC m=+13.479838712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.459603 master-0 kubenswrapper[4167]: E0217 15:00:26.459495 4167 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189510ba68c046fe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:21.24755123 +0000 UTC m=+13.782216032,LastTimestamp:2026-02-17 15:00:21.24755123 +0000 UTC m=+13.782216032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.464317 master-0 kubenswrapper[4167]: E0217 15:00:26.464197 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189510ba6ae74669 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:21.283661417 +0000 UTC m=+13.818326209,LastTimestamp:2026-02-17 15:00:21.283661417 +0000 UTC m=+13.818326209,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.469014 master-0 kubenswrapper[4167]: E0217 15:00:26.468878 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189510ba6afc8354 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:21.285053268 +0000 UTC m=+13.819718070,LastTimestamp:2026-02-17 15:00:21.285053268 +0000 UTC m=+13.819718070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.474175 master-0 kubenswrapper[4167]: E0217 15:00:26.474053 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189510bac6cf19a7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\" in 2.581s (2.581s including waiting). Image size: 500068323 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:22.825580967 +0000 UTC m=+15.360245769,LastTimestamp:2026-02-17 15:00:22.825580967 +0000 UTC m=+15.360245769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.478008 master-0 kubenswrapper[4167]: E0217 15:00:26.477859 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189510bad6849bb9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:23.089134521 +0000 UTC m=+15.623799323,LastTimestamp:2026-02-17 15:00:23.089134521 +0000 UTC m=+15.623799323,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.481969 master-0 kubenswrapper[4167]: E0217 15:00:26.481885 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189510bad8c21a21 kube-system 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:23.126719009 +0000 UTC m=+15.661383811,LastTimestamp:2026-02-17 15:00:23.126719009 +0000 UTC m=+15.661383811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.486424 master-0 kubenswrapper[4167]: E0217 15:00:26.486293 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189510bb0e0b8b6b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:24.020724587 +0000 UTC m=+16.555389429,LastTimestamp:2026-02-17 15:00:24.020724587 +0000 UTC m=+16.555389429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.490824 master-0 kubenswrapper[4167]: E0217 15:00:26.490698 4167 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189510bb1981cc7d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" in 2.927s (2.927s including waiting). Image size: 509806416 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:24.213023869 +0000 UTC m=+16.747688671,LastTimestamp:2026-02-17 15:00:24.213023869 +0000 UTC m=+16.747688671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.495619 master-0 kubenswrapper[4167]: E0217 15:00:26.495413 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189510ba28d540e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189510ba28d540e6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 
15:00:20.175184102 +0000 UTC m=+12.709848934,LastTimestamp:2026-02-17 15:00:24.213189873 +0000 UTC m=+16.747854675,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.499777 master-0 kubenswrapper[4167]: E0217 15:00:26.499680 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189510ba2cd431c6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189510ba2cd431c6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:20.242223558 +0000 UTC m=+12.776888360,LastTimestamp:2026-02-17 15:00:24.242384855 +0000 UTC m=+16.777049657,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.503335 master-0 kubenswrapper[4167]: E0217 15:00:26.503227 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189510bb2449057e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:24.393852286 +0000 UTC m=+16.928517088,LastTimestamp:2026-02-17 15:00:24.393852286 +0000 UTC m=+16.928517088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.506787 master-0 kubenswrapper[4167]: E0217 15:00:26.506703 4167 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189510bb25ea3826 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:24.421193766 +0000 UTC m=+16.955858578,LastTimestamp:2026-02-17 15:00:24.421193766 +0000 UTC m=+16.955858578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:26.554439 master-0 kubenswrapper[4167]: W0217 15:00:26.554363 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list 
resource "nodes" in API group "" at the cluster scope Feb 17 15:00:26.554439 master-0 kubenswrapper[4167]: E0217 15:00:26.554417 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 17 15:00:26.634563 master-0 kubenswrapper[4167]: I0217 15:00:26.634441 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 15:00:26.973776 master-0 kubenswrapper[4167]: I0217 15:00:26.973665 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:00:26.973776 master-0 kubenswrapper[4167]: I0217 15:00:26.973771 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:00:26.974941 master-0 kubenswrapper[4167]: I0217 15:00:26.974880 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:00:26.975065 master-0 kubenswrapper[4167]: I0217 15:00:26.974926 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:00:26.975065 master-0 kubenswrapper[4167]: I0217 15:00:26.974966 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:00:26.975228 master-0 kubenswrapper[4167]: I0217 15:00:26.975096 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:00:26.975228 master-0 kubenswrapper[4167]: I0217 15:00:26.975148 4167 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:00:26.975228 master-0 kubenswrapper[4167]: I0217 15:00:26.975170 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:00:27.635084 master-0 kubenswrapper[4167]: I0217 15:00:27.635028 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 15:00:27.857071 master-0 kubenswrapper[4167]: I0217 15:00:27.856987 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:00:27.858050 master-0 kubenswrapper[4167]: I0217 15:00:27.857974 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:00:27.858050 master-0 kubenswrapper[4167]: I0217 15:00:27.858014 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:00:27.858050 master-0 kubenswrapper[4167]: I0217 15:00:27.858026 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:00:27.858445 master-0 kubenswrapper[4167]: I0217 15:00:27.858389 4167 scope.go:117] "RemoveContainer" containerID="6c8e5886b9eb6a295069ba57584ee93950597fa9c820f8edefcae986e6c1a551" Feb 17 15:00:27.869825 master-0 kubenswrapper[4167]: E0217 15:00:27.869630 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189510b8b2cbcd49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b8b2cbcd49 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:13.899885897 +0000 UTC m=+6.434550699,LastTimestamp:2026-02-17 15:00:27.860472032 +0000 UTC m=+20.395136854,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:00:27.977257 master-0 kubenswrapper[4167]: I0217 15:00:27.976780 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:00:27.978255 master-0 kubenswrapper[4167]: I0217 15:00:27.978214 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:00:27.978255 master-0 kubenswrapper[4167]: I0217 15:00:27.978246 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:00:27.978255 master-0 kubenswrapper[4167]: I0217 15:00:27.978256 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:00:28.113246 master-0 kubenswrapper[4167]: E0217 15:00:28.113060 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189510b8bdf528b0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b8bdf528b0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:14.087145648 +0000 UTC m=+6.621810450,LastTimestamp:2026-02-17 15:00:28.105738517 +0000 UTC m=+20.640403319,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:28.157901 master-0 kubenswrapper[4167]: E0217 15:00:28.157725 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189510b8bea235a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b8bea235a3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:14.098486691 +0000 UTC m=+6.633151493,LastTimestamp:2026-02-17 15:00:28.15241634 +0000 UTC m=+20.687081152,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:28.250838 master-0 kubenswrapper[4167]: E0217 15:00:28.250673 4167 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 17 15:00:28.466294 master-0 kubenswrapper[4167]: W0217 15:00:28.466194 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:28.466294 master-0 kubenswrapper[4167]: E0217 15:00:28.466269 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 17 15:00:28.636549 master-0 kubenswrapper[4167]: I0217 15:00:28.636128 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:28.868052 master-0 kubenswrapper[4167]: E0217 15:00:28.867957 4167 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 17 15:00:28.982082 master-0 kubenswrapper[4167]: I0217 15:00:28.980788 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log"
Feb 17 15:00:28.982082 master-0 kubenswrapper[4167]: I0217 15:00:28.981559 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/1.log"
Feb 17 15:00:28.982721 master-0 kubenswrapper[4167]: I0217 15:00:28.982600 4167 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="7ee371ff3fea654567b16adfcbd47a6ebbd168a2f1e33c4562b559cfe498844a" exitCode=1
Feb 17 15:00:28.982721 master-0 kubenswrapper[4167]: I0217 15:00:28.982646 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"7ee371ff3fea654567b16adfcbd47a6ebbd168a2f1e33c4562b559cfe498844a"}
Feb 17 15:00:28.982721 master-0 kubenswrapper[4167]: I0217 15:00:28.982679 4167 scope.go:117] "RemoveContainer" containerID="6c8e5886b9eb6a295069ba57584ee93950597fa9c820f8edefcae986e6c1a551"
Feb 17 15:00:28.982841 master-0 kubenswrapper[4167]: I0217 15:00:28.982819 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:28.984049 master-0 kubenswrapper[4167]: I0217 15:00:28.984003 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:28.984099 master-0 kubenswrapper[4167]: I0217 15:00:28.984058 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:28.984099 master-0 kubenswrapper[4167]: I0217 15:00:28.984075 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:28.984658 master-0 kubenswrapper[4167]: I0217 15:00:28.984536 4167 scope.go:117] "RemoveContainer" containerID="7ee371ff3fea654567b16adfcbd47a6ebbd168a2f1e33c4562b559cfe498844a"
Feb 17 15:00:28.984763 master-0 kubenswrapper[4167]: E0217 15:00:28.984722 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e"
Feb 17 15:00:28.994057 master-0 kubenswrapper[4167]: E0217 15:00:28.993941 4167 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189510b9293f73f2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189510b9293f73f2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:00:15.88717669 +0000 UTC m=+8.421841492,LastTimestamp:2026-02-17 15:00:28.984688638 +0000 UTC m=+21.519353450,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:00:29.637419 master-0 kubenswrapper[4167]: I0217 15:00:29.637351 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:29.714068 master-0 kubenswrapper[4167]: I0217 15:00:29.714004 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:29.714947 master-0 kubenswrapper[4167]: I0217 15:00:29.714915 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:29.714947 master-0 kubenswrapper[4167]: I0217 15:00:29.714940 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:29.714947 master-0 kubenswrapper[4167]: I0217 15:00:29.714948 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:29.715061 master-0 kubenswrapper[4167]: I0217 15:00:29.714991 4167 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 17 15:00:29.719881 master-0 kubenswrapper[4167]: E0217 15:00:29.719832 4167 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 17 15:00:29.986765 master-0 kubenswrapper[4167]: I0217 15:00:29.986676 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log"
Feb 17 15:00:30.485422 master-0 kubenswrapper[4167]: I0217 15:00:30.485356 4167 csr.go:261] certificate signing request csr-92llp is approved, waiting to be issued
Feb 17 15:00:30.636076 master-0 kubenswrapper[4167]: I0217 15:00:30.635929 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:31.395756 master-0 kubenswrapper[4167]: I0217 15:00:31.395681 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:31.396424 master-0 kubenswrapper[4167]: I0217 15:00:31.396189 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:31.397289 master-0 kubenswrapper[4167]: I0217 15:00:31.397250 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:31.397289 master-0 kubenswrapper[4167]: I0217 15:00:31.397290 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:31.397401 master-0 kubenswrapper[4167]: I0217 15:00:31.397304 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:31.402725 master-0 kubenswrapper[4167]: I0217 15:00:31.402684 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:31.639205 master-0 kubenswrapper[4167]: I0217 15:00:31.638986 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:31.992507 master-0 kubenswrapper[4167]: I0217 15:00:31.992339 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:31.993659 master-0 kubenswrapper[4167]: I0217 15:00:31.993618 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:31.993707 master-0 kubenswrapper[4167]: I0217 15:00:31.993666 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:31.993707 master-0 kubenswrapper[4167]: I0217 15:00:31.993685 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:32.637599 master-0 kubenswrapper[4167]: I0217 15:00:32.637492 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:33.637986 master-0 kubenswrapper[4167]: I0217 15:00:33.637891 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:34.637147 master-0 kubenswrapper[4167]: I0217 15:00:34.637037 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:35.259582 master-0 kubenswrapper[4167]: E0217 15:00:35.259442 4167 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 17 15:00:35.637270 master-0 kubenswrapper[4167]: I0217 15:00:35.637174 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:36.179861 master-0 kubenswrapper[4167]: I0217 15:00:36.179788 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:36.180106 master-0 kubenswrapper[4167]: I0217 15:00:36.179934 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:36.181776 master-0 kubenswrapper[4167]: I0217 15:00:36.181557 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:36.181776 master-0 kubenswrapper[4167]: I0217 15:00:36.181601 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:36.181776 master-0 kubenswrapper[4167]: I0217 15:00:36.181616 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:36.185641 master-0 kubenswrapper[4167]: I0217 15:00:36.185582 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:00:36.637499 master-0 kubenswrapper[4167]: I0217 15:00:36.637402 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:36.720344 master-0 kubenswrapper[4167]: I0217 15:00:36.720187 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:36.721683 master-0 kubenswrapper[4167]: I0217 15:00:36.721639 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:36.721683 master-0 kubenswrapper[4167]: I0217 15:00:36.721681 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:36.721839 master-0 kubenswrapper[4167]: I0217 15:00:36.721694 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:36.721839 master-0 kubenswrapper[4167]: I0217 15:00:36.721754 4167 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 17 15:00:36.727249 master-0 kubenswrapper[4167]: E0217 15:00:36.727194 4167 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 17 15:00:37.003005 master-0 kubenswrapper[4167]: I0217 15:00:37.002870 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:37.003626 master-0 kubenswrapper[4167]: I0217 15:00:37.003592 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:37.003701 master-0 kubenswrapper[4167]: I0217 15:00:37.003641 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:37.003701 master-0 kubenswrapper[4167]: I0217 15:00:37.003656 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:37.637767 master-0 kubenswrapper[4167]: I0217 15:00:37.637691 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:38.637369 master-0 kubenswrapper[4167]: I0217 15:00:38.637244 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:38.868732 master-0 kubenswrapper[4167]: E0217 15:00:38.868629 4167 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 17 15:00:39.638673 master-0 kubenswrapper[4167]: I0217 15:00:39.638592 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:40.614810 master-0 kubenswrapper[4167]: W0217 15:00:40.614678 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 17 15:00:40.614810 master-0 kubenswrapper[4167]: E0217 15:00:40.614765 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 17 15:00:40.637156 master-0 kubenswrapper[4167]: I0217 15:00:40.637062 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:40.890011 master-0 kubenswrapper[4167]: W0217 15:00:40.889810 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 17 15:00:40.890011 master-0 kubenswrapper[4167]: E0217 15:00:40.889888 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 17 15:00:41.637349 master-0 kubenswrapper[4167]: I0217 15:00:41.637233 4167 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 15:00:42.266813 master-0 kubenswrapper[4167]: E0217 15:00:42.266732 4167 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 17 15:00:42.362298 master-0 kubenswrapper[4167]: W0217 15:00:42.362248 4167 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 17 15:00:42.362298 master-0 kubenswrapper[4167]: E0217 15:00:42.362307 4167 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 17 15:00:42.376970 master-0 kubenswrapper[4167]: I0217 15:00:42.376891 4167 csr.go:257] certificate signing request csr-92llp is issued
Feb 17 15:00:42.430691 master-0 kubenswrapper[4167]: I0217 15:00:42.430617 4167 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 17 15:00:42.641969 master-0 kubenswrapper[4167]: I0217 15:00:42.641890 4167 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 17 15:00:42.656801 master-0 kubenswrapper[4167]: I0217 15:00:42.656759 4167 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 17 15:00:42.713766 master-0 kubenswrapper[4167]: I0217 15:00:42.713722 4167 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 17 15:00:42.977575 master-0 kubenswrapper[4167]: I0217 15:00:42.977403 4167 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 17 15:00:42.977575 master-0 kubenswrapper[4167]: E0217 15:00:42.977473 4167 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 17 15:00:43.000850 master-0 kubenswrapper[4167]: I0217 15:00:43.000765 4167 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 17 15:00:43.016595 master-0 kubenswrapper[4167]: I0217 15:00:43.016539 4167 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 17 15:00:43.075493 master-0 kubenswrapper[4167]: I0217 15:00:43.075410 4167 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 17 15:00:43.345039 master-0 kubenswrapper[4167]: I0217 15:00:43.344879 4167 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 17 15:00:43.345039 master-0 kubenswrapper[4167]: E0217 15:00:43.344925 4167 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 17 15:00:43.379132 master-0 kubenswrapper[4167]: I0217 15:00:43.379045 4167 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-18 14:51:47 +0000 UTC, rotation deadline is 2026-02-18 10:11:05.849146317 +0000 UTC
Feb 17 15:00:43.379132 master-0 kubenswrapper[4167]: I0217 15:00:43.379107 4167 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h10m22.47004796s for next certificate rotation
Feb 17 15:00:43.441366 master-0 kubenswrapper[4167]: I0217 15:00:43.441310 4167 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 17 15:00:43.458833 master-0 kubenswrapper[4167]: I0217 15:00:43.458778 4167 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 17 15:00:43.516841 master-0 kubenswrapper[4167]: I0217 15:00:43.516787 4167 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 17 15:00:43.728007 master-0 kubenswrapper[4167]: I0217 15:00:43.727908 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:43.729399 master-0 kubenswrapper[4167]: I0217 15:00:43.729354 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:43.729399 master-0 kubenswrapper[4167]: I0217 15:00:43.729395 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:43.729642 master-0 kubenswrapper[4167]: I0217 15:00:43.729411 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:43.729642 master-0 kubenswrapper[4167]: I0217 15:00:43.729502 4167 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 17 15:00:43.740086 master-0 kubenswrapper[4167]: I0217 15:00:43.740024 4167 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Feb 17 15:00:43.740086 master-0 kubenswrapper[4167]: E0217 15:00:43.740089 4167 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Feb 17 15:00:43.752152 master-0 kubenswrapper[4167]: E0217 15:00:43.752094 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:43.853270 master-0 kubenswrapper[4167]: E0217 15:00:43.853150 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:43.857601 master-0 kubenswrapper[4167]: I0217 15:00:43.857547 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:00:43.858723 master-0 kubenswrapper[4167]: I0217 15:00:43.858676 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:00:43.858723 master-0 kubenswrapper[4167]: I0217 15:00:43.858711 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:00:43.858723 master-0 kubenswrapper[4167]: I0217 15:00:43.858721 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:00:43.859060 master-0 kubenswrapper[4167]: I0217 15:00:43.859033 4167 scope.go:117] "RemoveContainer" containerID="7ee371ff3fea654567b16adfcbd47a6ebbd168a2f1e33c4562b559cfe498844a"
Feb 17 15:00:43.859227 master-0 kubenswrapper[4167]: E0217 15:00:43.859186 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e"
Feb 17 15:00:43.953624 master-0 kubenswrapper[4167]: E0217 15:00:43.953551 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:44.054810 master-0 kubenswrapper[4167]: E0217 15:00:44.054662 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:44.155724 master-0 kubenswrapper[4167]: E0217 15:00:44.155641 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:44.256678 master-0 kubenswrapper[4167]: E0217 15:00:44.256616 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:44.357678 master-0 kubenswrapper[4167]: E0217 15:00:44.357534 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:44.458042 master-0 kubenswrapper[4167]: E0217 15:00:44.457930 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:44.559171 master-0 kubenswrapper[4167]: E0217 15:00:44.559090 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:44.654768 master-0 kubenswrapper[4167]: I0217 15:00:44.654705 4167 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 17 15:00:44.660044 master-0 kubenswrapper[4167]: E0217 15:00:44.659998 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:44.664971 master-0 kubenswrapper[4167]: I0217 15:00:44.664909 4167 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 17 15:00:44.760761 master-0 kubenswrapper[4167]: E0217 15:00:44.760691 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:44.861947 master-0 kubenswrapper[4167]: E0217 15:00:44.861853 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:44.963237 master-0 kubenswrapper[4167]: E0217 15:00:44.963021 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:45.063469 master-0 kubenswrapper[4167]: E0217 15:00:45.063386 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:45.163595 master-0 kubenswrapper[4167]: E0217 15:00:45.163529 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:45.264366 master-0 kubenswrapper[4167]: E0217 15:00:45.264187 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:45.365406 master-0 kubenswrapper[4167]: E0217 15:00:45.365219 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:45.466490 master-0 kubenswrapper[4167]: E0217 15:00:45.466360 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:45.566950 master-0 kubenswrapper[4167]: E0217 15:00:45.566755 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:45.667511 master-0 kubenswrapper[4167]: E0217 15:00:45.667357 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:45.768219 master-0 kubenswrapper[4167]: E0217 15:00:45.768106 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:45.868771 master-0 kubenswrapper[4167]: E0217 15:00:45.868673 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:45.969263 master-0 kubenswrapper[4167]: E0217 15:00:45.969100 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:46.069784 master-0 kubenswrapper[4167]: E0217 15:00:46.069684 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:46.170557 master-0 kubenswrapper[4167]: E0217 15:00:46.170282 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:46.271406 master-0 kubenswrapper[4167]: E0217 15:00:46.271297 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:46.372023 master-0 kubenswrapper[4167]: E0217 15:00:46.371858 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:46.472948 master-0 kubenswrapper[4167]: E0217 15:00:46.472709 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:46.574228 master-0 kubenswrapper[4167]: E0217 15:00:46.574084 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:46.674793 master-0 kubenswrapper[4167]: E0217 15:00:46.674608 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:46.775871 master-0 kubenswrapper[4167]: E0217 15:00:46.775631 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:46.876785 master-0 kubenswrapper[4167]: E0217 15:00:46.876637 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:46.977780 master-0 kubenswrapper[4167]: E0217 15:00:46.977655 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:47.078637 master-0 kubenswrapper[4167]: E0217 15:00:47.078410 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:47.179392 master-0 kubenswrapper[4167]: E0217 15:00:47.179251 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:47.280215 master-0 kubenswrapper[4167]: E0217 15:00:47.280112 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:47.381252 master-0 kubenswrapper[4167]: E0217 15:00:47.381160 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:47.482255 master-0 kubenswrapper[4167]: E0217 15:00:47.482149 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:47.583433 master-0 kubenswrapper[4167]: E0217 15:00:47.583327 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:47.684742 master-0 kubenswrapper[4167]: E0217 15:00:47.684557 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:47.785091 master-0 kubenswrapper[4167]: E0217 15:00:47.784965 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:47.885691 master-0 kubenswrapper[4167]: E0217 15:00:47.885574 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:47.986705 master-0 kubenswrapper[4167]: E0217 15:00:47.986494 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:48.087900 master-0 kubenswrapper[4167]: E0217 15:00:48.087802 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:48.192068 master-0 kubenswrapper[4167]: E0217 15:00:48.191975 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:48.293222 master-0 kubenswrapper[4167]: E0217 15:00:48.293033 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:48.394057 master-0 kubenswrapper[4167]: E0217 15:00:48.393888 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:48.494226 master-0 kubenswrapper[4167]: E0217 15:00:48.494058 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:48.594627 master-0 kubenswrapper[4167]: E0217 15:00:48.594416 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:48.695577 master-0 kubenswrapper[4167]: E0217 15:00:48.695448 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:48.797375 master-0 kubenswrapper[4167]: E0217 15:00:48.795714 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:48.869808 master-0 kubenswrapper[4167]: E0217 15:00:48.869722 4167 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 17 15:00:48.898437 master-0 kubenswrapper[4167]: E0217 15:00:48.898320 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:48.999081 master-0 kubenswrapper[4167]: E0217 15:00:48.998955 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:49.099733 master-0 kubenswrapper[4167]: E0217 15:00:49.099613 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:49.200307 master-0 kubenswrapper[4167]: E0217 15:00:49.200133 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:49.301005 master-0 kubenswrapper[4167]: E0217 15:00:49.300909 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:49.401597 master-0 kubenswrapper[4167]: E0217 15:00:49.401424 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:49.502870 master-0 kubenswrapper[4167]: E0217 15:00:49.502595 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:49.603274 master-0 kubenswrapper[4167]: E0217 15:00:49.603207 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:49.704124 master-0 kubenswrapper[4167]: E0217 15:00:49.704023 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:49.804951 master-0 kubenswrapper[4167]: E0217 15:00:49.804790 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:49.905536 master-0 kubenswrapper[4167]: E0217 15:00:49.905425 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:50.005792 master-0 kubenswrapper[4167]: E0217 15:00:50.005682 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:50.106085 master-0 kubenswrapper[4167]: E0217 15:00:50.105915 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:50.206158 master-0 kubenswrapper[4167]: E0217 15:00:50.206050 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:50.307193 master-0 kubenswrapper[4167]: E0217 15:00:50.307108 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:00:50.408235 master-0 kubenswrapper[4167]: E0217 15:00:50.408163 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:50.509392 master-0 kubenswrapper[4167]: E0217 15:00:50.509276 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:50.610015 master-0 kubenswrapper[4167]: E0217 15:00:50.609938 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:50.710836 master-0 kubenswrapper[4167]: E0217 15:00:50.710625 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:50.811490 master-0 kubenswrapper[4167]: E0217 15:00:50.811388 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:50.912098 master-0 kubenswrapper[4167]: E0217 15:00:50.912020 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:51.012809 master-0 kubenswrapper[4167]: E0217 15:00:51.012739 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:51.113275 master-0 kubenswrapper[4167]: E0217 15:00:51.113193 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:51.214425 master-0 kubenswrapper[4167]: E0217 15:00:51.214348 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:51.297611 master-0 kubenswrapper[4167]: I0217 15:00:51.297435 4167 csr.go:261] certificate signing request csr-7szw8 is approved, waiting to be issued Feb 17 15:00:51.312434 master-0 kubenswrapper[4167]: I0217 15:00:51.312341 4167 csr.go:257] certificate signing request csr-7szw8 is issued Feb 17 
15:00:51.315444 master-0 kubenswrapper[4167]: E0217 15:00:51.315375 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:51.415907 master-0 kubenswrapper[4167]: E0217 15:00:51.415841 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:51.516255 master-0 kubenswrapper[4167]: E0217 15:00:51.516130 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:51.616717 master-0 kubenswrapper[4167]: E0217 15:00:51.616614 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:51.717177 master-0 kubenswrapper[4167]: E0217 15:00:51.717043 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:51.817367 master-0 kubenswrapper[4167]: E0217 15:00:51.817254 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:51.918575 master-0 kubenswrapper[4167]: E0217 15:00:51.918362 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:52.018771 master-0 kubenswrapper[4167]: E0217 15:00:52.018620 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:52.119679 master-0 kubenswrapper[4167]: E0217 15:00:52.119584 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:52.220154 master-0 kubenswrapper[4167]: E0217 15:00:52.219987 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:52.313423 master-0 kubenswrapper[4167]: I0217 15:00:52.313336 4167 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-18 14:51:47 +0000 UTC, rotation deadline is 2026-02-18 11:00:27.881965676 +0000 UTC Feb 17 15:00:52.313423 master-0 kubenswrapper[4167]: I0217 15:00:52.313404 4167 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h59m35.568572089s for next certificate rotation Feb 17 15:00:52.320663 master-0 kubenswrapper[4167]: E0217 15:00:52.320590 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:52.421139 master-0 kubenswrapper[4167]: E0217 15:00:52.420891 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:52.521833 master-0 kubenswrapper[4167]: E0217 15:00:52.521597 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:52.622171 master-0 kubenswrapper[4167]: E0217 15:00:52.622089 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:52.722415 master-0 kubenswrapper[4167]: E0217 15:00:52.722297 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:52.823688 master-0 kubenswrapper[4167]: E0217 15:00:52.823521 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:52.924598 master-0 kubenswrapper[4167]: E0217 15:00:52.924521 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:53.025650 master-0 kubenswrapper[4167]: E0217 15:00:53.025562 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:53.125711 master-0 kubenswrapper[4167]: E0217 15:00:53.125665 4167 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"master-0\" not found" Feb 17 15:00:53.223257 master-0 kubenswrapper[4167]: I0217 15:00:53.223187 4167 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 15:00:53.226343 master-0 kubenswrapper[4167]: E0217 15:00:53.226272 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:53.314606 master-0 kubenswrapper[4167]: I0217 15:00:53.314532 4167 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-18 14:51:47 +0000 UTC, rotation deadline is 2026-02-18 08:21:12.262224551 +0000 UTC Feb 17 15:00:53.314606 master-0 kubenswrapper[4167]: I0217 15:00:53.314573 4167 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h20m18.947654635s for next certificate rotation Feb 17 15:00:53.326876 master-0 kubenswrapper[4167]: E0217 15:00:53.326798 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:53.427189 master-0 kubenswrapper[4167]: E0217 15:00:53.427033 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:53.527992 master-0 kubenswrapper[4167]: E0217 15:00:53.527919 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:53.628282 master-0 kubenswrapper[4167]: E0217 15:00:53.628170 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:53.729359 master-0 kubenswrapper[4167]: E0217 15:00:53.729149 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:53.830183 master-0 kubenswrapper[4167]: E0217 15:00:53.830094 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:53.930926 
master-0 kubenswrapper[4167]: E0217 15:00:53.930836 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:54.000348 master-0 kubenswrapper[4167]: E0217 15:00:54.000164 4167 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Feb 17 15:00:54.031202 master-0 kubenswrapper[4167]: E0217 15:00:54.031155 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:54.132273 master-0 kubenswrapper[4167]: E0217 15:00:54.132196 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:54.233057 master-0 kubenswrapper[4167]: E0217 15:00:54.232999 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:54.334020 master-0 kubenswrapper[4167]: E0217 15:00:54.333765 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:54.434079 master-0 kubenswrapper[4167]: E0217 15:00:54.433977 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:54.534981 master-0 kubenswrapper[4167]: E0217 15:00:54.534890 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:54.635481 master-0 kubenswrapper[4167]: E0217 15:00:54.635392 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:54.735806 master-0 kubenswrapper[4167]: E0217 15:00:54.735677 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:54.836592 master-0 kubenswrapper[4167]: E0217 15:00:54.836518 4167 kubelet_node_status.go:503] "Error getting 
the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:54.858617 master-0 kubenswrapper[4167]: I0217 15:00:54.857948 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:00:54.859076 master-0 kubenswrapper[4167]: I0217 15:00:54.859051 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:00:54.859590 master-0 kubenswrapper[4167]: I0217 15:00:54.859111 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:00:54.859590 master-0 kubenswrapper[4167]: I0217 15:00:54.859273 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:00:54.859831 master-0 kubenswrapper[4167]: I0217 15:00:54.859801 4167 scope.go:117] "RemoveContainer" containerID="7ee371ff3fea654567b16adfcbd47a6ebbd168a2f1e33c4562b559cfe498844a" Feb 17 15:00:54.937122 master-0 kubenswrapper[4167]: E0217 15:00:54.936949 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:55.037733 master-0 kubenswrapper[4167]: E0217 15:00:55.037671 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:55.084588 master-0 kubenswrapper[4167]: I0217 15:00:55.084534 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 17 15:00:55.085252 master-0 kubenswrapper[4167]: I0217 15:00:55.085202 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"518b836a67d98b0cf5a2e8d843574e61038c30a6058fcd6123417dc9c4975d78"} Feb 17 
15:00:55.138708 master-0 kubenswrapper[4167]: E0217 15:00:55.138572 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:55.239632 master-0 kubenswrapper[4167]: E0217 15:00:55.239394 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:55.340445 master-0 kubenswrapper[4167]: E0217 15:00:55.340317 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:55.441103 master-0 kubenswrapper[4167]: E0217 15:00:55.440994 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:55.541985 master-0 kubenswrapper[4167]: E0217 15:00:55.541802 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:55.642240 master-0 kubenswrapper[4167]: E0217 15:00:55.642162 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:55.743307 master-0 kubenswrapper[4167]: E0217 15:00:55.743076 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:55.844626 master-0 kubenswrapper[4167]: E0217 15:00:55.844349 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:55.945428 master-0 kubenswrapper[4167]: E0217 15:00:55.944891 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:56.046091 master-0 kubenswrapper[4167]: E0217 15:00:56.045989 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:56.087448 master-0 kubenswrapper[4167]: I0217 15:00:56.087356 4167 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Feb 17 15:00:56.088396 master-0 kubenswrapper[4167]: I0217 15:00:56.088340 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:00:56.088396 master-0 kubenswrapper[4167]: I0217 15:00:56.088366 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:00:56.088396 master-0 kubenswrapper[4167]: I0217 15:00:56.088377 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:00:56.147261 master-0 kubenswrapper[4167]: E0217 15:00:56.147150 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:56.248052 master-0 kubenswrapper[4167]: E0217 15:00:56.247979 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:56.349158 master-0 kubenswrapper[4167]: E0217 15:00:56.349091 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:56.449947 master-0 kubenswrapper[4167]: E0217 15:00:56.449742 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:56.550625 master-0 kubenswrapper[4167]: E0217 15:00:56.550564 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:56.651256 master-0 kubenswrapper[4167]: E0217 15:00:56.651132 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:56.751851 master-0 kubenswrapper[4167]: E0217 15:00:56.751603 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:56.852677 master-0 kubenswrapper[4167]: E0217 15:00:56.852558 4167 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:56.953525 master-0 kubenswrapper[4167]: E0217 15:00:56.953367 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:57.053859 master-0 kubenswrapper[4167]: E0217 15:00:57.053630 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:57.154606 master-0 kubenswrapper[4167]: E0217 15:00:57.154450 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:57.255701 master-0 kubenswrapper[4167]: E0217 15:00:57.255286 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:57.357010 master-0 kubenswrapper[4167]: E0217 15:00:57.356795 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:57.457921 master-0 kubenswrapper[4167]: E0217 15:00:57.457806 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:57.558331 master-0 kubenswrapper[4167]: E0217 15:00:57.558221 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:57.659073 master-0 kubenswrapper[4167]: E0217 15:00:57.658955 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:57.759648 master-0 kubenswrapper[4167]: E0217 15:00:57.759540 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:57.860355 master-0 kubenswrapper[4167]: E0217 15:00:57.860230 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:57.961545 
master-0 kubenswrapper[4167]: E0217 15:00:57.961325 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:58.061682 master-0 kubenswrapper[4167]: E0217 15:00:58.061613 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:58.162405 master-0 kubenswrapper[4167]: E0217 15:00:58.162260 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:58.263118 master-0 kubenswrapper[4167]: E0217 15:00:58.262872 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:58.363312 master-0 kubenswrapper[4167]: E0217 15:00:58.363148 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:58.464536 master-0 kubenswrapper[4167]: E0217 15:00:58.464332 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:58.565451 master-0 kubenswrapper[4167]: E0217 15:00:58.565223 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:58.666328 master-0 kubenswrapper[4167]: E0217 15:00:58.666225 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:58.766621 master-0 kubenswrapper[4167]: E0217 15:00:58.766509 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:58.866740 master-0 kubenswrapper[4167]: E0217 15:00:58.866681 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:58.870701 master-0 kubenswrapper[4167]: E0217 15:00:58.870667 4167 eviction_manager.go:285] "Eviction manager: failed to get summary stats" 
err="failed to get node info: node \"master-0\" not found" Feb 17 15:00:58.967301 master-0 kubenswrapper[4167]: E0217 15:00:58.967175 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:59.067903 master-0 kubenswrapper[4167]: E0217 15:00:59.067813 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:59.168336 master-0 kubenswrapper[4167]: E0217 15:00:59.168137 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:59.269302 master-0 kubenswrapper[4167]: E0217 15:00:59.269208 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:59.370386 master-0 kubenswrapper[4167]: E0217 15:00:59.370294 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:59.471519 master-0 kubenswrapper[4167]: E0217 15:00:59.471294 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:59.571588 master-0 kubenswrapper[4167]: E0217 15:00:59.571443 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:59.672832 master-0 kubenswrapper[4167]: E0217 15:00:59.672671 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:59.774075 master-0 kubenswrapper[4167]: E0217 15:00:59.773864 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:59.874313 master-0 kubenswrapper[4167]: E0217 15:00:59.874147 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:00:59.974427 master-0 kubenswrapper[4167]: E0217 
15:00:59.974305 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:00.075616 master-0 kubenswrapper[4167]: E0217 15:01:00.075399 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:00.176728 master-0 kubenswrapper[4167]: E0217 15:01:00.176543 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:00.277035 master-0 kubenswrapper[4167]: E0217 15:01:00.276886 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:00.377323 master-0 kubenswrapper[4167]: E0217 15:01:00.377137 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:00.478017 master-0 kubenswrapper[4167]: E0217 15:01:00.477862 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:00.579210 master-0 kubenswrapper[4167]: E0217 15:01:00.579106 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:00.679881 master-0 kubenswrapper[4167]: E0217 15:01:00.679696 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:00.781025 master-0 kubenswrapper[4167]: E0217 15:01:00.780861 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:00.881324 master-0 kubenswrapper[4167]: E0217 15:01:00.881201 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:00.981823 master-0 kubenswrapper[4167]: E0217 15:01:00.981575 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" 
Feb 17 15:01:01.082054 master-0 kubenswrapper[4167]: E0217 15:01:01.081911 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:01.182318 master-0 kubenswrapper[4167]: E0217 15:01:01.182181 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:01.283077 master-0 kubenswrapper[4167]: E0217 15:01:01.282917 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:01.383913 master-0 kubenswrapper[4167]: E0217 15:01:01.383823 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:01.485141 master-0 kubenswrapper[4167]: E0217 15:01:01.485060 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:01.586351 master-0 kubenswrapper[4167]: E0217 15:01:01.586128 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:01.687213 master-0 kubenswrapper[4167]: E0217 15:01:01.687107 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:01.787337 master-0 kubenswrapper[4167]: E0217 15:01:01.787233 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:01.887613 master-0 kubenswrapper[4167]: E0217 15:01:01.887396 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:01.988118 master-0 kubenswrapper[4167]: E0217 15:01:01.987971 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:02.088999 master-0 kubenswrapper[4167]: E0217 15:01:02.088880 4167 kubelet_node_status.go:503] "Error getting the 
current node from lister" err="node \"master-0\" not found" Feb 17 15:01:02.189276 master-0 kubenswrapper[4167]: E0217 15:01:02.189073 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:02.290006 master-0 kubenswrapper[4167]: E0217 15:01:02.289910 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:02.391121 master-0 kubenswrapper[4167]: E0217 15:01:02.391048 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:02.491544 master-0 kubenswrapper[4167]: E0217 15:01:02.491353 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:02.592130 master-0 kubenswrapper[4167]: E0217 15:01:02.592021 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:02.692878 master-0 kubenswrapper[4167]: E0217 15:01:02.692714 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:02.793758 master-0 kubenswrapper[4167]: E0217 15:01:02.793612 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:02.894357 master-0 kubenswrapper[4167]: E0217 15:01:02.894243 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:02.995117 master-0 kubenswrapper[4167]: E0217 15:01:02.995012 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:03.096115 master-0 kubenswrapper[4167]: E0217 15:01:03.095913 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:03.197220 master-0 kubenswrapper[4167]: E0217 
15:01:03.197128 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:03.297781 master-0 kubenswrapper[4167]: E0217 15:01:03.297667 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:03.398977 master-0 kubenswrapper[4167]: E0217 15:01:03.398869 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:03.499449 master-0 kubenswrapper[4167]: E0217 15:01:03.499327 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:03.600260 master-0 kubenswrapper[4167]: E0217 15:01:03.600133 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:03.701084 master-0 kubenswrapper[4167]: E0217 15:01:03.700743 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:03.801934 master-0 kubenswrapper[4167]: E0217 15:01:03.801785 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:03.902690 master-0 kubenswrapper[4167]: E0217 15:01:03.902564 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:04.003499 master-0 kubenswrapper[4167]: E0217 15:01:04.003279 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:04.104273 master-0 kubenswrapper[4167]: E0217 15:01:04.104170 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 17 15:01:04.205288 master-0 kubenswrapper[4167]: E0217 15:01:04.205187 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" 
Feb 17 15:01:04.306404 master-0 kubenswrapper[4167]: E0217 15:01:04.306193 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:04.355799 master-0 kubenswrapper[4167]: E0217 15:01:04.355674 4167 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Feb 17 15:01:04.407531 master-0 kubenswrapper[4167]: E0217 15:01:04.407344 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:04.508736 master-0 kubenswrapper[4167]: E0217 15:01:04.508637 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:04.609064 master-0 kubenswrapper[4167]: E0217 15:01:04.608826 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:04.709066 master-0 kubenswrapper[4167]: E0217 15:01:04.709004 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:04.810058 master-0 kubenswrapper[4167]: E0217 15:01:04.809939 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:04.911033 master-0 kubenswrapper[4167]: E0217 15:01:04.910954 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:05.011559 master-0 kubenswrapper[4167]: E0217 15:01:05.011489 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:05.112546 master-0 kubenswrapper[4167]: E0217 15:01:05.112437 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:05.213726 master-0 kubenswrapper[4167]: E0217 15:01:05.213577 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:05.314664 master-0 kubenswrapper[4167]: E0217 15:01:05.314571 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:05.415828 master-0 kubenswrapper[4167]: E0217 15:01:05.415714 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:05.516139 master-0 kubenswrapper[4167]: E0217 15:01:05.515947 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:05.616336 master-0 kubenswrapper[4167]: E0217 15:01:05.616262 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:05.716730 master-0 kubenswrapper[4167]: E0217 15:01:05.716631 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:05.817769 master-0 kubenswrapper[4167]: E0217 15:01:05.817603 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:05.918642 master-0 kubenswrapper[4167]: E0217 15:01:05.918593 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:06.019241 master-0 kubenswrapper[4167]: E0217 15:01:06.019176 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:06.120292 master-0 kubenswrapper[4167]: E0217 15:01:06.120224 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:06.221307 master-0 kubenswrapper[4167]: E0217 15:01:06.221220 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:06.322349 master-0 kubenswrapper[4167]: E0217 15:01:06.322257 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:06.422700 master-0 kubenswrapper[4167]: E0217 15:01:06.422413 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:06.523132 master-0 kubenswrapper[4167]: E0217 15:01:06.523048 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:06.623633 master-0 kubenswrapper[4167]: E0217 15:01:06.623414 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:06.723761 master-0 kubenswrapper[4167]: E0217 15:01:06.723567 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:06.824514 master-0 kubenswrapper[4167]: E0217 15:01:06.824392 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:06.925668 master-0 kubenswrapper[4167]: E0217 15:01:06.925535 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:07.026754 master-0 kubenswrapper[4167]: E0217 15:01:07.026562 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:07.127683 master-0 kubenswrapper[4167]: E0217 15:01:07.127559 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:07.228518 master-0 kubenswrapper[4167]: E0217 15:01:07.228367 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:07.328930 master-0 kubenswrapper[4167]: E0217 15:01:07.328762 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:07.429659 master-0 kubenswrapper[4167]: E0217 15:01:07.429545 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:07.529802 master-0 kubenswrapper[4167]: E0217 15:01:07.529681 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:07.579134 master-0 kubenswrapper[4167]: I0217 15:01:07.578937 4167 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 17 15:01:07.630320 master-0 kubenswrapper[4167]: E0217 15:01:07.630232 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:07.730823 master-0 kubenswrapper[4167]: E0217 15:01:07.730682 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:07.831982 master-0 kubenswrapper[4167]: E0217 15:01:07.831783 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:07.932604 master-0 kubenswrapper[4167]: E0217 15:01:07.932447 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:08.033187 master-0 kubenswrapper[4167]: E0217 15:01:08.033051 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:08.134281 master-0 kubenswrapper[4167]: E0217 15:01:08.134127 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:08.234379 master-0 kubenswrapper[4167]: E0217 15:01:08.234230 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:08.335196 master-0 kubenswrapper[4167]: E0217 15:01:08.335039 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:08.435405 master-0 kubenswrapper[4167]: E0217 15:01:08.435200 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:08.535595 master-0 kubenswrapper[4167]: E0217 15:01:08.535436 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:08.635858 master-0 kubenswrapper[4167]: E0217 15:01:08.635764 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:08.737103 master-0 kubenswrapper[4167]: E0217 15:01:08.736879 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:08.837947 master-0 kubenswrapper[4167]: E0217 15:01:08.837843 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:08.870902 master-0 kubenswrapper[4167]: E0217 15:01:08.870786 4167 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 17 15:01:08.938321 master-0 kubenswrapper[4167]: E0217 15:01:08.938241 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:09.039093 master-0 kubenswrapper[4167]: E0217 15:01:09.038917 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:09.139351 master-0 kubenswrapper[4167]: E0217 15:01:09.139236 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:09.240211 master-0 kubenswrapper[4167]: E0217 15:01:09.240081 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:09.340838 master-0 kubenswrapper[4167]: E0217 15:01:09.340647 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:09.441126 master-0 kubenswrapper[4167]: E0217 15:01:09.441022 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:09.541319 master-0 kubenswrapper[4167]: E0217 15:01:09.541180 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:09.642352 master-0 kubenswrapper[4167]: E0217 15:01:09.642243 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:09.743404 master-0 kubenswrapper[4167]: E0217 15:01:09.743321 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:09.844368 master-0 kubenswrapper[4167]: E0217 15:01:09.844303 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:09.945521 master-0 kubenswrapper[4167]: E0217 15:01:09.945376 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:10.046402 master-0 kubenswrapper[4167]: E0217 15:01:10.046323 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:10.146956 master-0 kubenswrapper[4167]: E0217 15:01:10.146853 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:10.247164 master-0 kubenswrapper[4167]: E0217 15:01:10.247019 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:10.347815 master-0 kubenswrapper[4167]: E0217 15:01:10.347708 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:10.448958 master-0 kubenswrapper[4167]: E0217 15:01:10.448901 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:10.550180 master-0 kubenswrapper[4167]: E0217 15:01:10.550037 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:10.651212 master-0 kubenswrapper[4167]: E0217 15:01:10.651157 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:10.752417 master-0 kubenswrapper[4167]: E0217 15:01:10.752302 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:10.853543 master-0 kubenswrapper[4167]: E0217 15:01:10.853344 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:10.954446 master-0 kubenswrapper[4167]: E0217 15:01:10.954341 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:11.054922 master-0 kubenswrapper[4167]: E0217 15:01:11.054811 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:11.155197 master-0 kubenswrapper[4167]: E0217 15:01:11.155043 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:11.256239 master-0 kubenswrapper[4167]: E0217 15:01:11.256127 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:11.356702 master-0 kubenswrapper[4167]: E0217 15:01:11.356574 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:11.457205 master-0 kubenswrapper[4167]: E0217 15:01:11.456986 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:11.557879 master-0 kubenswrapper[4167]: E0217 15:01:11.557761 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:11.658695 master-0 kubenswrapper[4167]: E0217 15:01:11.658608 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:11.759389 master-0 kubenswrapper[4167]: E0217 15:01:11.759161 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:11.860399 master-0 kubenswrapper[4167]: E0217 15:01:11.860260 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:11.961769 master-0 kubenswrapper[4167]: E0217 15:01:11.961687 4167 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 17 15:01:11.975579 master-0 kubenswrapper[4167]: I0217 15:01:11.975500 4167 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 17 15:01:12.648337 master-0 kubenswrapper[4167]: I0217 15:01:12.648250 4167 apiserver.go:52] "Watching apiserver"
Feb 17 15:01:12.653065 master-0 kubenswrapper[4167]: I0217 15:01:12.653006 4167 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 17 15:01:12.653302 master-0 kubenswrapper[4167]: I0217 15:01:12.653239 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-5fwlz","openshift-cluster-version/cluster-version-operator-76959b6567-v49tq","openshift-network-operator/network-operator-6fcf4c966-l24cg"]
Feb 17 15:01:12.653697 master-0 kubenswrapper[4167]: I0217 15:01:12.653654 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:01:12.653762 master-0 kubenswrapper[4167]: I0217 15:01:12.653703 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.654122 master-0 kubenswrapper[4167]: I0217 15:01:12.653723 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.657603 master-0 kubenswrapper[4167]: I0217 15:01:12.657552 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Feb 17 15:01:12.657729 master-0 kubenswrapper[4167]: I0217 15:01:12.657666 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Feb 17 15:01:12.658837 master-0 kubenswrapper[4167]: I0217 15:01:12.658789 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 17 15:01:12.658837 master-0 kubenswrapper[4167]: I0217 15:01:12.658818 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 17 15:01:12.660372 master-0 kubenswrapper[4167]: I0217 15:01:12.660330 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Feb 17 15:01:12.660800 master-0 kubenswrapper[4167]: I0217 15:01:12.660763 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 17 15:01:12.661010 master-0 kubenswrapper[4167]: I0217 15:01:12.660977 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 17 15:01:12.661066 master-0 kubenswrapper[4167]: I0217 15:01:12.661004 4167 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Feb 17 15:01:12.661111 master-0 kubenswrapper[4167]: I0217 15:01:12.661078 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 17 15:01:12.661338 master-0 kubenswrapper[4167]: I0217 15:01:12.661255 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 17 15:01:12.734650 master-0 kubenswrapper[4167]: I0217 15:01:12.734589 4167 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 17 15:01:12.809124 master-0 kubenswrapper[4167]: I0217 15:01:12.809002 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.809124 master-0 kubenswrapper[4167]: I0217 15:01:12.809082 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.809124 master-0 kubenswrapper[4167]: I0217 15:01:12.809124 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-ca-bundle\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.809657 master-0 kubenswrapper[4167]: I0217 15:01:12.809229 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-var-run-resolv-conf\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.809657 master-0 kubenswrapper[4167]: I0217 15:01:12.809296 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.809657 master-0 kubenswrapper[4167]: I0217 15:01:12.809325 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4fd2c79d-1e10-4f09-8a33-c66598abc99a-host-etc-kube\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:01:12.809657 master-0 kubenswrapper[4167]: I0217 15:01:12.809349 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgwfb\" (UniqueName: \"kubernetes.io/projected/4fd2c79d-1e10-4f09-8a33-c66598abc99a-kube-api-access-mgwfb\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:01:12.809657 master-0 kubenswrapper[4167]: I0217 15:01:12.809371 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-sno-bootstrap-files\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.809657 master-0 kubenswrapper[4167]: I0217 15:01:12.809391 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4be2df82-c77a-4d26-9498-fa3beea54b81-kube-api-access\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.809657 master-0 kubenswrapper[4167]: I0217 15:01:12.809514 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4be2df82-c77a-4d26-9498-fa3beea54b81-service-ca\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.809657 master-0 kubenswrapper[4167]: I0217 15:01:12.809560 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4fd2c79d-1e10-4f09-8a33-c66598abc99a-metrics-tls\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:01:12.810171 master-0 kubenswrapper[4167]: I0217 15:01:12.809678 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-resolv-conf\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.810171 master-0 kubenswrapper[4167]: I0217 15:01:12.809756 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txj2z\" (UniqueName: \"kubernetes.io/projected/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-kube-api-access-txj2z\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.911119 master-0 kubenswrapper[4167]: I0217 15:01:12.910860 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-resolv-conf\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.911119 master-0 kubenswrapper[4167]: I0217 15:01:12.910931 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txj2z\" (UniqueName: \"kubernetes.io/projected/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-kube-api-access-txj2z\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.911119 master-0 kubenswrapper[4167]: I0217 15:01:12.910975 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.911119 master-0 kubenswrapper[4167]: I0217 15:01:12.911055 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-resolv-conf\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.911119 master-0 kubenswrapper[4167]: I0217 15:01:12.911112 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.911695 master-0 kubenswrapper[4167]: I0217 15:01:12.911282 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.911695 master-0 kubenswrapper[4167]: I0217 15:01:12.911488 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-ca-bundle\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.911695 master-0 kubenswrapper[4167]: I0217 15:01:12.911529 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-var-run-resolv-conf\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.911695 master-0 kubenswrapper[4167]: I0217 15:01:12.911553 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.911695 master-0 kubenswrapper[4167]: I0217 15:01:12.911563 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.911695 master-0 kubenswrapper[4167]: I0217 15:01:12.911624 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4fd2c79d-1e10-4f09-8a33-c66598abc99a-host-etc-kube\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:01:12.911695 master-0 kubenswrapper[4167]: E0217 15:01:12.911682 4167 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 17 15:01:12.912488 master-0 kubenswrapper[4167]: E0217 15:01:12.912393 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert podName:4be2df82-c77a-4d26-9498-fa3beea54b81 nodeName:}" failed. No retries permitted until 2026-02-17 15:01:13.412353603 +0000 UTC m=+65.947018445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert") pod "cluster-version-operator-76959b6567-v49tq" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81") : secret "cluster-version-operator-serving-cert" not found
Feb 17 15:01:12.912488 master-0 kubenswrapper[4167]: I0217 15:01:12.911771 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4fd2c79d-1e10-4f09-8a33-c66598abc99a-host-etc-kube\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:01:12.912642 master-0 kubenswrapper[4167]: I0217 15:01:12.911822 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-ca-bundle\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.912642 master-0 kubenswrapper[4167]: I0217 15:01:12.911835 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgwfb\" (UniqueName: \"kubernetes.io/projected/4fd2c79d-1e10-4f09-8a33-c66598abc99a-kube-api-access-mgwfb\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:01:12.912760 master-0 kubenswrapper[4167]: I0217 15:01:12.912638 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-sno-bootstrap-files\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.912760 master-0 kubenswrapper[4167]: I0217 15:01:12.911739 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-var-run-resolv-conf\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.912760 master-0 kubenswrapper[4167]: I0217 15:01:12.912679 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4be2df82-c77a-4d26-9498-fa3beea54b81-kube-api-access\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.912933 master-0 kubenswrapper[4167]: I0217 15:01:12.912900 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4be2df82-c77a-4d26-9498-fa3beea54b81-service-ca\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.912997 master-0 kubenswrapper[4167]: I0217 15:01:12.912944 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4fd2c79d-1e10-4f09-8a33-c66598abc99a-metrics-tls\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:01:12.913257 master-0 kubenswrapper[4167]: I0217 15:01:12.913191 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-sno-bootstrap-files\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.914268 master-0 kubenswrapper[4167]: I0217 15:01:12.914213 4167 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 17 15:01:12.915422 master-0 kubenswrapper[4167]: I0217 15:01:12.915360 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4be2df82-c77a-4d26-9498-fa3beea54b81-service-ca\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.922914 master-0 kubenswrapper[4167]: I0217 15:01:12.922858 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4fd2c79d-1e10-4f09-8a33-c66598abc99a-metrics-tls\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:01:12.937590 master-0 kubenswrapper[4167]: I0217 15:01:12.937519 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgwfb\" (UniqueName: \"kubernetes.io/projected/4fd2c79d-1e10-4f09-8a33-c66598abc99a-kube-api-access-mgwfb\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:01:12.938109 master-0 kubenswrapper[4167]: I0217 15:01:12.937902 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4be2df82-c77a-4d26-9498-fa3beea54b81-kube-api-access\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:01:12.939729 master-0 kubenswrapper[4167]: I0217 15:01:12.939656 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txj2z\" (UniqueName: \"kubernetes.io/projected/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-kube-api-access-txj2z\") pod \"assisted-installer-controller-5fwlz\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:01:12.982624 master-0 kubenswrapper[4167]: I0217 15:01:12.982518 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:01:12.995018 master-0 kubenswrapper[4167]: W0217 15:01:12.994934 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fd2c79d_1e10_4f09_8a33_c66598abc99a.slice/crio-4da475428a7f62dfe7d403b74dec1f34a8023a64243ff1dae7d9b66e78408144 WatchSource:0}: Error finding container 4da475428a7f62dfe7d403b74dec1f34a8023a64243ff1dae7d9b66e78408144: Status 404 returned error can't find the container with id 4da475428a7f62dfe7d403b74dec1f34a8023a64243ff1dae7d9b66e78408144
Feb 17 15:01:13.016977 master-0 kubenswrapper[4167]: I0217 15:01:13.016879 4167 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="assisted-installer/assisted-installer-controller-5fwlz" Feb 17 15:01:13.032355 master-0 kubenswrapper[4167]: W0217 15:01:13.032275 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a0dcd0f_f7e6_4d6d_bd6a_aff7ff1f8f4a.slice/crio-e2dd0a0688727e052252cd2506303293a622de765553e0bfacc8554a72cd3817 WatchSource:0}: Error finding container e2dd0a0688727e052252cd2506303293a622de765553e0bfacc8554a72cd3817: Status 404 returned error can't find the container with id e2dd0a0688727e052252cd2506303293a622de765553e0bfacc8554a72cd3817 Feb 17 15:01:13.130018 master-0 kubenswrapper[4167]: I0217 15:01:13.129907 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-5fwlz" event={"ID":"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a","Type":"ContainerStarted","Data":"e2dd0a0688727e052252cd2506303293a622de765553e0bfacc8554a72cd3817"} Feb 17 15:01:13.130948 master-0 kubenswrapper[4167]: I0217 15:01:13.130875 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" event={"ID":"4fd2c79d-1e10-4f09-8a33-c66598abc99a","Type":"ContainerStarted","Data":"4da475428a7f62dfe7d403b74dec1f34a8023a64243ff1dae7d9b66e78408144"} Feb 17 15:01:13.416089 master-0 kubenswrapper[4167]: I0217 15:01:13.415986 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:01:13.416543 master-0 kubenswrapper[4167]: E0217 15:01:13.416186 4167 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not 
found Feb 17 15:01:13.416543 master-0 kubenswrapper[4167]: E0217 15:01:13.416305 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert podName:4be2df82-c77a-4d26-9498-fa3beea54b81 nodeName:}" failed. No retries permitted until 2026-02-17 15:01:14.416274872 +0000 UTC m=+66.950939704 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert") pod "cluster-version-operator-76959b6567-v49tq" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81") : secret "cluster-version-operator-serving-cert" not found Feb 17 15:01:14.424592 master-0 kubenswrapper[4167]: I0217 15:01:14.424536 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:01:14.425202 master-0 kubenswrapper[4167]: E0217 15:01:14.424754 4167 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 17 15:01:14.425202 master-0 kubenswrapper[4167]: E0217 15:01:14.424907 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert podName:4be2df82-c77a-4d26-9498-fa3beea54b81 nodeName:}" failed. No retries permitted until 2026-02-17 15:01:16.424876367 +0000 UTC m=+68.959541199 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert") pod "cluster-version-operator-76959b6567-v49tq" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81") : secret "cluster-version-operator-serving-cert" not found Feb 17 15:01:16.436410 master-0 kubenswrapper[4167]: I0217 15:01:16.436312 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:01:16.437015 master-0 kubenswrapper[4167]: E0217 15:01:16.436581 4167 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 17 15:01:16.437015 master-0 kubenswrapper[4167]: E0217 15:01:16.436720 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert podName:4be2df82-c77a-4d26-9498-fa3beea54b81 nodeName:}" failed. No retries permitted until 2026-02-17 15:01:20.436689197 +0000 UTC m=+72.971354019 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert") pod "cluster-version-operator-76959b6567-v49tq" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81") : secret "cluster-version-operator-serving-cert" not found Feb 17 15:01:18.573160 master-0 kubenswrapper[4167]: I0217 15:01:18.573044 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" event={"ID":"4fd2c79d-1e10-4f09-8a33-c66598abc99a","Type":"ContainerStarted","Data":"10d84ccff2961ae0ad3f02bc199d5d344c04cfb73f881e75241a2774459f1897"} Feb 17 15:01:20.477597 master-0 kubenswrapper[4167]: I0217 15:01:20.477524 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:01:20.493993 master-0 kubenswrapper[4167]: E0217 15:01:20.477719 4167 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 17 15:01:20.493993 master-0 kubenswrapper[4167]: E0217 15:01:20.477812 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert podName:4be2df82-c77a-4d26-9498-fa3beea54b81 nodeName:}" failed. No retries permitted until 2026-02-17 15:01:28.477782816 +0000 UTC m=+81.012447658 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert") pod "cluster-version-operator-76959b6567-v49tq" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81") : secret "cluster-version-operator-serving-cert" not found Feb 17 15:01:20.584412 master-0 kubenswrapper[4167]: I0217 15:01:20.584299 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-5fwlz" event={"ID":"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a","Type":"ContainerStarted","Data":"b19e391b0150ed3b7b034d7cfb9dec3399203df0724feccc18bf70218b47fb07"} Feb 17 15:01:20.599044 master-0 kubenswrapper[4167]: I0217 15:01:20.598649 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" podStartSLOduration=31.481371363 podStartE2EDuration="35.598613494s" podCreationTimestamp="2026-02-17 15:00:45 +0000 UTC" firstStartedPulling="2026-02-17 15:01:12.99826942 +0000 UTC m=+65.532934212" lastFinishedPulling="2026-02-17 15:01:17.115511541 +0000 UTC m=+69.650176343" observedRunningTime="2026-02-17 15:01:18.640893711 +0000 UTC m=+71.175558523" watchObservedRunningTime="2026-02-17 15:01:20.598613494 +0000 UTC m=+73.133278336" Feb 17 15:01:20.600024 master-0 kubenswrapper[4167]: I0217 15:01:20.599383 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-wlg8w"] Feb 17 15:01:20.602306 master-0 kubenswrapper[4167]: I0217 15:01:20.602225 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-wlg8w" Feb 17 15:01:20.618000 master-0 kubenswrapper[4167]: I0217 15:01:20.617887 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="assisted-installer/assisted-installer-controller-5fwlz" podStartSLOduration=337.23971408 podStartE2EDuration="5m44.617856313s" podCreationTimestamp="2026-02-17 14:55:36 +0000 UTC" firstStartedPulling="2026-02-17 15:01:13.034830867 +0000 UTC m=+65.569495699" lastFinishedPulling="2026-02-17 15:01:20.41297309 +0000 UTC m=+72.947637932" observedRunningTime="2026-02-17 15:01:20.604053476 +0000 UTC m=+73.138718298" watchObservedRunningTime="2026-02-17 15:01:20.617856313 +0000 UTC m=+73.152521155" Feb 17 15:01:20.779052 master-0 kubenswrapper[4167]: I0217 15:01:20.778943 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9lk8\" (UniqueName: \"kubernetes.io/projected/edb8b6b9-b814-4451-98bb-dec174fbf936-kube-api-access-d9lk8\") pod \"mtu-prober-wlg8w\" (UID: \"edb8b6b9-b814-4451-98bb-dec174fbf936\") " pod="openshift-network-operator/mtu-prober-wlg8w" Feb 17 15:01:20.879990 master-0 kubenswrapper[4167]: I0217 15:01:20.879874 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9lk8\" (UniqueName: \"kubernetes.io/projected/edb8b6b9-b814-4451-98bb-dec174fbf936-kube-api-access-d9lk8\") pod \"mtu-prober-wlg8w\" (UID: \"edb8b6b9-b814-4451-98bb-dec174fbf936\") " pod="openshift-network-operator/mtu-prober-wlg8w" Feb 17 15:01:20.912241 master-0 kubenswrapper[4167]: I0217 15:01:20.912084 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9lk8\" (UniqueName: \"kubernetes.io/projected/edb8b6b9-b814-4451-98bb-dec174fbf936-kube-api-access-d9lk8\") pod \"mtu-prober-wlg8w\" (UID: \"edb8b6b9-b814-4451-98bb-dec174fbf936\") " pod="openshift-network-operator/mtu-prober-wlg8w" Feb 17 15:01:20.995731 master-0 
kubenswrapper[4167]: I0217 15:01:20.995438 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-wlg8w" Feb 17 15:01:21.007003 master-0 kubenswrapper[4167]: W0217 15:01:21.006937 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedb8b6b9_b814_4451_98bb_dec174fbf936.slice/crio-44297b578b73799787105eb3efe9db346703e6cb92e011be7af4c2d78212c2e0 WatchSource:0}: Error finding container 44297b578b73799787105eb3efe9db346703e6cb92e011be7af4c2d78212c2e0: Status 404 returned error can't find the container with id 44297b578b73799787105eb3efe9db346703e6cb92e011be7af4c2d78212c2e0 Feb 17 15:01:21.590261 master-0 kubenswrapper[4167]: I0217 15:01:21.589832 4167 generic.go:334] "Generic (PLEG): container finished" podID="0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" containerID="b19e391b0150ed3b7b034d7cfb9dec3399203df0724feccc18bf70218b47fb07" exitCode=0 Feb 17 15:01:21.590261 master-0 kubenswrapper[4167]: I0217 15:01:21.589892 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-5fwlz" event={"ID":"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a","Type":"ContainerDied","Data":"b19e391b0150ed3b7b034d7cfb9dec3399203df0724feccc18bf70218b47fb07"} Feb 17 15:01:21.593171 master-0 kubenswrapper[4167]: I0217 15:01:21.593081 4167 generic.go:334] "Generic (PLEG): container finished" podID="edb8b6b9-b814-4451-98bb-dec174fbf936" containerID="b13d746fb33147c34bbdc9c278d3605b58fe9a5ed8f1e19a36f86fe284caa4b2" exitCode=0 Feb 17 15:01:21.593293 master-0 kubenswrapper[4167]: I0217 15:01:21.593184 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-wlg8w" event={"ID":"edb8b6b9-b814-4451-98bb-dec174fbf936","Type":"ContainerDied","Data":"b13d746fb33147c34bbdc9c278d3605b58fe9a5ed8f1e19a36f86fe284caa4b2"} Feb 17 15:01:21.593342 master-0 kubenswrapper[4167]: I0217 15:01:21.593289 
4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-wlg8w" event={"ID":"edb8b6b9-b814-4451-98bb-dec174fbf936","Type":"ContainerStarted","Data":"44297b578b73799787105eb3efe9db346703e6cb92e011be7af4c2d78212c2e0"} Feb 17 15:01:22.615648 master-0 kubenswrapper[4167]: I0217 15:01:22.615598 4167 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-5fwlz" Feb 17 15:01:22.619773 master-0 kubenswrapper[4167]: I0217 15:01:22.619710 4167 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-wlg8w" Feb 17 15:01:22.796345 master-0 kubenswrapper[4167]: I0217 15:01:22.796287 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9lk8\" (UniqueName: \"kubernetes.io/projected/edb8b6b9-b814-4451-98bb-dec174fbf936-kube-api-access-d9lk8\") pod \"edb8b6b9-b814-4451-98bb-dec174fbf936\" (UID: \"edb8b6b9-b814-4451-98bb-dec174fbf936\") " Feb 17 15:01:22.796345 master-0 kubenswrapper[4167]: I0217 15:01:22.796332 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-resolv-conf\") pod \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " Feb 17 15:01:22.796652 master-0 kubenswrapper[4167]: I0217 15:01:22.796361 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txj2z\" (UniqueName: \"kubernetes.io/projected/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-kube-api-access-txj2z\") pod \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " Feb 17 15:01:22.796652 master-0 kubenswrapper[4167]: I0217 15:01:22.796382 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" 
(UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-ca-bundle\") pod \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " Feb 17 15:01:22.796652 master-0 kubenswrapper[4167]: I0217 15:01:22.796401 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-var-run-resolv-conf\") pod \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " Feb 17 15:01:22.796652 master-0 kubenswrapper[4167]: I0217 15:01:22.796420 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-sno-bootstrap-files\") pod \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\" (UID: \"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a\") " Feb 17 15:01:22.796652 master-0 kubenswrapper[4167]: I0217 15:01:22.796534 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" (UID: "0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:01:22.796652 master-0 kubenswrapper[4167]: I0217 15:01:22.796573 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" (UID: "0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a"). InnerVolumeSpecName "host-var-run-resolv-conf". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:01:22.796652 master-0 kubenswrapper[4167]: I0217 15:01:22.796582 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" (UID: "0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:01:22.796652 master-0 kubenswrapper[4167]: I0217 15:01:22.796621 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" (UID: "0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:01:22.799628 master-0 kubenswrapper[4167]: I0217 15:01:22.799559 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb8b6b9-b814-4451-98bb-dec174fbf936-kube-api-access-d9lk8" (OuterVolumeSpecName: "kube-api-access-d9lk8") pod "edb8b6b9-b814-4451-98bb-dec174fbf936" (UID: "edb8b6b9-b814-4451-98bb-dec174fbf936"). InnerVolumeSpecName "kube-api-access-d9lk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:01:22.799700 master-0 kubenswrapper[4167]: I0217 15:01:22.799616 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-kube-api-access-txj2z" (OuterVolumeSpecName: "kube-api-access-txj2z") pod "0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" (UID: "0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a"). InnerVolumeSpecName "kube-api-access-txj2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:01:22.897053 master-0 kubenswrapper[4167]: I0217 15:01:22.896967 4167 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txj2z\" (UniqueName: \"kubernetes.io/projected/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-kube-api-access-txj2z\") on node \"master-0\" DevicePath \"\"" Feb 17 15:01:22.897053 master-0 kubenswrapper[4167]: I0217 15:01:22.897028 4167 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:01:22.897053 master-0 kubenswrapper[4167]: I0217 15:01:22.897041 4167 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Feb 17 15:01:22.897053 master-0 kubenswrapper[4167]: I0217 15:01:22.897054 4167 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Feb 17 15:01:22.897280 master-0 kubenswrapper[4167]: I0217 15:01:22.897068 4167 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9lk8\" (UniqueName: \"kubernetes.io/projected/edb8b6b9-b814-4451-98bb-dec174fbf936-kube-api-access-d9lk8\") on node \"master-0\" DevicePath \"\"" Feb 17 15:01:22.897280 master-0 kubenswrapper[4167]: I0217 15:01:22.897081 4167 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Feb 17 15:01:23.599249 master-0 kubenswrapper[4167]: I0217 15:01:23.599173 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="assisted-installer/assisted-installer-controller-5fwlz" event={"ID":"0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a","Type":"ContainerDied","Data":"e2dd0a0688727e052252cd2506303293a622de765553e0bfacc8554a72cd3817"} Feb 17 15:01:23.599249 master-0 kubenswrapper[4167]: I0217 15:01:23.599227 4167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2dd0a0688727e052252cd2506303293a622de765553e0bfacc8554a72cd3817" Feb 17 15:01:23.599575 master-0 kubenswrapper[4167]: I0217 15:01:23.599258 4167 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-5fwlz" Feb 17 15:01:23.600756 master-0 kubenswrapper[4167]: I0217 15:01:23.600719 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-wlg8w" event={"ID":"edb8b6b9-b814-4451-98bb-dec174fbf936","Type":"ContainerDied","Data":"44297b578b73799787105eb3efe9db346703e6cb92e011be7af4c2d78212c2e0"} Feb 17 15:01:23.600975 master-0 kubenswrapper[4167]: I0217 15:01:23.600763 4167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44297b578b73799787105eb3efe9db346703e6cb92e011be7af4c2d78212c2e0" Feb 17 15:01:23.600975 master-0 kubenswrapper[4167]: I0217 15:01:23.600788 4167 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-wlg8w" Feb 17 15:01:25.754294 master-0 kubenswrapper[4167]: I0217 15:01:25.754194 4167 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-wlg8w"] Feb 17 15:01:25.780223 master-0 kubenswrapper[4167]: I0217 15:01:25.780117 4167 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-wlg8w"] Feb 17 15:01:26.211005 master-0 kubenswrapper[4167]: I0217 15:01:26.210922 4167 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 15:01:26.862364 master-0 kubenswrapper[4167]: I0217 15:01:26.862274 4167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edb8b6b9-b814-4451-98bb-dec174fbf936" path="/var/lib/kubelet/pods/edb8b6b9-b814-4451-98bb-dec174fbf936/volumes" Feb 17 15:01:27.148389 master-0 kubenswrapper[4167]: I0217 15:01:27.148242 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 17 15:01:28.539359 master-0 kubenswrapper[4167]: I0217 15:01:28.539265 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:01:28.539949 master-0 kubenswrapper[4167]: E0217 15:01:28.539474 4167 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 17 15:01:28.539949 master-0 kubenswrapper[4167]: E0217 15:01:28.539549 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert podName:4be2df82-c77a-4d26-9498-fa3beea54b81 
nodeName:}" failed. No retries permitted until 2026-02-17 15:01:44.539528632 +0000 UTC m=+97.074193444 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert") pod "cluster-version-operator-76959b6567-v49tq" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81") : secret "cluster-version-operator-serving-cert" not found Feb 17 15:01:30.848949 master-0 kubenswrapper[4167]: I0217 15:01:30.848838 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=4.848813144 podStartE2EDuration="4.848813144s" podCreationTimestamp="2026-02-17 15:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:01:28.873247683 +0000 UTC m=+81.407912525" watchObservedRunningTime="2026-02-17 15:01:30.848813144 +0000 UTC m=+83.383477946" Feb 17 15:01:30.849787 master-0 kubenswrapper[4167]: I0217 15:01:30.849197 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9r5rl"] Feb 17 15:01:30.849787 master-0 kubenswrapper[4167]: E0217 15:01:30.849267 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" containerName="assisted-installer-controller" Feb 17 15:01:30.849787 master-0 kubenswrapper[4167]: I0217 15:01:30.849280 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" containerName="assisted-installer-controller" Feb 17 15:01:30.849787 master-0 kubenswrapper[4167]: E0217 15:01:30.849289 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb8b6b9-b814-4451-98bb-dec174fbf936" containerName="prober" Feb 17 15:01:30.849787 master-0 kubenswrapper[4167]: I0217 15:01:30.849296 4167 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="edb8b6b9-b814-4451-98bb-dec174fbf936" containerName="prober"
Feb 17 15:01:30.849787 master-0 kubenswrapper[4167]: I0217 15:01:30.849344 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" containerName="assisted-installer-controller"
Feb 17 15:01:30.849787 master-0 kubenswrapper[4167]: I0217 15:01:30.849356 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="edb8b6b9-b814-4451-98bb-dec174fbf936" containerName="prober"
Feb 17 15:01:30.849787 master-0 kubenswrapper[4167]: I0217 15:01:30.849557 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.858423 master-0 kubenswrapper[4167]: I0217 15:01:30.858367 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 17 15:01:30.859099 master-0 kubenswrapper[4167]: I0217 15:01:30.858738 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 17 15:01:30.860821 master-0 kubenswrapper[4167]: I0217 15:01:30.859121 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 17 15:01:30.860821 master-0 kubenswrapper[4167]: I0217 15:01:30.859285 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 17 15:01:30.960830 master-0 kubenswrapper[4167]: I0217 15:01:30.960658 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-multus\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.960830 master-0 kubenswrapper[4167]: I0217 15:01:30.960716 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-hostroot\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.960830 master-0 kubenswrapper[4167]: I0217 15:01:30.960735 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-daemon-config\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.960830 master-0 kubenswrapper[4167]: I0217 15:01:30.960752 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-etc-kubernetes\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.960830 master-0 kubenswrapper[4167]: I0217 15:01:30.960769 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-system-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.960830 master-0 kubenswrapper[4167]: I0217 15:01:30.960785 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-kubelet\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.960830 master-0 kubenswrapper[4167]: I0217 15:01:30.960828 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.960830 master-0 kubenswrapper[4167]: I0217 15:01:30.960844 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-socket-dir-parent\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.960830 master-0 kubenswrapper[4167]: I0217 15:01:30.960859 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-k8s-cni-cncf-io\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.960830 master-0 kubenswrapper[4167]: I0217 15:01:30.961064 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-netns\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.960830 master-0 kubenswrapper[4167]: I0217 15:01:30.961081 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-bin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.962249 master-0 kubenswrapper[4167]: I0217 15:01:30.961096 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-562gp\" (UniqueName: \"kubernetes.io/projected/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-kube-api-access-562gp\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.962249 master-0 kubenswrapper[4167]: I0217 15:01:30.961113 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cni-binary-copy\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.962249 master-0 kubenswrapper[4167]: I0217 15:01:30.961129 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-conf-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.962249 master-0 kubenswrapper[4167]: I0217 15:01:30.961145 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-multus-certs\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.962249 master-0 kubenswrapper[4167]: I0217 15:01:30.961161 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cnibin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:30.962249 master-0 kubenswrapper[4167]: I0217 15:01:30.961174 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-os-release\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.041393 master-0 kubenswrapper[4167]: I0217 15:01:31.041296 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-9nv95"]
Feb 17 15:01:31.042188 master-0 kubenswrapper[4167]: I0217 15:01:31.042123 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.046853 master-0 kubenswrapper[4167]: I0217 15:01:31.046734 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 17 15:01:31.047215 master-0 kubenswrapper[4167]: I0217 15:01:31.047049 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 17 15:01:31.061799 master-0 kubenswrapper[4167]: I0217 15:01:31.061736 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cnibin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.061799 master-0 kubenswrapper[4167]: I0217 15:01:31.061789 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-os-release\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.061977 master-0 kubenswrapper[4167]: I0217 15:01:31.061867 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-multus\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.061977 master-0 kubenswrapper[4167]: I0217 15:01:31.061875 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cnibin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.061977 master-0 kubenswrapper[4167]: I0217 15:01:31.061890 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-hostroot\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.061977 master-0 kubenswrapper[4167]: I0217 15:01:31.061917 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-daemon-config\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.061977 master-0 kubenswrapper[4167]: I0217 15:01:31.061938 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-etc-kubernetes\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.061977 master-0 kubenswrapper[4167]: I0217 15:01:31.061960 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-kubelet\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.061977 master-0 kubenswrapper[4167]: I0217 15:01:31.061979 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-system-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.061977 master-0 kubenswrapper[4167]: I0217 15:01:31.061986 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-os-release\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062035 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062057 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-system-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062061 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-socket-dir-parent\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062089 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-k8s-cni-cncf-io\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062096 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-socket-dir-parent\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062113 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cni-binary-copy\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062137 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-netns\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062156 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-bin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062177 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-562gp\" (UniqueName: \"kubernetes.io/projected/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-kube-api-access-562gp\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062196 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-conf-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062215 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-multus-certs\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062243 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062258 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-multus-certs\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062275 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-k8s-cni-cncf-io\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.062410 master-0 kubenswrapper[4167]: I0217 15:01:31.062305 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-hostroot\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.063005 master-0 kubenswrapper[4167]: I0217 15:01:31.062613 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-bin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.063005 master-0 kubenswrapper[4167]: I0217 15:01:31.062669 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-netns\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.063005 master-0 kubenswrapper[4167]: I0217 15:01:31.061935 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-multus\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.063005 master-0 kubenswrapper[4167]: I0217 15:01:31.062774 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-etc-kubernetes\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.063005 master-0 kubenswrapper[4167]: I0217 15:01:31.062853 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-conf-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.063005 master-0 kubenswrapper[4167]: I0217 15:01:31.062904 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-kubelet\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.063285 master-0 kubenswrapper[4167]: I0217 15:01:31.063245 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-daemon-config\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.063285 master-0 kubenswrapper[4167]: I0217 15:01:31.063259 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cni-binary-copy\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.094375 master-0 kubenswrapper[4167]: I0217 15:01:31.094299 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-562gp\" (UniqueName: \"kubernetes.io/projected/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-kube-api-access-562gp\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.162698 master-0 kubenswrapper[4167]: I0217 15:01:31.162518 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-cnibin\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.162698 master-0 kubenswrapper[4167]: I0217 15:01:31.162583 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-binary-copy\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.162698 master-0 kubenswrapper[4167]: I0217 15:01:31.162645 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bzqs\" (UniqueName: \"kubernetes.io/projected/fb153362-0abb-4aad-8975-532f6e72d032-kube-api-access-7bzqs\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.162698 master-0 kubenswrapper[4167]: I0217 15:01:31.162700 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-system-cni-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.162698 master-0 kubenswrapper[4167]: I0217 15:01:31.162732 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-whereabouts-configmap\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.163283 master-0 kubenswrapper[4167]: I0217 15:01:31.162765 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.163283 master-0 kubenswrapper[4167]: I0217 15:01:31.162869 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-os-release\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.163283 master-0 kubenswrapper[4167]: I0217 15:01:31.162964 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.168728 master-0 kubenswrapper[4167]: I0217 15:01:31.168661 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9r5rl"
Feb 17 15:01:31.263553 master-0 kubenswrapper[4167]: I0217 15:01:31.263413 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-system-cni-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.263553 master-0 kubenswrapper[4167]: I0217 15:01:31.263562 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-whereabouts-configmap\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.263895 master-0 kubenswrapper[4167]: I0217 15:01:31.263590 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.263895 master-0 kubenswrapper[4167]: I0217 15:01:31.263616 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-os-release\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.263895 master-0 kubenswrapper[4167]: I0217 15:01:31.263617 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-system-cni-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.263895 master-0 kubenswrapper[4167]: I0217 15:01:31.263639 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.263895 master-0 kubenswrapper[4167]: I0217 15:01:31.263699 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-cnibin\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.263895 master-0 kubenswrapper[4167]: I0217 15:01:31.263731 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-binary-copy\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.263895 master-0 kubenswrapper[4167]: I0217 15:01:31.263765 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bzqs\" (UniqueName: \"kubernetes.io/projected/fb153362-0abb-4aad-8975-532f6e72d032-kube-api-access-7bzqs\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.263895 master-0 kubenswrapper[4167]: I0217 15:01:31.263839 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.264270 master-0 kubenswrapper[4167]: I0217 15:01:31.264228 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-cnibin\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.264578 master-0 kubenswrapper[4167]: I0217 15:01:31.264517 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-whereabouts-configmap\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.264824 master-0 kubenswrapper[4167]: I0217 15:01:31.264730 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-os-release\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.265347 master-0 kubenswrapper[4167]: I0217 15:01:31.265302 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-binary-copy\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.265915 master-0 kubenswrapper[4167]: I0217 15:01:31.265854 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.287245 master-0 kubenswrapper[4167]: I0217 15:01:31.287166 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bzqs\" (UniqueName: \"kubernetes.io/projected/fb153362-0abb-4aad-8975-532f6e72d032-kube-api-access-7bzqs\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.357962 master-0 kubenswrapper[4167]: I0217 15:01:31.357823 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:01:31.371058 master-0 kubenswrapper[4167]: W0217 15:01:31.370971 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb153362_0abb_4aad_8975_532f6e72d032.slice/crio-d7bc3eacfb0cf92ff3aa201ca8580ef11806f506d319e9d528672f5e695d8979 WatchSource:0}: Error finding container d7bc3eacfb0cf92ff3aa201ca8580ef11806f506d319e9d528672f5e695d8979: Status 404 returned error can't find the container with id d7bc3eacfb0cf92ff3aa201ca8580ef11806f506d319e9d528672f5e695d8979
Feb 17 15:01:31.622571 master-0 kubenswrapper[4167]: I0217 15:01:31.622492 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9r5rl" event={"ID":"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9","Type":"ContainerStarted","Data":"1a48fa419617a63ec8e2935cb2e257afe77ca02b6d759f71cc3cf2b3946d190c"}
Feb 17 15:01:31.623642 master-0 kubenswrapper[4167]: I0217 15:01:31.623564 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9nv95" event={"ID":"fb153362-0abb-4aad-8975-532f6e72d032","Type":"ContainerStarted","Data":"d7bc3eacfb0cf92ff3aa201ca8580ef11806f506d319e9d528672f5e695d8979"}
Feb 17 15:01:31.827451 master-0 kubenswrapper[4167]: I0217 15:01:31.827161 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bnllz"]
Feb 17 15:01:31.827752 master-0 kubenswrapper[4167]: I0217 15:01:31.827679 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:31.827824 master-0 kubenswrapper[4167]: E0217 15:01:31.827777 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:01:31.968879 master-0 kubenswrapper[4167]: I0217 15:01:31.968677 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:31.968879 master-0 kubenswrapper[4167]: I0217 15:01:31.968754 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7brbd\" (UniqueName: \"kubernetes.io/projected/fce9579e-7383-421e-95dd-8f8b786817f9-kube-api-access-7brbd\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:32.069412 master-0 kubenswrapper[4167]: I0217 15:01:32.069230 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:32.069412 master-0 kubenswrapper[4167]: I0217 15:01:32.069313 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7brbd\" (UniqueName: \"kubernetes.io/projected/fce9579e-7383-421e-95dd-8f8b786817f9-kube-api-access-7brbd\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:32.069781 master-0 kubenswrapper[4167]: E0217 15:01:32.069525 4167 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:01:32.069781 master-0 kubenswrapper[4167]: E0217 15:01:32.069668 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs podName:fce9579e-7383-421e-95dd-8f8b786817f9 nodeName:}" failed. No retries permitted until 2026-02-17 15:01:32.569639745 +0000 UTC m=+85.104304547 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs") pod "network-metrics-daemon-bnllz" (UID: "fce9579e-7383-421e-95dd-8f8b786817f9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:01:32.089187 master-0 kubenswrapper[4167]: I0217 15:01:32.088920 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7brbd\" (UniqueName: \"kubernetes.io/projected/fce9579e-7383-421e-95dd-8f8b786817f9-kube-api-access-7brbd\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:32.574425 master-0 kubenswrapper[4167]: I0217 15:01:32.574280 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:32.574753 master-0 kubenswrapper[4167]: E0217 15:01:32.574471 4167 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:01:32.574753 master-0 kubenswrapper[4167]: E0217 15:01:32.574557 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs podName:fce9579e-7383-421e-95dd-8f8b786817f9 nodeName:}"
failed. No retries permitted until 2026-02-17 15:01:33.574539116 +0000 UTC m=+86.109203938 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs") pod "network-metrics-daemon-bnllz" (UID: "fce9579e-7383-421e-95dd-8f8b786817f9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:01:33.583478 master-0 kubenswrapper[4167]: I0217 15:01:33.583364 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:33.584199 master-0 kubenswrapper[4167]: E0217 15:01:33.583602 4167 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:01:33.584199 master-0 kubenswrapper[4167]: E0217 15:01:33.583695 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs podName:fce9579e-7383-421e-95dd-8f8b786817f9 nodeName:}" failed. No retries permitted until 2026-02-17 15:01:35.583671404 +0000 UTC m=+88.118336226 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs") pod "network-metrics-daemon-bnllz" (UID: "fce9579e-7383-421e-95dd-8f8b786817f9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:01:33.864592 master-0 kubenswrapper[4167]: I0217 15:01:33.859743 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:33.864592 master-0 kubenswrapper[4167]: E0217 15:01:33.859891 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:01:34.633108 master-0 kubenswrapper[4167]: I0217 15:01:34.632947 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9nv95" event={"ID":"fb153362-0abb-4aad-8975-532f6e72d032","Type":"ContainerStarted","Data":"2f86c60a93c3453ced4f5b52ce187e665f2ac8baeed7a329b64029f9d992f515"}
Feb 17 15:01:35.599058 master-0 kubenswrapper[4167]: I0217 15:01:35.598982 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:35.599269 master-0 kubenswrapper[4167]: E0217 15:01:35.599146 4167 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:01:35.599269 master-0 kubenswrapper[4167]: E0217 15:01:35.599216 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs podName:fce9579e-7383-421e-95dd-8f8b786817f9 nodeName:}" failed. No retries permitted until 2026-02-17 15:01:39.599193136 +0000 UTC m=+92.133857938 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs") pod "network-metrics-daemon-bnllz" (UID: "fce9579e-7383-421e-95dd-8f8b786817f9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:01:35.637297 master-0 kubenswrapper[4167]: I0217 15:01:35.637235 4167 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="2f86c60a93c3453ced4f5b52ce187e665f2ac8baeed7a329b64029f9d992f515" exitCode=0
Feb 17 15:01:35.637297 master-0 kubenswrapper[4167]: I0217 15:01:35.637285 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9nv95" event={"ID":"fb153362-0abb-4aad-8975-532f6e72d032","Type":"ContainerDied","Data":"2f86c60a93c3453ced4f5b52ce187e665f2ac8baeed7a329b64029f9d992f515"}
Feb 17 15:01:35.857936 master-0 kubenswrapper[4167]: I0217 15:01:35.857780 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:35.858318 master-0 kubenswrapper[4167]: E0217 15:01:35.857955 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:01:35.874522 master-0 kubenswrapper[4167]: I0217 15:01:35.874377 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 17 15:01:37.857579 master-0 kubenswrapper[4167]: I0217 15:01:37.857509 4167 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:37.858258 master-0 kubenswrapper[4167]: E0217 15:01:37.857675 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:01:39.630858 master-0 kubenswrapper[4167]: I0217 15:01:39.630768 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:39.631355 master-0 kubenswrapper[4167]: E0217 15:01:39.631012 4167 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:01:39.631355 master-0 kubenswrapper[4167]: E0217 15:01:39.631136 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs podName:fce9579e-7383-421e-95dd-8f8b786817f9 nodeName:}" failed. No retries permitted until 2026-02-17 15:01:47.631106381 +0000 UTC m=+100.165771403 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs") pod "network-metrics-daemon-bnllz" (UID: "fce9579e-7383-421e-95dd-8f8b786817f9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:01:39.857137 master-0 kubenswrapper[4167]: I0217 15:01:39.857025 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:39.857442 master-0 kubenswrapper[4167]: E0217 15:01:39.857218 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:01:41.858037 master-0 kubenswrapper[4167]: I0217 15:01:41.857966 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:41.858713 master-0 kubenswrapper[4167]: E0217 15:01:41.858120 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:01:43.233736 master-0 kubenswrapper[4167]: I0217 15:01:43.233574 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=8.233532888 podStartE2EDuration="8.233532888s" podCreationTimestamp="2026-02-17 15:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:01:38.875872142 +0000 UTC m=+91.410536954" watchObservedRunningTime="2026-02-17 15:01:43.233532888 +0000 UTC m=+95.768197690"
Feb 17 15:01:43.234904 master-0 kubenswrapper[4167]: I0217 15:01:43.234155 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"]
Feb 17 15:01:43.234904 master-0 kubenswrapper[4167]: I0217 15:01:43.234483 4167 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:01:43.237079 master-0 kubenswrapper[4167]: I0217 15:01:43.237020 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 17 15:01:43.237488 master-0 kubenswrapper[4167]: I0217 15:01:43.237028 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 17 15:01:43.237756 master-0 kubenswrapper[4167]: I0217 15:01:43.237234 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 17 15:01:43.242036 master-0 kubenswrapper[4167]: I0217 15:01:43.241970 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 17 15:01:43.245372 master-0 kubenswrapper[4167]: I0217 15:01:43.245324 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 17 15:01:43.360188 master-0 kubenswrapper[4167]: I0217 15:01:43.360037 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:01:43.360537 master-0 kubenswrapper[4167]: I0217 15:01:43.360240 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:01:43.360537 master-0 kubenswrapper[4167]: I0217 15:01:43.360285 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh2m4\" (UniqueName: \"kubernetes.io/projected/31e31afc-79d5-46f4-9835-0fd11da9465f-kube-api-access-jh2m4\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:01:43.360537 master-0 kubenswrapper[4167]: I0217 15:01:43.360315 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31e31afc-79d5-46f4-9835-0fd11da9465f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:01:43.442101 master-0 kubenswrapper[4167]: I0217 15:01:43.442059 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4z5g9"]
Feb 17 15:01:43.442734 master-0 kubenswrapper[4167]: I0217 15:01:43.442715 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.444249 master-0 kubenswrapper[4167]: I0217 15:01:43.444221 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 17 15:01:43.444410 master-0 kubenswrapper[4167]: I0217 15:01:43.444379 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 17 15:01:43.460758 master-0 kubenswrapper[4167]: I0217 15:01:43.460717 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31e31afc-79d5-46f4-9835-0fd11da9465f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:01:43.460826 master-0 kubenswrapper[4167]: I0217 15:01:43.460765 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:01:43.460966 master-0 kubenswrapper[4167]: I0217 15:01:43.460933 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh2m4\" (UniqueName: \"kubernetes.io/projected/31e31afc-79d5-46f4-9835-0fd11da9465f-kube-api-access-jh2m4\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:01:43.461218 master-0 kubenswrapper[4167]: I0217 15:01:43.461192 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:01:43.461501 master-0 kubenswrapper[4167]: I0217 15:01:43.461473 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:01:43.461737 master-0 kubenswrapper[4167]: I0217 15:01:43.461714 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:01:43.464356 master-0 kubenswrapper[4167]: I0217 15:01:43.464324 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31e31afc-79d5-46f4-9835-0fd11da9465f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:01:43.475467 master-0 kubenswrapper[4167]: I0217 15:01:43.475288 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh2m4\" (UniqueName: \"kubernetes.io/projected/31e31afc-79d5-46f4-9835-0fd11da9465f-kube-api-access-jh2m4\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:01:43.555868 master-0 kubenswrapper[4167]: I0217 15:01:43.555727 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:01:43.561819 master-0 kubenswrapper[4167]: I0217 15:01:43.561775 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-systemd\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.561890 master-0 kubenswrapper[4167]: I0217 15:01:43.561825 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-log-socket\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.561890 master-0 kubenswrapper[4167]: I0217 15:01:43.561859 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-run-ovn-kubernetes\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.561950 master-0 kubenswrapper[4167]: I0217 15:01:43.561893 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-slash\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.561950 master-0 kubenswrapper[4167]: I0217 15:01:43.561926 4167 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-env-overrides\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562183 master-0 kubenswrapper[4167]: I0217 15:01:43.562092 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-kubelet\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562230 master-0 kubenswrapper[4167]: I0217 15:01:43.562206 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-run-netns\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562279 master-0 kubenswrapper[4167]: I0217 15:01:43.562254 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562411 master-0 kubenswrapper[4167]: I0217 15:01:43.562314 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-var-lib-openvswitch\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562447 master-0 kubenswrapper[4167]: I0217 15:01:43.562433 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbh8t\" (UniqueName: \"kubernetes.io/projected/08e1b8a0-751b-4568-8a73-f0ea3dadf709-kube-api-access-rbh8t\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562530 master-0 kubenswrapper[4167]: I0217 15:01:43.562511 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-systemd-units\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562564 master-0 kubenswrapper[4167]: I0217 15:01:43.562550 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-ovn\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562596 master-0 kubenswrapper[4167]: I0217 15:01:43.562582 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovn-node-metrics-cert\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562640 master-0 kubenswrapper[4167]: I0217 15:01:43.562616 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-etc-openvswitch\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562680 master-0 kubenswrapper[4167]: I0217 15:01:43.562657 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-cni-netd\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562709 master-0 kubenswrapper[4167]: I0217 15:01:43.562690 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-openvswitch\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562737 master-0 kubenswrapper[4167]: I0217 15:01:43.562722 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-node-log\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562843 master-0 kubenswrapper[4167]: I0217 15:01:43.562790 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovnkube-script-lib\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562878 master-0 kubenswrapper[4167]: I0217 15:01:43.562853 4167 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovnkube-config\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.562907 master-0 kubenswrapper[4167]: I0217 15:01:43.562893 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-cni-bin\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.663795 master-0 kubenswrapper[4167]: I0217 15:01:43.663714 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-openvswitch\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.663795 master-0 kubenswrapper[4167]: I0217 15:01:43.663783 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-cni-netd\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664061 master-0 kubenswrapper[4167]: I0217 15:01:43.663825 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-openvswitch\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664061 master-0 kubenswrapper[4167]: I0217 15:01:43.663838 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-node-log\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664061 master-0 kubenswrapper[4167]: I0217 15:01:43.663893 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovnkube-script-lib\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664061 master-0 kubenswrapper[4167]: I0217 15:01:43.663918 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovnkube-config\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664061 master-0 kubenswrapper[4167]: I0217 15:01:43.663916 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-node-log\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664061 master-0 kubenswrapper[4167]: I0217 15:01:43.663987 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-cni-netd\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664061 master-0 kubenswrapper[4167]: I0217 15:01:43.664051 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-cni-bin\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664340 master-0 kubenswrapper[4167]: I0217 15:01:43.664087 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-systemd\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664340 master-0 kubenswrapper[4167]: I0217 15:01:43.664107 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-log-socket\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664340 master-0 kubenswrapper[4167]: I0217 15:01:43.664132 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-run-ovn-kubernetes\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664340 master-0 kubenswrapper[4167]: I0217 15:01:43.664155 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-slash\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664340 master-0 kubenswrapper[4167]: I0217 15:01:43.664224 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-cni-bin\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664340 master-0 kubenswrapper[4167]: I0217 15:01:43.664241 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-log-socket\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664585 master-0 kubenswrapper[4167]: I0217 15:01:43.664339 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-systemd\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664585 master-0 kubenswrapper[4167]: I0217 15:01:43.664394 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-env-overrides\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664585 master-0 kubenswrapper[4167]: I0217 15:01:43.664498 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-kubelet\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664585 master-0 kubenswrapper[4167]: I0217 15:01:43.664545 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-run-netns\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664877 master-0 kubenswrapper[4167]: I0217 15:01:43.664580 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-slash\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664877 master-0 kubenswrapper[4167]: I0217 15:01:43.664633 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-kubelet\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664877 master-0 kubenswrapper[4167]: I0217 15:01:43.664656 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664877 master-0 kubenswrapper[4167]: I0217 15:01:43.664601 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:01:43.664877 master-0 kubenswrapper[4167]: I0217 15:01:43.664723 4167 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-var-lib-openvswitch\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.664877 master-0 kubenswrapper[4167]: I0217 15:01:43.664765 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-run-ovn-kubernetes\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.664877 master-0 kubenswrapper[4167]: I0217 15:01:43.664772 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbh8t\" (UniqueName: \"kubernetes.io/projected/08e1b8a0-751b-4568-8a73-f0ea3dadf709-kube-api-access-rbh8t\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.664877 master-0 kubenswrapper[4167]: I0217 15:01:43.664830 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-var-lib-openvswitch\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.664877 master-0 kubenswrapper[4167]: I0217 15:01:43.664772 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-run-netns\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.665862 master-0 kubenswrapper[4167]: I0217 15:01:43.664887 4167 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-systemd-units\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.665862 master-0 kubenswrapper[4167]: I0217 15:01:43.664917 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovn-node-metrics-cert\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.665862 master-0 kubenswrapper[4167]: I0217 15:01:43.664942 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-ovn\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.665862 master-0 kubenswrapper[4167]: I0217 15:01:43.664965 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-etc-openvswitch\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.665862 master-0 kubenswrapper[4167]: I0217 15:01:43.664965 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-systemd-units\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.665862 master-0 kubenswrapper[4167]: I0217 15:01:43.665014 4167 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovnkube-script-lib\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.665862 master-0 kubenswrapper[4167]: I0217 15:01:43.665030 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-etc-openvswitch\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.665862 master-0 kubenswrapper[4167]: I0217 15:01:43.665007 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-ovn\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.665862 master-0 kubenswrapper[4167]: I0217 15:01:43.665078 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovnkube-config\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.665862 master-0 kubenswrapper[4167]: I0217 15:01:43.665393 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-env-overrides\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.671751 master-0 kubenswrapper[4167]: I0217 15:01:43.671722 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovn-node-metrics-cert\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.684167 master-0 kubenswrapper[4167]: I0217 15:01:43.684132 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbh8t\" (UniqueName: \"kubernetes.io/projected/08e1b8a0-751b-4568-8a73-f0ea3dadf709-kube-api-access-rbh8t\") pod \"ovnkube-node-4z5g9\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") " pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.754384 master-0 kubenswrapper[4167]: I0217 15:01:43.754279 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:01:43.857998 master-0 kubenswrapper[4167]: I0217 15:01:43.857851 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:01:43.857998 master-0 kubenswrapper[4167]: E0217 15:01:43.857965 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9" Feb 17 15:01:44.476574 master-0 kubenswrapper[4167]: W0217 15:01:44.476344 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08e1b8a0_751b_4568_8a73_f0ea3dadf709.slice/crio-8b69c0b3c7fbfdbafc398bd01403bacf73eac4d046a3117ba213930fc148f175 WatchSource:0}: Error finding container 8b69c0b3c7fbfdbafc398bd01403bacf73eac4d046a3117ba213930fc148f175: Status 404 returned error can't find the container with id 8b69c0b3c7fbfdbafc398bd01403bacf73eac4d046a3117ba213930fc148f175 Feb 17 15:01:44.477467 master-0 kubenswrapper[4167]: W0217 15:01:44.477405 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31e31afc_79d5_46f4_9835_0fd11da9465f.slice/crio-c0026d8b6e87a23d662a3c94357c0b35295466aca75ebd69cf4fb6b87a87fe76 WatchSource:0}: Error finding container c0026d8b6e87a23d662a3c94357c0b35295466aca75ebd69cf4fb6b87a87fe76: Status 404 returned error can't find the container with id c0026d8b6e87a23d662a3c94357c0b35295466aca75ebd69cf4fb6b87a87fe76 Feb 17 15:01:44.574303 master-0 kubenswrapper[4167]: I0217 15:01:44.573927 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:01:44.574398 master-0 kubenswrapper[4167]: E0217 15:01:44.574138 4167 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 17 15:01:44.574398 master-0 kubenswrapper[4167]: E0217 15:01:44.574382 4167 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert podName:4be2df82-c77a-4d26-9498-fa3beea54b81 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:16.57436281 +0000 UTC m=+129.109027612 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert") pod "cluster-version-operator-76959b6567-v49tq" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81") : secret "cluster-version-operator-serving-cert" not found Feb 17 15:01:44.661828 master-0 kubenswrapper[4167]: I0217 15:01:44.661783 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9nv95" event={"ID":"fb153362-0abb-4aad-8975-532f6e72d032","Type":"ContainerStarted","Data":"ecd77d78fcca655bc8210302308e24b74646b466ebece2fff52e85f8b57c4842"} Feb 17 15:01:44.663175 master-0 kubenswrapper[4167]: I0217 15:01:44.663114 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerStarted","Data":"8b69c0b3c7fbfdbafc398bd01403bacf73eac4d046a3117ba213930fc148f175"} Feb 17 15:01:44.666039 master-0 kubenswrapper[4167]: I0217 15:01:44.665964 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9r5rl" event={"ID":"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9","Type":"ContainerStarted","Data":"b8501105ededa446f7def6f3713363f03927f0ad8e00dc9d6680d21c8fe86d29"} Feb 17 15:01:44.667417 master-0 kubenswrapper[4167]: I0217 15:01:44.667391 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" event={"ID":"31e31afc-79d5-46f4-9835-0fd11da9465f","Type":"ContainerStarted","Data":"2707bfc359ad25492b12c102b09ae9fdecf6e4a361e8296d8808f71e5c23dea1"} Feb 17 15:01:44.667417 master-0 kubenswrapper[4167]: I0217 15:01:44.667416 4167 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" event={"ID":"31e31afc-79d5-46f4-9835-0fd11da9465f","Type":"ContainerStarted","Data":"c0026d8b6e87a23d662a3c94357c0b35295466aca75ebd69cf4fb6b87a87fe76"} Feb 17 15:01:45.675469 master-0 kubenswrapper[4167]: I0217 15:01:45.675404 4167 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="ecd77d78fcca655bc8210302308e24b74646b466ebece2fff52e85f8b57c4842" exitCode=0 Feb 17 15:01:45.676131 master-0 kubenswrapper[4167]: I0217 15:01:45.676026 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9nv95" event={"ID":"fb153362-0abb-4aad-8975-532f6e72d032","Type":"ContainerDied","Data":"ecd77d78fcca655bc8210302308e24b74646b466ebece2fff52e85f8b57c4842"} Feb 17 15:01:45.693431 master-0 kubenswrapper[4167]: I0217 15:01:45.693372 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9r5rl" podStartSLOduration=2.335138013 podStartE2EDuration="15.693329379s" podCreationTimestamp="2026-02-17 15:01:30 +0000 UTC" firstStartedPulling="2026-02-17 15:01:31.187169596 +0000 UTC m=+83.721834428" lastFinishedPulling="2026-02-17 15:01:44.545360992 +0000 UTC m=+97.080025794" observedRunningTime="2026-02-17 15:01:44.6908202 +0000 UTC m=+97.225485002" watchObservedRunningTime="2026-02-17 15:01:45.693329379 +0000 UTC m=+98.227994181" Feb 17 15:01:45.857057 master-0 kubenswrapper[4167]: I0217 15:01:45.857002 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:01:45.857223 master-0 kubenswrapper[4167]: E0217 15:01:45.857157 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9" Feb 17 15:01:46.457886 master-0 kubenswrapper[4167]: I0217 15:01:46.457088 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-f25s7"] Feb 17 15:01:46.457886 master-0 kubenswrapper[4167]: I0217 15:01:46.457395 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:01:46.457886 master-0 kubenswrapper[4167]: E0217 15:01:46.457448 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0" Feb 17 15:01:46.592525 master-0 kubenswrapper[4167]: I0217 15:01:46.592475 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpwhf\" (UniqueName: \"kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf\") pod \"network-check-target-f25s7\" (UID: \"727f20b6-19c7-45eb-a803-6898ecaeffd0\") " pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:01:46.693243 master-0 kubenswrapper[4167]: I0217 15:01:46.693157 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpwhf\" (UniqueName: \"kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf\") pod \"network-check-target-f25s7\" (UID: \"727f20b6-19c7-45eb-a803-6898ecaeffd0\") " pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:01:46.708813 master-0 kubenswrapper[4167]: E0217 15:01:46.708705 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:01:46.708813 master-0 kubenswrapper[4167]: E0217 15:01:46.708743 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:01:46.708813 master-0 kubenswrapper[4167]: E0217 15:01:46.708763 4167 projected.go:194] Error preparing data for projected volume kube-api-access-bpwhf for pod openshift-network-diagnostics/network-check-target-f25s7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:01:46.709072 master-0 kubenswrapper[4167]: E0217 15:01:46.708833 4167 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf podName:727f20b6-19c7-45eb-a803-6898ecaeffd0 nodeName:}" failed. No retries permitted until 2026-02-17 15:01:47.208813448 +0000 UTC m=+99.743478270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bpwhf" (UniqueName: "kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf") pod "network-check-target-f25s7" (UID: "727f20b6-19c7-45eb-a803-6898ecaeffd0") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:01:47.299751 master-0 kubenswrapper[4167]: I0217 15:01:47.299693 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpwhf\" (UniqueName: \"kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf\") pod \"network-check-target-f25s7\" (UID: \"727f20b6-19c7-45eb-a803-6898ecaeffd0\") " pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:01:47.299903 master-0 kubenswrapper[4167]: E0217 15:01:47.299860 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:01:47.299903 master-0 kubenswrapper[4167]: E0217 15:01:47.299881 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:01:47.299903 master-0 kubenswrapper[4167]: E0217 15:01:47.299894 4167 projected.go:194] Error preparing data for projected volume kube-api-access-bpwhf for pod openshift-network-diagnostics/network-check-target-f25s7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] 
Feb 17 15:01:47.299999 master-0 kubenswrapper[4167]: E0217 15:01:47.299953 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf podName:727f20b6-19c7-45eb-a803-6898ecaeffd0 nodeName:}" failed. No retries permitted until 2026-02-17 15:01:48.299936774 +0000 UTC m=+100.834601576 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-bpwhf" (UniqueName: "kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf") pod "network-check-target-f25s7" (UID: "727f20b6-19c7-45eb-a803-6898ecaeffd0") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:01:47.684451 master-0 kubenswrapper[4167]: I0217 15:01:47.684345 4167 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="a8fe5731cc729bce660d47070861b2907343fcae8bee470838edf68c6e2b5e34" exitCode=0 Feb 17 15:01:47.684451 master-0 kubenswrapper[4167]: I0217 15:01:47.684411 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9nv95" event={"ID":"fb153362-0abb-4aad-8975-532f6e72d032","Type":"ContainerDied","Data":"a8fe5731cc729bce660d47070861b2907343fcae8bee470838edf68c6e2b5e34"} Feb 17 15:01:47.703066 master-0 kubenswrapper[4167]: I0217 15:01:47.703004 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:01:47.703761 master-0 kubenswrapper[4167]: E0217 15:01:47.703186 4167 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:01:47.703761 master-0 kubenswrapper[4167]: E0217 15:01:47.703262 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs podName:fce9579e-7383-421e-95dd-8f8b786817f9 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:03.703245347 +0000 UTC m=+116.237910149 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs") pod "network-metrics-daemon-bnllz" (UID: "fce9579e-7383-421e-95dd-8f8b786817f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:01:47.858146 master-0 kubenswrapper[4167]: I0217 15:01:47.858061 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:01:47.858500 master-0 kubenswrapper[4167]: I0217 15:01:47.858093 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:01:47.858500 master-0 kubenswrapper[4167]: E0217 15:01:47.858265 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9" Feb 17 15:01:47.858500 master-0 kubenswrapper[4167]: E0217 15:01:47.858354 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0" Feb 17 15:01:48.309903 master-0 kubenswrapper[4167]: I0217 15:01:48.309838 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpwhf\" (UniqueName: \"kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf\") pod \"network-check-target-f25s7\" (UID: \"727f20b6-19c7-45eb-a803-6898ecaeffd0\") " pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:01:48.310151 master-0 kubenswrapper[4167]: E0217 15:01:48.310070 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:01:48.310151 master-0 kubenswrapper[4167]: E0217 15:01:48.310129 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:01:48.310239 master-0 kubenswrapper[4167]: E0217 15:01:48.310153 4167 projected.go:194] Error preparing data for projected volume kube-api-access-bpwhf for pod openshift-network-diagnostics/network-check-target-f25s7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:01:48.310279 master-0 kubenswrapper[4167]: E0217 15:01:48.310257 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf podName:727f20b6-19c7-45eb-a803-6898ecaeffd0 nodeName:}" failed. No retries permitted until 2026-02-17 15:01:50.310229087 +0000 UTC m=+102.844893929 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bpwhf" (UniqueName: "kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf") pod "network-check-target-f25s7" (UID: "727f20b6-19c7-45eb-a803-6898ecaeffd0") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:01:48.917817 master-0 kubenswrapper[4167]: W0217 15:01:48.917756 4167 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 17 15:01:48.918564 master-0 kubenswrapper[4167]: I0217 15:01:48.918477 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 17 15:01:49.037445 master-0 kubenswrapper[4167]: I0217 15:01:49.037401 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-xwftw"] Feb 17 15:01:49.038237 master-0 kubenswrapper[4167]: I0217 15:01:49.038220 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:49.040411 master-0 kubenswrapper[4167]: I0217 15:01:49.040182 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 17 15:01:49.040411 master-0 kubenswrapper[4167]: I0217 15:01:49.040278 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 17 15:01:49.040601 master-0 kubenswrapper[4167]: I0217 15:01:49.040580 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 17 15:01:49.040835 master-0 kubenswrapper[4167]: I0217 15:01:49.040816 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 17 15:01:49.041685 master-0 kubenswrapper[4167]: I0217 15:01:49.041543 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 17 15:01:49.054033 master-0 kubenswrapper[4167]: I0217 15:01:49.053947 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=1.053928399 podStartE2EDuration="1.053928399s" podCreationTimestamp="2026-02-17 15:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:01:49.053269504 +0000 UTC m=+101.587934306" watchObservedRunningTime="2026-02-17 15:01:49.053928399 +0000 UTC m=+101.588593211"
Feb 17 15:01:49.117061 master-0 kubenswrapper[4167]: I0217 15:01:49.117022 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpq86\" (UniqueName: \"kubernetes.io/projected/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-kube-api-access-cpq86\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:49.117207 master-0 kubenswrapper[4167]: I0217 15:01:49.117134 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-env-overrides\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:49.117207 master-0 kubenswrapper[4167]: I0217 15:01:49.117160 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-ovnkube-identity-cm\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:49.117207 master-0 kubenswrapper[4167]: I0217 15:01:49.117183 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-webhook-cert\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:49.217788 master-0 kubenswrapper[4167]: I0217 15:01:49.217686 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-env-overrides\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:49.217788 master-0 kubenswrapper[4167]: I0217 15:01:49.217729 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-ovnkube-identity-cm\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:49.217788 master-0 kubenswrapper[4167]: I0217 15:01:49.217749 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-webhook-cert\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:49.217788 master-0 kubenswrapper[4167]: I0217 15:01:49.217765 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpq86\" (UniqueName: \"kubernetes.io/projected/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-kube-api-access-cpq86\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:49.218535 master-0 kubenswrapper[4167]: E0217 15:01:49.218514 4167 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found
Feb 17 15:01:49.218671 master-0 kubenswrapper[4167]: E0217 15:01:49.218563 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-webhook-cert podName:7c6b911d-8db2-48e8-bce9-d4bcde1f55a0 nodeName:}" failed. No retries permitted until 2026-02-17 15:01:49.718549805 +0000 UTC m=+102.253214597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-webhook-cert") pod "network-node-identity-xwftw" (UID: "7c6b911d-8db2-48e8-bce9-d4bcde1f55a0") : secret "network-node-identity-cert" not found
Feb 17 15:01:49.219330 master-0 kubenswrapper[4167]: I0217 15:01:49.219307 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-env-overrides\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:49.219413 master-0 kubenswrapper[4167]: I0217 15:01:49.219364 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-ovnkube-identity-cm\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:49.236688 master-0 kubenswrapper[4167]: I0217 15:01:49.236652 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpq86\" (UniqueName: \"kubernetes.io/projected/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-kube-api-access-cpq86\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:49.691752 master-0 kubenswrapper[4167]: I0217 15:01:49.691684 4167 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="921ee0fd3551059043b76ac59a478c682da16c6ee7724deecc9c4ab0ac65da91" exitCode=0
Feb 17 15:01:49.691752 master-0 kubenswrapper[4167]: I0217 15:01:49.691744 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9nv95" event={"ID":"fb153362-0abb-4aad-8975-532f6e72d032","Type":"ContainerDied","Data":"921ee0fd3551059043b76ac59a478c682da16c6ee7724deecc9c4ab0ac65da91"}
Feb 17 15:01:49.720746 master-0 kubenswrapper[4167]: I0217 15:01:49.720706 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-webhook-cert\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:49.726118 master-0 kubenswrapper[4167]: I0217 15:01:49.726085 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-webhook-cert\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:49.858158 master-0 kubenswrapper[4167]: I0217 15:01:49.858105 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:01:49.858376 master-0 kubenswrapper[4167]: I0217 15:01:49.858159 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:49.858376 master-0 kubenswrapper[4167]: E0217 15:01:49.858253 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:01:49.858580 master-0 kubenswrapper[4167]: E0217 15:01:49.858533 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:01:49.998817 master-0 kubenswrapper[4167]: I0217 15:01:49.998678 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:01:50.008440 master-0 kubenswrapper[4167]: W0217 15:01:50.008386 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c6b911d_8db2_48e8_bce9_d4bcde1f55a0.slice/crio-5c5c50866e3cb4c94d1db9f4dadfbc576e6ef20acac9999e34844dc18779f223 WatchSource:0}: Error finding container 5c5c50866e3cb4c94d1db9f4dadfbc576e6ef20acac9999e34844dc18779f223: Status 404 returned error can't find the container with id 5c5c50866e3cb4c94d1db9f4dadfbc576e6ef20acac9999e34844dc18779f223
Feb 17 15:01:50.326448 master-0 kubenswrapper[4167]: I0217 15:01:50.326286 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpwhf\" (UniqueName: \"kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf\") pod \"network-check-target-f25s7\" (UID: \"727f20b6-19c7-45eb-a803-6898ecaeffd0\") " pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:01:50.326652 master-0 kubenswrapper[4167]: E0217 15:01:50.326550 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 15:01:50.326652 master-0 kubenswrapper[4167]: E0217 15:01:50.326592 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 15:01:50.326652 master-0 kubenswrapper[4167]: E0217 15:01:50.326617 4167 projected.go:194] Error preparing data for projected volume kube-api-access-bpwhf for pod openshift-network-diagnostics/network-check-target-f25s7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 15:01:50.326748 master-0 kubenswrapper[4167]: E0217 15:01:50.326712 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf podName:727f20b6-19c7-45eb-a803-6898ecaeffd0 nodeName:}" failed. No retries permitted until 2026-02-17 15:01:54.326680381 +0000 UTC m=+106.861345223 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bpwhf" (UniqueName: "kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf") pod "network-check-target-f25s7" (UID: "727f20b6-19c7-45eb-a803-6898ecaeffd0") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 15:01:50.695077 master-0 kubenswrapper[4167]: I0217 15:01:50.694959 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-xwftw" event={"ID":"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0","Type":"ContainerStarted","Data":"5c5c50866e3cb4c94d1db9f4dadfbc576e6ef20acac9999e34844dc18779f223"}
Feb 17 15:01:51.857428 master-0 kubenswrapper[4167]: I0217 15:01:51.857338 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:01:51.859821 master-0 kubenswrapper[4167]: I0217 15:01:51.857371 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:51.859821 master-0 kubenswrapper[4167]: E0217 15:01:51.857576 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:01:51.859821 master-0 kubenswrapper[4167]: E0217 15:01:51.857670 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:01:53.857248 master-0 kubenswrapper[4167]: I0217 15:01:53.857197 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:01:53.858033 master-0 kubenswrapper[4167]: I0217 15:01:53.857292 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:53.858033 master-0 kubenswrapper[4167]: E0217 15:01:53.857315 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:01:53.858033 master-0 kubenswrapper[4167]: E0217 15:01:53.857518 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:01:54.359008 master-0 kubenswrapper[4167]: I0217 15:01:54.358914 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpwhf\" (UniqueName: \"kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf\") pod \"network-check-target-f25s7\" (UID: \"727f20b6-19c7-45eb-a803-6898ecaeffd0\") " pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:01:54.359519 master-0 kubenswrapper[4167]: E0217 15:01:54.359277 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 15:01:54.359519 master-0 kubenswrapper[4167]: E0217 15:01:54.359348 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 15:01:54.359519 master-0 kubenswrapper[4167]: E0217 15:01:54.359365 4167 projected.go:194] Error preparing data for projected volume kube-api-access-bpwhf for pod openshift-network-diagnostics/network-check-target-f25s7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 15:01:54.359651 master-0 kubenswrapper[4167]: E0217 15:01:54.359540 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf podName:727f20b6-19c7-45eb-a803-6898ecaeffd0 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:02.359511555 +0000 UTC m=+114.894176377 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-bpwhf" (UniqueName: "kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf") pod "network-check-target-f25s7" (UID: "727f20b6-19c7-45eb-a803-6898ecaeffd0") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 15:01:55.857994 master-0 kubenswrapper[4167]: I0217 15:01:55.857926 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:55.858876 master-0 kubenswrapper[4167]: E0217 15:01:55.858054 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:01:55.858876 master-0 kubenswrapper[4167]: I0217 15:01:55.858223 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:01:55.858876 master-0 kubenswrapper[4167]: E0217 15:01:55.858385 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:01:57.857698 master-0 kubenswrapper[4167]: I0217 15:01:57.857644 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:01:57.858225 master-0 kubenswrapper[4167]: I0217 15:01:57.857715 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:57.858225 master-0 kubenswrapper[4167]: E0217 15:01:57.857826 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:01:57.858225 master-0 kubenswrapper[4167]: E0217 15:01:57.857916 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:01:59.857730 master-0 kubenswrapper[4167]: I0217 15:01:59.857671 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:01:59.858405 master-0 kubenswrapper[4167]: I0217 15:01:59.857822 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:01:59.858405 master-0 kubenswrapper[4167]: E0217 15:01:59.857843 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:01:59.858405 master-0 kubenswrapper[4167]: E0217 15:01:59.858023 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:02:01.777919 master-0 kubenswrapper[4167]: I0217 15:02:01.777818 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Feb 17 15:02:01.857665 master-0 kubenswrapper[4167]: I0217 15:02:01.857604 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:02:01.857992 master-0 kubenswrapper[4167]: I0217 15:02:01.857611 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:01.857992 master-0 kubenswrapper[4167]: E0217 15:02:01.857884 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:02:01.857992 master-0 kubenswrapper[4167]: E0217 15:02:01.857910 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:02:02.430060 master-0 kubenswrapper[4167]: I0217 15:02:02.429882 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpwhf\" (UniqueName: \"kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf\") pod \"network-check-target-f25s7\" (UID: \"727f20b6-19c7-45eb-a803-6898ecaeffd0\") " pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:02:02.430553 master-0 kubenswrapper[4167]: E0217 15:02:02.430166 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 15:02:02.430553 master-0 kubenswrapper[4167]: E0217 15:02:02.430228 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 15:02:02.430553 master-0 kubenswrapper[4167]: E0217 15:02:02.430253 4167 projected.go:194] Error preparing data for projected volume kube-api-access-bpwhf for pod openshift-network-diagnostics/network-check-target-f25s7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 15:02:02.430553 master-0 kubenswrapper[4167]: E0217 15:02:02.430373 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf podName:727f20b6-19c7-45eb-a803-6898ecaeffd0 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:18.430334892 +0000 UTC m=+130.964999724 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-bpwhf" (UniqueName: "kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf") pod "network-check-target-f25s7" (UID: "727f20b6-19c7-45eb-a803-6898ecaeffd0") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 15:02:03.742384 master-0 kubenswrapper[4167]: I0217 15:02:03.742313 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:03.742990 master-0 kubenswrapper[4167]: E0217 15:02:03.742496 4167 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:02:03.742990 master-0 kubenswrapper[4167]: E0217 15:02:03.742555 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs podName:fce9579e-7383-421e-95dd-8f8b786817f9 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:35.742537154 +0000 UTC m=+148.277201956 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs") pod "network-metrics-daemon-bnllz" (UID: "fce9579e-7383-421e-95dd-8f8b786817f9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 15:02:03.857599 master-0 kubenswrapper[4167]: I0217 15:02:03.857506 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:02:03.857875 master-0 kubenswrapper[4167]: I0217 15:02:03.857604 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:03.857875 master-0 kubenswrapper[4167]: E0217 15:02:03.857651 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:02:03.857875 master-0 kubenswrapper[4167]: E0217 15:02:03.857811 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:02:05.857568 master-0 kubenswrapper[4167]: I0217 15:02:05.857486 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:02:05.858441 master-0 kubenswrapper[4167]: I0217 15:02:05.857575 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:05.858441 master-0 kubenswrapper[4167]: E0217 15:02:05.857730 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:02:05.858441 master-0 kubenswrapper[4167]: E0217 15:02:05.857894 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:02:06.679497 master-0 kubenswrapper[4167]: I0217 15:02:06.679380 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 17 15:02:07.747431 master-0 kubenswrapper[4167]: I0217 15:02:07.746894 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9nv95" event={"ID":"fb153362-0abb-4aad-8975-532f6e72d032","Type":"ContainerStarted","Data":"58ed4f24a4a8563ec3660532e43504b78aecdeaa56673d4b14d15679424a7551"}
Feb 17 15:02:07.749484 master-0 kubenswrapper[4167]: I0217 15:02:07.749394 4167 generic.go:334] "Generic (PLEG): container finished" podID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerID="1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c" exitCode=0
Feb 17 15:02:07.749602 master-0 kubenswrapper[4167]: I0217 15:02:07.749491 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerDied","Data":"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c"}
Feb 17 15:02:07.752151 master-0 kubenswrapper[4167]: I0217 15:02:07.752087 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" event={"ID":"31e31afc-79d5-46f4-9835-0fd11da9465f","Type":"ContainerStarted","Data":"a532d001ee07ff8e8b23a5da938b61904c6c24e314b07a548890529a67528fab"}
Feb 17 15:02:07.756876 master-0 kubenswrapper[4167]: I0217 15:02:07.756830 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-xwftw" event={"ID":"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0","Type":"ContainerStarted","Data":"55d3b1057ac7a6ad2c1bad42aa92f8880f4cec28c612f7db8db1627fa4374902"}
Feb 17 15:02:07.756876 master-0 kubenswrapper[4167]: I0217 15:02:07.756874 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-xwftw" event={"ID":"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0","Type":"ContainerStarted","Data":"c29a0045f98de0eaf7f82dbc4073260459c891479eaa5f5615d8d6bc94e6b3a2"}
Feb 17 15:02:07.794551 master-0 kubenswrapper[4167]: I0217 15:02:07.794406 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=1.7943886020000002 podStartE2EDuration="1.794388602s" podCreationTimestamp="2026-02-17 15:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:02:07.792623002 +0000 UTC m=+120.327287804" watchObservedRunningTime="2026-02-17 15:02:07.794388602 +0000 UTC m=+120.329053404"
Feb 17 15:02:07.831090 master-0 kubenswrapper[4167]: I0217 15:02:07.830969 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=6.830950208 podStartE2EDuration="6.830950208s" podCreationTimestamp="2026-02-17 15:02:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:02:07.805675784 +0000 UTC m=+120.340340586" watchObservedRunningTime="2026-02-17 15:02:07.830950208 +0000 UTC m=+120.365615010"
Feb 17 15:02:07.841056 master-0 kubenswrapper[4167]: I0217 15:02:07.840950 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-xwftw" podStartSLOduration=1.453267298 podStartE2EDuration="18.840912551s" podCreationTimestamp="2026-02-17 15:01:49 +0000 UTC" firstStartedPulling="2026-02-17 15:01:50.009751297 +0000 UTC m=+102.544416099" lastFinishedPulling="2026-02-17 15:02:07.39739655 +0000 UTC m=+119.932061352" observedRunningTime="2026-02-17 15:02:07.840643755 +0000 UTC m=+120.375308577" watchObservedRunningTime="2026-02-17 15:02:07.840912551 +0000 UTC m=+120.375577353"
Feb 17 15:02:07.857857 master-0 kubenswrapper[4167]: I0217 15:02:07.857597 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:07.857857 master-0 kubenswrapper[4167]: E0217 15:02:07.857703 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:02:07.857857 master-0 kubenswrapper[4167]: I0217 15:02:07.857709 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:02:07.858114 master-0 kubenswrapper[4167]: E0217 15:02:07.858083 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:02:08.668020 master-0 kubenswrapper[4167]: E0217 15:02:08.667678 4167 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Feb 17 15:02:08.764049 master-0 kubenswrapper[4167]: I0217 15:02:08.763977 4167 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="58ed4f24a4a8563ec3660532e43504b78aecdeaa56673d4b14d15679424a7551" exitCode=0
Feb 17 15:02:08.764853 master-0 kubenswrapper[4167]: I0217 15:02:08.764042 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9nv95" event={"ID":"fb153362-0abb-4aad-8975-532f6e72d032","Type":"ContainerDied","Data":"58ed4f24a4a8563ec3660532e43504b78aecdeaa56673d4b14d15679424a7551"}
Feb 17 15:02:08.772004 master-0 kubenswrapper[4167]: I0217 15:02:08.771951 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerStarted","Data":"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f"}
Feb 17 15:02:08.772095 master-0 kubenswrapper[4167]: I0217 15:02:08.772015 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerStarted","Data":"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101"}
Feb 17 15:02:08.772095 master-0 kubenswrapper[4167]: I0217 15:02:08.772038 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerStarted","Data":"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4"}
Feb 17 15:02:08.772095 master-0 kubenswrapper[4167]: I0217 15:02:08.772057 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerStarted","Data":"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95"}
Feb 17 15:02:08.772095 master-0 kubenswrapper[4167]: I0217 15:02:08.772075 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerStarted","Data":"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48"}
Feb 17 15:02:08.772095 master-0 kubenswrapper[4167]: I0217 15:02:08.772093 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerStarted","Data":"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0"}
Feb 17 15:02:08.789953 master-0 kubenswrapper[4167]: I0217 15:02:08.789866 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" podStartSLOduration=3.082298462 podStartE2EDuration="25.789778493s" podCreationTimestamp="2026-02-17 15:01:43 +0000 UTC" firstStartedPulling="2026-02-17 15:01:44.645761364 +0000 UTC m=+97.180426166" lastFinishedPulling="2026-02-17 15:02:07.353241395 +0000 UTC m=+119.887906197" observedRunningTime="2026-02-17 15:02:07.853973832 +0000 UTC m=+120.388638654" watchObservedRunningTime="2026-02-17 15:02:08.789778493 +0000 UTC m=+121.324443325"
Feb 17 15:02:08.891399 master-0 kubenswrapper[4167]: E0217 15:02:08.891312 4167 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 15:02:09.781505 master-0 kubenswrapper[4167]: I0217 15:02:09.781339 4167 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="b12f57b0bcc09e05fc64e8bd7a3e3439eada3a066486077463244aa7f48a9765" exitCode=0
Feb 17 15:02:09.781505 master-0 kubenswrapper[4167]: I0217 15:02:09.781406 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9nv95" event={"ID":"fb153362-0abb-4aad-8975-532f6e72d032","Type":"ContainerDied","Data":"b12f57b0bcc09e05fc64e8bd7a3e3439eada3a066486077463244aa7f48a9765"}
Feb 17 15:02:09.858143 master-0 kubenswrapper[4167]: I0217 15:02:09.858067 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:02:09.858447 master-0 kubenswrapper[4167]: E0217 15:02:09.858263 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:02:09.858659 master-0 kubenswrapper[4167]: I0217 15:02:09.858102 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:09.859035 master-0 kubenswrapper[4167]: E0217 15:02:09.858986 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:02:10.793959 master-0 kubenswrapper[4167]: I0217 15:02:10.793498 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerStarted","Data":"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd"}
Feb 17 15:02:10.801440 master-0 kubenswrapper[4167]: I0217 15:02:10.801388 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9nv95" event={"ID":"fb153362-0abb-4aad-8975-532f6e72d032","Type":"ContainerStarted","Data":"f7d4d9b8850c768968d5f350990b0b2da3d960b1070bb64bcf7e80c62e9f3c15"}
Feb 17 15:02:10.999306 master-0 kubenswrapper[4167]: I0217 15:02:10.999203 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9nv95" podStartSLOduration=3.923464612 podStartE2EDuration="39.999175303s" podCreationTimestamp="2026-02-17 15:01:31 +0000 UTC" firstStartedPulling="2026-02-17 15:01:31.373367523 +0000 UTC m=+83.908032325" lastFinishedPulling="2026-02-17 15:02:07.449078214 +0000 UTC m=+119.983743016" observedRunningTime="2026-02-17 15:02:10.828618226 +0000 UTC m=+123.363283088" watchObservedRunningTime="2026-02-17 15:02:10.999175303 +0000 UTC m=+123.533840145"
Feb 17 15:02:11.000018 master-0 kubenswrapper[4167]: I0217 15:02:10.999980 4167 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4z5g9"]
Feb 17 15:02:11.857990 master-0 kubenswrapper[4167]: I0217 15:02:11.857882 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:02:11.859081 master-0 kubenswrapper[4167]: I0217 15:02:11.858071 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:11.859081 master-0 kubenswrapper[4167]: E0217 15:02:11.858093 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:02:11.859081 master-0 kubenswrapper[4167]: E0217 15:02:11.858250 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:02:12.817359 master-0 kubenswrapper[4167]: I0217 15:02:12.817289 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerStarted","Data":"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2"}
Feb 17 15:02:12.817713 master-0 kubenswrapper[4167]: I0217 15:02:12.817487 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="ovn-controller" containerID="cri-o://41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0" gracePeriod=30
Feb 17 15:02:12.817713 master-0 kubenswrapper[4167]: I0217 15:02:12.817539 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4" gracePeriod=30
Feb 17 15:02:12.817713 master-0 kubenswrapper[4167]: I0217 15:02:12.817582 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="sbdb" containerID="cri-o://d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd" gracePeriod=30
Feb 17 15:02:12.817713 master-0 kubenswrapper[4167]: I0217 15:02:12.817643 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="nbdb" containerID="cri-o://19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f" gracePeriod=30
Feb 17 15:02:12.817713 master-0 kubenswrapper[4167]: I0217 15:02:12.817649 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="kube-rbac-proxy-node" containerID="cri-o://930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95" gracePeriod=30
Feb 17 15:02:12.818018 master-0 kubenswrapper[4167]: I0217 15:02:12.817668 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="ovn-acl-logging" containerID="cri-o://c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48" gracePeriod=30
Feb 17 15:02:12.824054 master-0 kubenswrapper[4167]: I0217 15:02:12.818220 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="northd" containerID="cri-o://117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101" gracePeriod=30
Feb 17 15:02:12.824054 master-0 kubenswrapper[4167]: I0217 15:02:12.818589 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:02:12.824054 master-0 kubenswrapper[4167]: I0217 15:02:12.821442 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:02:12.824054 master-0 kubenswrapper[4167]: I0217 15:02:12.821589 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:02:12.827053 master-0 kubenswrapper[4167]: E0217 15:02:12.825636 4167 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Feb 17 15:02:12.833306 master-0 kubenswrapper[4167]: E0217 15:02:12.831882 4167 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Feb 17 15:02:12.835967 master-0 kubenswrapper[4167]: E0217 15:02:12.835186 4167 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Feb 17 15:02:12.835967 master-0 kubenswrapper[4167]: E0217 15:02:12.835266 4167 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="sbdb"
Feb 17 15:02:12.857155 master-0 kubenswrapper[4167]: I0217 15:02:12.853015 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:02:12.867869 master-0 kubenswrapper[4167]: I0217 15:02:12.867597 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="ovnkube-controller" containerID="cri-o://da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2" gracePeriod=30
Feb 17 15:02:12.898181 master-0 kubenswrapper[4167]: I0217 15:02:12.897863 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" podStartSLOduration=7.044327037 podStartE2EDuration="29.897843708s" podCreationTimestamp="2026-02-17 15:01:43 +0000 UTC" firstStartedPulling="2026-02-17 15:01:44.477836045 +0000 UTC m=+97.012500847" lastFinishedPulling="2026-02-17 15:02:07.331352716 +0000 UTC m=+119.866017518" observedRunningTime="2026-02-17 15:02:12.855031882 +0000 UTC m=+125.389696744" watchObservedRunningTime="2026-02-17 15:02:12.897843708 +0000 UTC m=+125.432508510"
Feb 17 15:02:13.253258 master-0 kubenswrapper[4167]: I0217 15:02:13.253200 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4z5g9_08e1b8a0-751b-4568-8a73-f0ea3dadf709/ovnkube-controller/0.log"
Feb 17 15:02:13.255777 master-0 kubenswrapper[4167]: I0217 15:02:13.255726 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4z5g9_08e1b8a0-751b-4568-8a73-f0ea3dadf709/kube-rbac-proxy-ovn-metrics/0.log"
Feb 17 15:02:13.256616 master-0 kubenswrapper[4167]: I0217 15:02:13.256585 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4z5g9_08e1b8a0-751b-4568-8a73-f0ea3dadf709/kube-rbac-proxy-node/0.log"
Feb 17 15:02:13.257407 master-0 kubenswrapper[4167]: I0217 15:02:13.257263 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4z5g9_08e1b8a0-751b-4568-8a73-f0ea3dadf709/ovn-acl-logging/0.log"
Feb 17 15:02:13.258161 master-0 kubenswrapper[4167]: I0217 15:02:13.258073 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4z5g9_08e1b8a0-751b-4568-8a73-f0ea3dadf709/ovn-controller/0.log"
Feb 17 15:02:13.259146 master-0 kubenswrapper[4167]: I0217 15:02:13.258748 4167 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9"
Feb 17 15:02:13.339088 master-0 kubenswrapper[4167]: I0217 15:02:13.338986 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-cni-netd\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339088 master-0 kubenswrapper[4167]: I0217 15:02:13.339072 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovnkube-script-lib\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339088 master-0 kubenswrapper[4167]: I0217 15:02:13.339094 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-slash\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339088 master-0 kubenswrapper[4167]: I0217 15:02:13.339108 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-kubelet\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339122 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-node-log\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339138 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovnkube-config\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339155 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-var-lib-openvswitch\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339168 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-openvswitch\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339185 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-env-overrides\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339200 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-systemd\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339221 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-var-lib-cni-networks-ovn-kubernetes\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339237 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-etc-openvswitch\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339257 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-run-netns\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339276 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-ovn\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339295 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovn-node-metrics-cert\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339310 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-systemd-units\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339324 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-cni-bin\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339341 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbh8t\" (UniqueName: \"kubernetes.io/projected/08e1b8a0-751b-4568-8a73-f0ea3dadf709-kube-api-access-rbh8t\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339355 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-run-ovn-kubernetes\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339368 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-log-socket\") pod \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\" (UID: \"08e1b8a0-751b-4568-8a73-f0ea3dadf709\") "
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339564 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-log-socket" (OuterVolumeSpecName: "log-socket") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.339595 master-0 kubenswrapper[4167]: I0217 15:02:13.339593 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.339743 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.339845 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.339898 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.340052 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.340376 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.340442 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-slash" (OuterVolumeSpecName: "host-slash") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.340516 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.340517 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.340578 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.340619 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.340609 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-node-log" (OuterVolumeSpecName: "node-log") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.340665 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.340707 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.340731 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.341066 master-0 kubenswrapper[4167]: I0217 15:02:13.340727 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.344615 master-0 kubenswrapper[4167]: I0217 15:02:13.344546 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:02:13.345178 master-0 kubenswrapper[4167]: I0217 15:02:13.345114 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08e1b8a0-751b-4568-8a73-f0ea3dadf709-kube-api-access-rbh8t" (OuterVolumeSpecName: "kube-api-access-rbh8t") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "kube-api-access-rbh8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:02:13.349640 master-0 kubenswrapper[4167]: I0217 15:02:13.349544 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "08e1b8a0-751b-4568-8a73-f0ea3dadf709" (UID: "08e1b8a0-751b-4568-8a73-f0ea3dadf709"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:02:13.426231 master-0 kubenswrapper[4167]: I0217 15:02:13.426154 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vdgrn"]
Feb 17 15:02:13.426482 master-0 kubenswrapper[4167]: E0217 15:02:13.426355 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="kubecfg-setup"
Feb 17 15:02:13.426482 master-0 kubenswrapper[4167]: I0217 15:02:13.426379 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="kubecfg-setup"
Feb 17 15:02:13.426482 master-0 kubenswrapper[4167]: E0217 15:02:13.426392 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="sbdb"
Feb 17 15:02:13.426482 master-0 kubenswrapper[4167]: I0217 15:02:13.426400 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="sbdb"
Feb 17 15:02:13.426482 master-0 kubenswrapper[4167]: E0217 15:02:13.426410 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="ovn-controller"
Feb 17 15:02:13.426482 master-0 kubenswrapper[4167]: I0217 15:02:13.426419 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="ovn-controller"
Feb 17 15:02:13.426482 master-0 kubenswrapper[4167]: E0217 15:02:13.426430 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="ovn-acl-logging"
Feb 17 15:02:13.426482 master-0 kubenswrapper[4167]: I0217 15:02:13.426438 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="ovn-acl-logging"
Feb 17 15:02:13.426482 master-0 kubenswrapper[4167]: E0217 15:02:13.426446 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="kube-rbac-proxy-node"
Feb 17 15:02:13.426482 master-0 kubenswrapper[4167]: I0217 15:02:13.426475 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="kube-rbac-proxy-node"
Feb 17 15:02:13.426482 master-0 kubenswrapper[4167]: E0217 15:02:13.426483 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="ovnkube-controller"
Feb 17 15:02:13.426482 master-0 kubenswrapper[4167]: I0217 15:02:13.426492 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="ovnkube-controller"
Feb 17 15:02:13.426877 master-0 kubenswrapper[4167]: E0217 15:02:13.426501 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="kube-rbac-proxy-ovn-metrics"
Feb 17 15:02:13.426877 master-0 kubenswrapper[4167]: I0217 15:02:13.426509 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="kube-rbac-proxy-ovn-metrics"
Feb 17 15:02:13.426877 master-0 kubenswrapper[4167]: E0217 15:02:13.426518 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="northd"
Feb 17 15:02:13.426877 master-0 kubenswrapper[4167]: I0217 15:02:13.426525 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="northd"
Feb 17 15:02:13.426877 master-0 kubenswrapper[4167]: E0217 15:02:13.426534 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="nbdb"
Feb 17 15:02:13.426877 master-0 kubenswrapper[4167]: I0217 15:02:13.426541 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="nbdb"
Feb 17 15:02:13.426877 master-0 kubenswrapper[4167]: I0217 15:02:13.426587 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="ovn-acl-logging"
Feb 17 15:02:13.426877 master-0 kubenswrapper[4167]: I0217 15:02:13.426599 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="ovnkube-controller"
Feb 17 15:02:13.426877 master-0 kubenswrapper[4167]: I0217 15:02:13.426609 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="ovn-controller"
Feb 17 15:02:13.426877 master-0 kubenswrapper[4167]: I0217 15:02:13.426617 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="sbdb"
Feb 17 15:02:13.426877 master-0 kubenswrapper[4167]: I0217 15:02:13.426625 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="nbdb"
Feb 17 15:02:13.426877 master-0 kubenswrapper[4167]: I0217 15:02:13.426633 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="kube-rbac-proxy-node"
Feb 17 15:02:13.426877 master-0 kubenswrapper[4167]: I0217 15:02:13.426640 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="kube-rbac-proxy-ovn-metrics"
Feb 17 15:02:13.426877 master-0 kubenswrapper[4167]: I0217 15:02:13.426648 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerName="northd"
Feb 17 15:02:13.428044 master-0 kubenswrapper[4167]: I0217 15:02:13.428013 4167 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.439983 master-0 kubenswrapper[4167]: I0217 15:02:13.439890 4167 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.439983 master-0 kubenswrapper[4167]: I0217 15:02:13.439947 4167 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-slash\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.439983 master-0 kubenswrapper[4167]: I0217 15:02:13.439967 4167 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-kubelet\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.439983 master-0 kubenswrapper[4167]: I0217 15:02:13.439986 4167 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440010 4167 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440029 4167 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-node-log\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440041 4167 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440053 4167 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/08e1b8a0-751b-4568-8a73-f0ea3dadf709-env-overrides\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440065 4167 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440077 4167 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-systemd\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440092 4167 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440106 4167 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440118 4167 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-run-netns\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440134 4167 reconciler_common.go:293] "Volume detached 
for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440162 4167 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/08e1b8a0-751b-4568-8a73-f0ea3dadf709-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440181 4167 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbh8t\" (UniqueName: \"kubernetes.io/projected/08e1b8a0-751b-4568-8a73-f0ea3dadf709-kube-api-access-rbh8t\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440197 4167 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-systemd-units\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440213 4167 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440225 4167 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-log-socket\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.440363 master-0 kubenswrapper[4167]: I0217 15:02:13.440237 4167 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08e1b8a0-751b-4568-8a73-f0ea3dadf709-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Feb 17 15:02:13.541652 master-0 kubenswrapper[4167]: I0217 15:02:13.541439 4167 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-ovn\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.541652 master-0 kubenswrapper[4167]: I0217 15:02:13.541524 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-slash\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.541652 master-0 kubenswrapper[4167]: I0217 15:02:13.541560 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-script-lib\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.541652 master-0 kubenswrapper[4167]: I0217 15:02:13.541614 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-log-socket\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.541652 master-0 kubenswrapper[4167]: I0217 15:02:13.541648 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a905fb6-17d4-413b-9107-859c804ce906-ovn-node-metrics-cert\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542034 master-0 
kubenswrapper[4167]: I0217 15:02:13.541682 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-config\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542034 master-0 kubenswrapper[4167]: I0217 15:02:13.541706 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-var-lib-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542034 master-0 kubenswrapper[4167]: I0217 15:02:13.541730 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-kubelet\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542034 master-0 kubenswrapper[4167]: I0217 15:02:13.541771 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542034 master-0 kubenswrapper[4167]: I0217 15:02:13.541794 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542034 master-0 kubenswrapper[4167]: I0217 15:02:13.541818 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-systemd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542034 master-0 kubenswrapper[4167]: I0217 15:02:13.541844 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-etc-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542034 master-0 kubenswrapper[4167]: I0217 15:02:13.541882 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-netns\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542034 master-0 kubenswrapper[4167]: I0217 15:02:13.541903 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-netd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542034 master-0 kubenswrapper[4167]: I0217 15:02:13.541944 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-bin\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542368 master-0 kubenswrapper[4167]: I0217 15:02:13.542040 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542368 master-0 kubenswrapper[4167]: I0217 15:02:13.542120 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-node-log\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542368 master-0 kubenswrapper[4167]: I0217 15:02:13.542174 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-systemd-units\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542368 master-0 kubenswrapper[4167]: I0217 15:02:13.542222 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-env-overrides\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.542368 master-0 kubenswrapper[4167]: I0217 15:02:13.542255 4167 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgs5v\" (UniqueName: \"kubernetes.io/projected/9a905fb6-17d4-413b-9107-859c804ce906-kube-api-access-mgs5v\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.642933 master-0 kubenswrapper[4167]: I0217 15:02:13.642853 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.642933 master-0 kubenswrapper[4167]: I0217 15:02:13.642916 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-systemd-units\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643164 master-0 kubenswrapper[4167]: I0217 15:02:13.642949 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-node-log\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643164 master-0 kubenswrapper[4167]: I0217 15:02:13.643022 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-systemd-units\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643164 master-0 kubenswrapper[4167]: I0217 15:02:13.643081 4167 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-env-overrides\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643164 master-0 kubenswrapper[4167]: I0217 15:02:13.643111 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgs5v\" (UniqueName: \"kubernetes.io/projected/9a905fb6-17d4-413b-9107-859c804ce906-kube-api-access-mgs5v\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643391 master-0 kubenswrapper[4167]: I0217 15:02:13.643217 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-node-log\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643391 master-0 kubenswrapper[4167]: I0217 15:02:13.643289 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643391 master-0 kubenswrapper[4167]: I0217 15:02:13.643345 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-ovn\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643391 master-0 kubenswrapper[4167]: I0217 15:02:13.643385 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-slash\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643618 master-0 kubenswrapper[4167]: I0217 15:02:13.643416 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-script-lib\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643618 master-0 kubenswrapper[4167]: I0217 15:02:13.643501 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-slash\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643618 master-0 kubenswrapper[4167]: I0217 15:02:13.643545 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a905fb6-17d4-413b-9107-859c804ce906-ovn-node-metrics-cert\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643618 master-0 kubenswrapper[4167]: I0217 15:02:13.643579 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-log-socket\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643810 master-0 kubenswrapper[4167]: I0217 15:02:13.643619 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-ovn\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643810 master-0 kubenswrapper[4167]: I0217 15:02:13.643624 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-config\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643810 master-0 kubenswrapper[4167]: I0217 15:02:13.643671 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-log-socket\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643810 master-0 kubenswrapper[4167]: I0217 15:02:13.643705 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-kubelet\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643810 master-0 kubenswrapper[4167]: I0217 15:02:13.643741 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-var-lib-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643810 master-0 kubenswrapper[4167]: I0217 15:02:13.643778 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.643810 master-0 kubenswrapper[4167]: I0217 15:02:13.643812 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644138 master-0 kubenswrapper[4167]: I0217 15:02:13.643852 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-systemd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644138 master-0 kubenswrapper[4167]: I0217 15:02:13.643888 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-etc-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644138 master-0 kubenswrapper[4167]: I0217 15:02:13.643939 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-netns\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644138 master-0 kubenswrapper[4167]: I0217 15:02:13.643976 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-netd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644138 master-0 kubenswrapper[4167]: I0217 15:02:13.644036 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-bin\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644138 master-0 kubenswrapper[4167]: I0217 15:02:13.644122 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-bin\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644425 master-0 kubenswrapper[4167]: I0217 15:02:13.644188 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-netd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644425 master-0 kubenswrapper[4167]: I0217 15:02:13.644240 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644425 master-0 kubenswrapper[4167]: I0217 15:02:13.644290 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-systemd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644425 master-0 kubenswrapper[4167]: I0217 15:02:13.644325 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-var-lib-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644425 master-0 kubenswrapper[4167]: I0217 15:02:13.644332 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-kubelet\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644425 master-0 kubenswrapper[4167]: I0217 15:02:13.644355 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-script-lib\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644768 master-0 kubenswrapper[4167]: I0217 15:02:13.644444 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-etc-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644768 master-0 kubenswrapper[4167]: I0217 15:02:13.644534 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644927 master-0 kubenswrapper[4167]: I0217 15:02:13.644821 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-netns\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.644987 master-0 kubenswrapper[4167]: I0217 15:02:13.644936 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-config\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.645985 master-0 kubenswrapper[4167]: I0217 15:02:13.645934 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-env-overrides\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.647915 master-0 kubenswrapper[4167]: I0217 15:02:13.647856 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a905fb6-17d4-413b-9107-859c804ce906-ovn-node-metrics-cert\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.677349 master-0 kubenswrapper[4167]: I0217 15:02:13.677277 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgs5v\" (UniqueName: 
\"kubernetes.io/projected/9a905fb6-17d4-413b-9107-859c804ce906-kube-api-access-mgs5v\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.747871 master-0 kubenswrapper[4167]: I0217 15:02:13.747765 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:13.763925 master-0 kubenswrapper[4167]: W0217 15:02:13.763882 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a905fb6_17d4_413b_9107_859c804ce906.slice/crio-4a7917f93b759157396676df5270d9f55ac3fb5ce7081908f3a79c2dd1fbffdd WatchSource:0}: Error finding container 4a7917f93b759157396676df5270d9f55ac3fb5ce7081908f3a79c2dd1fbffdd: Status 404 returned error can't find the container with id 4a7917f93b759157396676df5270d9f55ac3fb5ce7081908f3a79c2dd1fbffdd Feb 17 15:02:13.826269 master-0 kubenswrapper[4167]: I0217 15:02:13.826228 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4z5g9_08e1b8a0-751b-4568-8a73-f0ea3dadf709/ovnkube-controller/0.log" Feb 17 15:02:13.828494 master-0 kubenswrapper[4167]: I0217 15:02:13.828437 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4z5g9_08e1b8a0-751b-4568-8a73-f0ea3dadf709/kube-rbac-proxy-ovn-metrics/0.log" Feb 17 15:02:13.829263 master-0 kubenswrapper[4167]: I0217 15:02:13.829211 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4z5g9_08e1b8a0-751b-4568-8a73-f0ea3dadf709/kube-rbac-proxy-node/0.log" Feb 17 15:02:13.830011 master-0 kubenswrapper[4167]: I0217 15:02:13.829983 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4z5g9_08e1b8a0-751b-4568-8a73-f0ea3dadf709/ovn-acl-logging/0.log" Feb 17 15:02:13.831187 master-0 
kubenswrapper[4167]: I0217 15:02:13.831139 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4z5g9_08e1b8a0-751b-4568-8a73-f0ea3dadf709/ovn-controller/0.log" Feb 17 15:02:13.831800 master-0 kubenswrapper[4167]: I0217 15:02:13.831763 4167 generic.go:334] "Generic (PLEG): container finished" podID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerID="da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2" exitCode=143 Feb 17 15:02:13.831944 master-0 kubenswrapper[4167]: I0217 15:02:13.831919 4167 generic.go:334] "Generic (PLEG): container finished" podID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerID="d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd" exitCode=0 Feb 17 15:02:13.832061 master-0 kubenswrapper[4167]: I0217 15:02:13.832038 4167 generic.go:334] "Generic (PLEG): container finished" podID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerID="19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f" exitCode=0 Feb 17 15:02:13.832166 master-0 kubenswrapper[4167]: I0217 15:02:13.832143 4167 generic.go:334] "Generic (PLEG): container finished" podID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerID="117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101" exitCode=0 Feb 17 15:02:13.832268 master-0 kubenswrapper[4167]: I0217 15:02:13.832248 4167 generic.go:334] "Generic (PLEG): container finished" podID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerID="abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4" exitCode=143 Feb 17 15:02:13.832405 master-0 kubenswrapper[4167]: I0217 15:02:13.832382 4167 generic.go:334] "Generic (PLEG): container finished" podID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerID="930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95" exitCode=143 Feb 17 15:02:13.832551 master-0 kubenswrapper[4167]: I0217 15:02:13.832527 4167 generic.go:334] "Generic (PLEG): container finished" 
podID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerID="c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48" exitCode=143 Feb 17 15:02:13.832695 master-0 kubenswrapper[4167]: I0217 15:02:13.832672 4167 generic.go:334] "Generic (PLEG): container finished" podID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" containerID="41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0" exitCode=143 Feb 17 15:02:13.832861 master-0 kubenswrapper[4167]: I0217 15:02:13.831925 4167 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" Feb 17 15:02:13.833060 master-0 kubenswrapper[4167]: I0217 15:02:13.831826 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerDied","Data":"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2"} Feb 17 15:02:13.833152 master-0 kubenswrapper[4167]: I0217 15:02:13.833090 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerDied","Data":"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd"} Feb 17 15:02:13.833152 master-0 kubenswrapper[4167]: I0217 15:02:13.833117 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerDied","Data":"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f"} Feb 17 15:02:13.833152 master-0 kubenswrapper[4167]: I0217 15:02:13.833140 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerDied","Data":"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101"} Feb 17 15:02:13.833360 master-0 kubenswrapper[4167]: I0217 
15:02:13.833161 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerDied","Data":"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4"} Feb 17 15:02:13.833360 master-0 kubenswrapper[4167]: I0217 15:02:13.833180 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerDied","Data":"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95"} Feb 17 15:02:13.833511 master-0 kubenswrapper[4167]: I0217 15:02:13.833340 4167 scope.go:117] "RemoveContainer" containerID="da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2" Feb 17 15:02:13.833511 master-0 kubenswrapper[4167]: I0217 15:02:13.833199 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48"} Feb 17 15:02:13.833511 master-0 kubenswrapper[4167]: I0217 15:02:13.833390 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0"} Feb 17 15:02:13.833511 master-0 kubenswrapper[4167]: I0217 15:02:13.833404 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c"} Feb 17 15:02:13.833511 master-0 kubenswrapper[4167]: I0217 15:02:13.833420 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerDied","Data":"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48"} Feb 17 15:02:13.833511 master-0 kubenswrapper[4167]: I0217 15:02:13.833437 4167 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2"} Feb 17 15:02:13.833511 master-0 kubenswrapper[4167]: I0217 15:02:13.833451 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd"} Feb 17 15:02:13.833511 master-0 kubenswrapper[4167]: I0217 15:02:13.833489 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f"} Feb 17 15:02:13.833511 master-0 kubenswrapper[4167]: I0217 15:02:13.833502 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101"} Feb 17 15:02:13.833511 master-0 kubenswrapper[4167]: I0217 15:02:13.833513 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4"} Feb 17 15:02:13.833511 master-0 kubenswrapper[4167]: I0217 15:02:13.833524 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833537 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833548 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833558 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833573 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerDied","Data":"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833590 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833603 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833614 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833626 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833637 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4"} Feb 
17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833648 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833659 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833670 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833682 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833696 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4z5g9" event={"ID":"08e1b8a0-751b-4568-8a73-f0ea3dadf709","Type":"ContainerDied","Data":"8b69c0b3c7fbfdbafc398bd01403bacf73eac4d046a3117ba213930fc148f175"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833712 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833725 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833737 4167 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833748 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833759 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833769 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833780 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833790 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833801 4167 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c"} Feb 17 15:02:13.834053 master-0 kubenswrapper[4167]: I0217 15:02:13.833985 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" 
event={"ID":"9a905fb6-17d4-413b-9107-859c804ce906","Type":"ContainerStarted","Data":"4a7917f93b759157396676df5270d9f55ac3fb5ce7081908f3a79c2dd1fbffdd"} Feb 17 15:02:13.849256 master-0 kubenswrapper[4167]: I0217 15:02:13.849213 4167 scope.go:117] "RemoveContainer" containerID="d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd" Feb 17 15:02:13.857140 master-0 kubenswrapper[4167]: I0217 15:02:13.857072 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:02:13.857140 master-0 kubenswrapper[4167]: I0217 15:02:13.857128 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:02:13.857394 master-0 kubenswrapper[4167]: E0217 15:02:13.857242 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0" Feb 17 15:02:13.857480 master-0 kubenswrapper[4167]: E0217 15:02:13.857403 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9" Feb 17 15:02:13.867079 master-0 kubenswrapper[4167]: I0217 15:02:13.867028 4167 scope.go:117] "RemoveContainer" containerID="19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f" Feb 17 15:02:13.872088 master-0 kubenswrapper[4167]: I0217 15:02:13.871996 4167 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4z5g9"] Feb 17 15:02:13.878257 master-0 kubenswrapper[4167]: I0217 15:02:13.878206 4167 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4z5g9"] Feb 17 15:02:13.892757 master-0 kubenswrapper[4167]: E0217 15:02:13.892684 4167 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 15:02:13.935501 master-0 kubenswrapper[4167]: I0217 15:02:13.935435 4167 scope.go:117] "RemoveContainer" containerID="117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101" Feb 17 15:02:13.949068 master-0 kubenswrapper[4167]: I0217 15:02:13.948773 4167 scope.go:117] "RemoveContainer" containerID="abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4" Feb 17 15:02:13.960631 master-0 kubenswrapper[4167]: I0217 15:02:13.960586 4167 scope.go:117] "RemoveContainer" containerID="930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95" Feb 17 15:02:13.973390 master-0 kubenswrapper[4167]: I0217 15:02:13.973357 4167 scope.go:117] "RemoveContainer" containerID="c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48" Feb 17 15:02:13.982446 master-0 kubenswrapper[4167]: I0217 15:02:13.982394 4167 scope.go:117] "RemoveContainer" containerID="41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0" Feb 17 15:02:13.993942 master-0 kubenswrapper[4167]: I0217 
15:02:13.993907 4167 scope.go:117] "RemoveContainer" containerID="1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c" Feb 17 15:02:14.006739 master-0 kubenswrapper[4167]: I0217 15:02:14.006706 4167 scope.go:117] "RemoveContainer" containerID="da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2" Feb 17 15:02:14.007029 master-0 kubenswrapper[4167]: E0217 15:02:14.006994 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2\": container with ID starting with da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2 not found: ID does not exist" containerID="da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2" Feb 17 15:02:14.007067 master-0 kubenswrapper[4167]: I0217 15:02:14.007030 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2"} err="failed to get container status \"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2\": rpc error: code = NotFound desc = could not find container \"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2\": container with ID starting with da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2 not found: ID does not exist" Feb 17 15:02:14.007067 master-0 kubenswrapper[4167]: I0217 15:02:14.007057 4167 scope.go:117] "RemoveContainer" containerID="d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd" Feb 17 15:02:14.007351 master-0 kubenswrapper[4167]: E0217 15:02:14.007317 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd\": container with ID starting with d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd not found: ID does not exist" 
containerID="d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd" Feb 17 15:02:14.007393 master-0 kubenswrapper[4167]: I0217 15:02:14.007346 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd"} err="failed to get container status \"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd\": rpc error: code = NotFound desc = could not find container \"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd\": container with ID starting with d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd not found: ID does not exist" Feb 17 15:02:14.007393 master-0 kubenswrapper[4167]: I0217 15:02:14.007365 4167 scope.go:117] "RemoveContainer" containerID="19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f" Feb 17 15:02:14.007941 master-0 kubenswrapper[4167]: E0217 15:02:14.007904 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f\": container with ID starting with 19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f not found: ID does not exist" containerID="19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f" Feb 17 15:02:14.007941 master-0 kubenswrapper[4167]: I0217 15:02:14.007933 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f"} err="failed to get container status \"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f\": rpc error: code = NotFound desc = could not find container \"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f\": container with ID starting with 19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f not found: ID does not exist" Feb 17 15:02:14.008043 master-0 
kubenswrapper[4167]: I0217 15:02:14.007951 4167 scope.go:117] "RemoveContainer" containerID="117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101" Feb 17 15:02:14.008392 master-0 kubenswrapper[4167]: E0217 15:02:14.008354 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101\": container with ID starting with 117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101 not found: ID does not exist" containerID="117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101" Feb 17 15:02:14.008392 master-0 kubenswrapper[4167]: I0217 15:02:14.008383 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101"} err="failed to get container status \"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101\": rpc error: code = NotFound desc = could not find container \"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101\": container with ID starting with 117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101 not found: ID does not exist" Feb 17 15:02:14.008490 master-0 kubenswrapper[4167]: I0217 15:02:14.008401 4167 scope.go:117] "RemoveContainer" containerID="abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4" Feb 17 15:02:14.008874 master-0 kubenswrapper[4167]: E0217 15:02:14.008807 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4\": container with ID starting with abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4 not found: ID does not exist" containerID="abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4" Feb 17 15:02:14.008952 master-0 kubenswrapper[4167]: I0217 15:02:14.008871 4167 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4"} err="failed to get container status \"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4\": rpc error: code = NotFound desc = could not find container \"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4\": container with ID starting with abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4 not found: ID does not exist" Feb 17 15:02:14.008952 master-0 kubenswrapper[4167]: I0217 15:02:14.008914 4167 scope.go:117] "RemoveContainer" containerID="930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95" Feb 17 15:02:14.009404 master-0 kubenswrapper[4167]: E0217 15:02:14.009370 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95\": container with ID starting with 930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95 not found: ID does not exist" containerID="930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95" Feb 17 15:02:14.009404 master-0 kubenswrapper[4167]: I0217 15:02:14.009397 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95"} err="failed to get container status \"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95\": rpc error: code = NotFound desc = could not find container \"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95\": container with ID starting with 930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95 not found: ID does not exist" Feb 17 15:02:14.009490 master-0 kubenswrapper[4167]: I0217 15:02:14.009414 4167 scope.go:117] "RemoveContainer" containerID="c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48" Feb 17 
15:02:14.009905 master-0 kubenswrapper[4167]: E0217 15:02:14.009868 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48\": container with ID starting with c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48 not found: ID does not exist" containerID="c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48" Feb 17 15:02:14.009951 master-0 kubenswrapper[4167]: I0217 15:02:14.009898 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48"} err="failed to get container status \"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48\": rpc error: code = NotFound desc = could not find container \"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48\": container with ID starting with c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48 not found: ID does not exist" Feb 17 15:02:14.009951 master-0 kubenswrapper[4167]: I0217 15:02:14.009917 4167 scope.go:117] "RemoveContainer" containerID="41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0" Feb 17 15:02:14.010317 master-0 kubenswrapper[4167]: E0217 15:02:14.010282 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0\": container with ID starting with 41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0 not found: ID does not exist" containerID="41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0" Feb 17 15:02:14.010317 master-0 kubenswrapper[4167]: I0217 15:02:14.010309 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0"} err="failed 
to get container status \"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0\": rpc error: code = NotFound desc = could not find container \"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0\": container with ID starting with 41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0 not found: ID does not exist"
Feb 17 15:02:14.010378 master-0 kubenswrapper[4167]: I0217 15:02:14.010326 4167 scope.go:117] "RemoveContainer" containerID="1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c"
Feb 17 15:02:14.010767 master-0 kubenswrapper[4167]: E0217 15:02:14.010736 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c\": container with ID starting with 1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c not found: ID does not exist" containerID="1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c"
Feb 17 15:02:14.010802 master-0 kubenswrapper[4167]: I0217 15:02:14.010764 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c"} err="failed to get container status \"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c\": rpc error: code = NotFound desc = could not find container \"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c\": container with ID starting with 1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c not found: ID does not exist"
Feb 17 15:02:14.010802 master-0 kubenswrapper[4167]: I0217 15:02:14.010783 4167 scope.go:117] "RemoveContainer" containerID="da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2"
Feb 17 15:02:14.011114 master-0 kubenswrapper[4167]: I0217 15:02:14.011063 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2"} err="failed to get container status \"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2\": rpc error: code = NotFound desc = could not find container \"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2\": container with ID starting with da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2 not found: ID does not exist"
Feb 17 15:02:14.011148 master-0 kubenswrapper[4167]: I0217 15:02:14.011111 4167 scope.go:117] "RemoveContainer" containerID="d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd"
Feb 17 15:02:14.011600 master-0 kubenswrapper[4167]: I0217 15:02:14.011544 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd"} err="failed to get container status \"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd\": rpc error: code = NotFound desc = could not find container \"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd\": container with ID starting with d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd not found: ID does not exist"
Feb 17 15:02:14.011653 master-0 kubenswrapper[4167]: I0217 15:02:14.011603 4167 scope.go:117] "RemoveContainer" containerID="19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f"
Feb 17 15:02:14.012048 master-0 kubenswrapper[4167]: I0217 15:02:14.012014 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f"} err="failed to get container status \"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f\": rpc error: code = NotFound desc = could not find container \"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f\": container with ID starting with 19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f not found: ID does not exist"
Feb 17 15:02:14.012048 master-0 kubenswrapper[4167]: I0217 15:02:14.012040 4167 scope.go:117] "RemoveContainer" containerID="117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101"
Feb 17 15:02:14.012444 master-0 kubenswrapper[4167]: I0217 15:02:14.012410 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101"} err="failed to get container status \"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101\": rpc error: code = NotFound desc = could not find container \"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101\": container with ID starting with 117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101 not found: ID does not exist"
Feb 17 15:02:14.012444 master-0 kubenswrapper[4167]: I0217 15:02:14.012437 4167 scope.go:117] "RemoveContainer" containerID="abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4"
Feb 17 15:02:14.012893 master-0 kubenswrapper[4167]: I0217 15:02:14.012850 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4"} err="failed to get container status \"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4\": rpc error: code = NotFound desc = could not find container \"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4\": container with ID starting with abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4 not found: ID does not exist"
Feb 17 15:02:14.012893 master-0 kubenswrapper[4167]: I0217 15:02:14.012881 4167 scope.go:117] "RemoveContainer" containerID="930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95"
Feb 17 15:02:14.013269 master-0 kubenswrapper[4167]: I0217 15:02:14.013230 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95"} err="failed to get container status \"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95\": rpc error: code = NotFound desc = could not find container \"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95\": container with ID starting with 930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95 not found: ID does not exist"
Feb 17 15:02:14.013269 master-0 kubenswrapper[4167]: I0217 15:02:14.013255 4167 scope.go:117] "RemoveContainer" containerID="c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48"
Feb 17 15:02:14.013561 master-0 kubenswrapper[4167]: I0217 15:02:14.013524 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48"} err="failed to get container status \"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48\": rpc error: code = NotFound desc = could not find container \"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48\": container with ID starting with c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48 not found: ID does not exist"
Feb 17 15:02:14.013561 master-0 kubenswrapper[4167]: I0217 15:02:14.013553 4167 scope.go:117] "RemoveContainer" containerID="41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0"
Feb 17 15:02:14.013975 master-0 kubenswrapper[4167]: I0217 15:02:14.013940 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0"} err="failed to get container status \"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0\": rpc error: code = NotFound desc = could not find container \"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0\": container with ID starting with 41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0 not found: ID does not exist"
Feb 17 15:02:14.013975 master-0 kubenswrapper[4167]: I0217 15:02:14.013965 4167 scope.go:117] "RemoveContainer" containerID="1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c"
Feb 17 15:02:14.014336 master-0 kubenswrapper[4167]: I0217 15:02:14.014302 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c"} err="failed to get container status \"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c\": rpc error: code = NotFound desc = could not find container \"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c\": container with ID starting with 1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c not found: ID does not exist"
Feb 17 15:02:14.014336 master-0 kubenswrapper[4167]: I0217 15:02:14.014326 4167 scope.go:117] "RemoveContainer" containerID="da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2"
Feb 17 15:02:14.014728 master-0 kubenswrapper[4167]: I0217 15:02:14.014694 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2"} err="failed to get container status \"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2\": rpc error: code = NotFound desc = could not find container \"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2\": container with ID starting with da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2 not found: ID does not exist"
Feb 17 15:02:14.014728 master-0 kubenswrapper[4167]: I0217 15:02:14.014719 4167 scope.go:117] "RemoveContainer" containerID="d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd"
Feb 17 15:02:14.015077 master-0 kubenswrapper[4167]: I0217 15:02:14.015044 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd"} err="failed to get container status \"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd\": rpc error: code = NotFound desc = could not find container \"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd\": container with ID starting with d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd not found: ID does not exist"
Feb 17 15:02:14.015077 master-0 kubenswrapper[4167]: I0217 15:02:14.015069 4167 scope.go:117] "RemoveContainer" containerID="19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f"
Feb 17 15:02:14.015418 master-0 kubenswrapper[4167]: I0217 15:02:14.015383 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f"} err="failed to get container status \"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f\": rpc error: code = NotFound desc = could not find container \"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f\": container with ID starting with 19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f not found: ID does not exist"
Feb 17 15:02:14.015418 master-0 kubenswrapper[4167]: I0217 15:02:14.015411 4167 scope.go:117] "RemoveContainer" containerID="117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101"
Feb 17 15:02:14.016054 master-0 kubenswrapper[4167]: I0217 15:02:14.016020 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101"} err="failed to get container status \"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101\": rpc error: code = NotFound desc = could not find container \"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101\": container with ID starting with 117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101 not found: ID does not exist"
Feb 17 15:02:14.016054 master-0 kubenswrapper[4167]: I0217 15:02:14.016046 4167 scope.go:117] "RemoveContainer" containerID="abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4"
Feb 17 15:02:14.016420 master-0 kubenswrapper[4167]: I0217 15:02:14.016367 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4"} err="failed to get container status \"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4\": rpc error: code = NotFound desc = could not find container \"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4\": container with ID starting with abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4 not found: ID does not exist"
Feb 17 15:02:14.016484 master-0 kubenswrapper[4167]: I0217 15:02:14.016417 4167 scope.go:117] "RemoveContainer" containerID="930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95"
Feb 17 15:02:14.016923 master-0 kubenswrapper[4167]: I0217 15:02:14.016880 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95"} err="failed to get container status \"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95\": rpc error: code = NotFound desc = could not find container \"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95\": container with ID starting with 930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95 not found: ID does not exist"
Feb 17 15:02:14.016923 master-0 kubenswrapper[4167]: I0217 15:02:14.016914 4167 scope.go:117] "RemoveContainer" containerID="c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48"
Feb 17 15:02:14.017338 master-0 kubenswrapper[4167]: I0217 15:02:14.017301 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48"} err="failed to get container status \"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48\": rpc error: code = NotFound desc = could not find container \"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48\": container with ID starting with c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48 not found: ID does not exist"
Feb 17 15:02:14.017338 master-0 kubenswrapper[4167]: I0217 15:02:14.017328 4167 scope.go:117] "RemoveContainer" containerID="41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0"
Feb 17 15:02:14.017659 master-0 kubenswrapper[4167]: I0217 15:02:14.017610 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0"} err="failed to get container status \"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0\": rpc error: code = NotFound desc = could not find container \"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0\": container with ID starting with 41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0 not found: ID does not exist"
Feb 17 15:02:14.017694 master-0 kubenswrapper[4167]: I0217 15:02:14.017659 4167 scope.go:117] "RemoveContainer" containerID="1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c"
Feb 17 15:02:14.018069 master-0 kubenswrapper[4167]: I0217 15:02:14.018035 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c"} err="failed to get container status \"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c\": rpc error: code = NotFound desc = could not find container \"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c\": container with ID starting with 1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c not found: ID does not exist"
Feb 17 15:02:14.018069 master-0 kubenswrapper[4167]: I0217 15:02:14.018063 4167 scope.go:117] "RemoveContainer" containerID="da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2"
Feb 17 15:02:14.018623 master-0 kubenswrapper[4167]: I0217 15:02:14.018573 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2"} err="failed to get container status \"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2\": rpc error: code = NotFound desc = could not find container \"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2\": container with ID starting with da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2 not found: ID does not exist"
Feb 17 15:02:14.018665 master-0 kubenswrapper[4167]: I0217 15:02:14.018619 4167 scope.go:117] "RemoveContainer" containerID="d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd"
Feb 17 15:02:14.019019 master-0 kubenswrapper[4167]: I0217 15:02:14.018977 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd"} err="failed to get container status \"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd\": rpc error: code = NotFound desc = could not find container \"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd\": container with ID starting with d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd not found: ID does not exist"
Feb 17 15:02:14.019019 master-0 kubenswrapper[4167]: I0217 15:02:14.019014 4167 scope.go:117] "RemoveContainer" containerID="19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f"
Feb 17 15:02:14.019444 master-0 kubenswrapper[4167]: I0217 15:02:14.019409 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f"} err="failed to get container status \"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f\": rpc error: code = NotFound desc = could not find container \"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f\": container with ID starting with 19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f not found: ID does not exist"
Feb 17 15:02:14.019444 master-0 kubenswrapper[4167]: I0217 15:02:14.019437 4167 scope.go:117] "RemoveContainer" containerID="117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101"
Feb 17 15:02:14.020032 master-0 kubenswrapper[4167]: I0217 15:02:14.019978 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101"} err="failed to get container status \"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101\": rpc error: code = NotFound desc = could not find container \"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101\": container with ID starting with 117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101 not found: ID does not exist"
Feb 17 15:02:14.020072 master-0 kubenswrapper[4167]: I0217 15:02:14.020030 4167 scope.go:117] "RemoveContainer" containerID="abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4"
Feb 17 15:02:14.020334 master-0 kubenswrapper[4167]: I0217 15:02:14.020301 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4"} err="failed to get container status \"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4\": rpc error: code = NotFound desc = could not find container \"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4\": container with ID starting with abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4 not found: ID does not exist"
Feb 17 15:02:14.020334 master-0 kubenswrapper[4167]: I0217 15:02:14.020329 4167 scope.go:117] "RemoveContainer" containerID="930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95"
Feb 17 15:02:14.020815 master-0 kubenswrapper[4167]: I0217 15:02:14.020757 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95"} err="failed to get container status \"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95\": rpc error: code = NotFound desc = could not find container \"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95\": container with ID starting with 930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95 not found: ID does not exist"
Feb 17 15:02:14.020815 master-0 kubenswrapper[4167]: I0217 15:02:14.020808 4167 scope.go:117] "RemoveContainer" containerID="c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48"
Feb 17 15:02:14.021248 master-0 kubenswrapper[4167]: I0217 15:02:14.021211 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48"} err="failed to get container status \"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48\": rpc error: code = NotFound desc = could not find container \"c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48\": container with ID starting with c80a727a19ec25e8908e15e9cbae4abb39a9a135076440fd6ae7c6b44d19ca48 not found: ID does not exist"
Feb 17 15:02:14.021248 master-0 kubenswrapper[4167]: I0217 15:02:14.021239 4167 scope.go:117] "RemoveContainer" containerID="41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0"
Feb 17 15:02:14.021645 master-0 kubenswrapper[4167]: I0217 15:02:14.021612 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0"} err="failed to get container status \"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0\": rpc error: code = NotFound desc = could not find container \"41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0\": container with ID starting with 41d71a72acad775cba6661e9d891b67be865d1a28aa95e0d47173013cf0996c0 not found: ID does not exist"
Feb 17 15:02:14.021645 master-0 kubenswrapper[4167]: I0217 15:02:14.021639 4167 scope.go:117] "RemoveContainer" containerID="1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c"
Feb 17 15:02:14.022020 master-0 kubenswrapper[4167]: I0217 15:02:14.021986 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c"} err="failed to get container status \"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c\": rpc error: code = NotFound desc = could not find container \"1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c\": container with ID starting with 1743361a23765588598b90c54ab7ba84d9c7e2d1b6ffe07f21b7168af3ea3f2c not found: ID does not exist"
Feb 17 15:02:14.022020 master-0 kubenswrapper[4167]: I0217 15:02:14.022009 4167 scope.go:117] "RemoveContainer" containerID="da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2"
Feb 17 15:02:14.022364 master-0 kubenswrapper[4167]: I0217 15:02:14.022334 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2"} err="failed to get container status \"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2\": rpc error: code = NotFound desc = could not find container \"da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2\": container with ID starting with da04827ca082a0aeca0ad410ae4ae0e0918c8c65108602e5e4b88aa9cfeaa7b2 not found: ID does not exist"
Feb 17 15:02:14.022364 master-0 kubenswrapper[4167]: I0217 15:02:14.022356 4167 scope.go:117] "RemoveContainer" containerID="d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd"
Feb 17 15:02:14.022633 master-0 kubenswrapper[4167]: I0217 15:02:14.022598 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd"} err="failed to get container status \"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd\": rpc error: code = NotFound desc = could not find container \"d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd\": container with ID starting with d4467cca7a67cd3368376a0155be0ec462550a01ee9fda7b8bda095d3053affd not found: ID does not exist"
Feb 17 15:02:14.022633 master-0 kubenswrapper[4167]: I0217 15:02:14.022628 4167 scope.go:117] "RemoveContainer" containerID="19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f"
Feb 17 15:02:14.023044 master-0 kubenswrapper[4167]: I0217 15:02:14.023014 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f"} err="failed to get container status \"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f\": rpc error: code = NotFound desc = could not find container \"19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f\": container with ID starting with 19fbc8b154d4b9d4129fd56efda822b5bc66d78eb893302474e84007be23be5f not found: ID does not exist"
Feb 17 15:02:14.023044 master-0 kubenswrapper[4167]: I0217 15:02:14.023036 4167 scope.go:117] "RemoveContainer" containerID="117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101"
Feb 17 15:02:14.023347 master-0 kubenswrapper[4167]: I0217 15:02:14.023288 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101"} err="failed to get container status \"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101\": rpc error: code = NotFound desc = could not find container \"117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101\": container with ID starting with 117256dccc0e76a070c12f6e86a8fc6e8a36e9b70eff83de47198fad7e1e4101 not found: ID does not exist"
Feb 17 15:02:14.023347 master-0 kubenswrapper[4167]: I0217 15:02:14.023339 4167 scope.go:117] "RemoveContainer" containerID="abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4"
Feb 17 15:02:14.023694 master-0 kubenswrapper[4167]: I0217 15:02:14.023657 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4"} err="failed to get container status \"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4\": rpc error: code = NotFound desc = could not find container \"abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4\": container with ID starting with abe44065e551c76b4b830d18504443156b35ad68e25ee90ed4cc91f18dd2fff4 not found: ID does not exist"
Feb 17 15:02:14.023694 master-0 kubenswrapper[4167]: I0217 15:02:14.023681 4167 scope.go:117] "RemoveContainer" containerID="930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95"
Feb 17 15:02:14.024036 master-0 kubenswrapper[4167]: I0217 15:02:14.023997 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95"} err="failed to get container status \"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95\": rpc error: code = NotFound desc = could not find container \"930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95\": container with ID starting with 930525efbcf95f09df767fb61101858fdfb88e8d74b6085f0aedad7985497b95 not found: ID does not exist"
Feb 17 15:02:14.841943 master-0 kubenswrapper[4167]: I0217 15:02:14.841889 4167 generic.go:334] "Generic (PLEG): container finished" podID="9a905fb6-17d4-413b-9107-859c804ce906" containerID="4af044cd84dfd56b4c3319dc9513fdcbc730d3ab6bf935acd230ad188ae43052" exitCode=0
Feb 17 15:02:14.841943 master-0 kubenswrapper[4167]: I0217 15:02:14.841945 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" event={"ID":"9a905fb6-17d4-413b-9107-859c804ce906","Type":"ContainerDied","Data":"4af044cd84dfd56b4c3319dc9513fdcbc730d3ab6bf935acd230ad188ae43052"}
Feb 17 15:02:14.866201 master-0 kubenswrapper[4167]: I0217 15:02:14.866143 4167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08e1b8a0-751b-4568-8a73-f0ea3dadf709" path="/var/lib/kubelet/pods/08e1b8a0-751b-4568-8a73-f0ea3dadf709/volumes"
Feb 17 15:02:15.850179 master-0 kubenswrapper[4167]: I0217 15:02:15.849983 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" event={"ID":"9a905fb6-17d4-413b-9107-859c804ce906","Type":"ContainerStarted","Data":"033ea37752f691a8073b96c2908aeaac21fda1644faf34713a9b6b9b3e49d7ed"}
Feb 17 15:02:15.850179 master-0 kubenswrapper[4167]: I0217 15:02:15.850040 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" event={"ID":"9a905fb6-17d4-413b-9107-859c804ce906","Type":"ContainerStarted","Data":"637ef91970ef83900363fa551f6c3b88c61188c0797fdb69a0e25bc902858751"}
Feb 17 15:02:15.850179 master-0 kubenswrapper[4167]: I0217 15:02:15.850059 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" event={"ID":"9a905fb6-17d4-413b-9107-859c804ce906","Type":"ContainerStarted","Data":"e626f6aba4beee6674c7c7c749e690878b02d120f0f563a5f4940d36967bf80e"}
Feb 17 15:02:15.850179 master-0 kubenswrapper[4167]: I0217 15:02:15.850079 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" event={"ID":"9a905fb6-17d4-413b-9107-859c804ce906","Type":"ContainerStarted","Data":"0d274e1a9986c95da6fd62bf6e49e5c9bf949f6f2a05e811155457706145ef1f"}
Feb 17 15:02:15.850179 master-0 kubenswrapper[4167]: I0217 15:02:15.850093 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" event={"ID":"9a905fb6-17d4-413b-9107-859c804ce906","Type":"ContainerStarted","Data":"45d90ea235d09adb7caa91b914668fbfe0fb7ff263bae09ad5f51a624b7ca21c"}
Feb 17 15:02:15.850179 master-0 kubenswrapper[4167]: I0217 15:02:15.850108 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" event={"ID":"9a905fb6-17d4-413b-9107-859c804ce906","Type":"ContainerStarted","Data":"e4c1498f02b76e48c469a8fd5b431a7b2eb1ea144cd12bfa431cd716eb1ee78f"}
Feb 17 15:02:15.857487 master-0 kubenswrapper[4167]: I0217 15:02:15.857404 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:15.857735 master-0 kubenswrapper[4167]: E0217 15:02:15.857686 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:02:15.857735 master-0 kubenswrapper[4167]: I0217 15:02:15.857688 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:02:15.857883 master-0 kubenswrapper[4167]: E0217 15:02:15.857851 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:02:16.672294 master-0 kubenswrapper[4167]: I0217 15:02:16.672184 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:02:16.672669 master-0 kubenswrapper[4167]: E0217 15:02:16.672513 4167 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 17 15:02:16.672763 master-0 kubenswrapper[4167]: E0217 15:02:16.672683 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert podName:4be2df82-c77a-4d26-9498-fa3beea54b81 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:20.672645189 +0000 UTC m=+193.207310031 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert") pod "cluster-version-operator-76959b6567-v49tq" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81") : secret "cluster-version-operator-serving-cert" not found
Feb 17 15:02:17.856957 master-0 kubenswrapper[4167]: I0217 15:02:17.856929 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:17.857935 master-0 kubenswrapper[4167]: I0217 15:02:17.856929 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:02:17.858093 master-0 kubenswrapper[4167]: E0217 15:02:17.858020 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:02:17.858831 master-0 kubenswrapper[4167]: E0217 15:02:17.858806 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:02:17.864245 master-0 kubenswrapper[4167]: I0217 15:02:17.864212 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" event={"ID":"9a905fb6-17d4-413b-9107-859c804ce906","Type":"ContainerStarted","Data":"3cf1f05840e27d0081bf66c2f0d8a8d3e65b0d450b08fee9d5573e103178a43f"}
Feb 17 15:02:18.487979 master-0 kubenswrapper[4167]: I0217 15:02:18.487854 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpwhf\" (UniqueName: \"kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf\") pod \"network-check-target-f25s7\" (UID: \"727f20b6-19c7-45eb-a803-6898ecaeffd0\") " pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:02:18.488677 master-0 kubenswrapper[4167]: E0217 15:02:18.488112 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 15:02:18.488677 master-0 kubenswrapper[4167]: E0217 15:02:18.488149 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 15:02:18.488677 master-0 kubenswrapper[4167]: E0217 15:02:18.488168 4167 projected.go:194] Error preparing data for projected volume kube-api-access-bpwhf for pod openshift-network-diagnostics/network-check-target-f25s7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 15:02:18.488677 master-0 kubenswrapper[4167]: E0217 15:02:18.488262 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf podName:727f20b6-19c7-45eb-a803-6898ecaeffd0 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:50.488236806 +0000 UTC m=+163.022901668 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-bpwhf" (UniqueName: "kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf") pod "network-check-target-f25s7" (UID: "727f20b6-19c7-45eb-a803-6898ecaeffd0") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 15:02:18.893494 master-0 kubenswrapper[4167]: E0217 15:02:18.893190 4167 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 15:02:19.857367 master-0 kubenswrapper[4167]: I0217 15:02:19.857256 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:19.857721 master-0 kubenswrapper[4167]: I0217 15:02:19.857274 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:02:19.857721 master-0 kubenswrapper[4167]: E0217 15:02:19.857447 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9"
Feb 17 15:02:19.857721 master-0 kubenswrapper[4167]: E0217 15:02:19.857585 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0"
Feb 17 15:02:20.879143 master-0 kubenswrapper[4167]: I0217 15:02:20.878813 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" event={"ID":"9a905fb6-17d4-413b-9107-859c804ce906","Type":"ContainerStarted","Data":"3856a38440aaf9b4a8b106ce1a2b2a45826baa39c43f0bc258dce18a41cf420d"}
Feb 17 15:02:20.880422 master-0 kubenswrapper[4167]: I0217 15:02:20.879510 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:20.880422 master-0 kubenswrapper[4167]: I0217 15:02:20.879557 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:20.880422 master-0 kubenswrapper[4167]: I0217 15:02:20.879585 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:20.916827 master-0 kubenswrapper[4167]: I0217 15:02:20.916689 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" podStartSLOduration=7.916661767 podStartE2EDuration="7.916661767s" podCreationTimestamp="2026-02-17 15:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:02:20.915229453 +0000 UTC m=+133.449894315"
watchObservedRunningTime="2026-02-17 15:02:20.916661767 +0000 UTC m=+133.451326609" Feb 17 15:02:20.924372 master-0 kubenswrapper[4167]: I0217 15:02:20.924318 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:20.925300 master-0 kubenswrapper[4167]: I0217 15:02:20.925213 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:21.857309 master-0 kubenswrapper[4167]: I0217 15:02:21.857239 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:02:21.857583 master-0 kubenswrapper[4167]: E0217 15:02:21.857369 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0" Feb 17 15:02:21.857583 master-0 kubenswrapper[4167]: I0217 15:02:21.857236 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:02:21.857675 master-0 kubenswrapper[4167]: E0217 15:02:21.857644 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9" Feb 17 15:02:22.735126 master-0 kubenswrapper[4167]: I0217 15:02:22.735053 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-f25s7"] Feb 17 15:02:22.735817 master-0 kubenswrapper[4167]: I0217 15:02:22.735164 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:02:22.735817 master-0 kubenswrapper[4167]: E0217 15:02:22.735254 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0" Feb 17 15:02:22.744971 master-0 kubenswrapper[4167]: I0217 15:02:22.744928 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bnllz"] Feb 17 15:02:22.745109 master-0 kubenswrapper[4167]: I0217 15:02:22.744994 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:02:22.745109 master-0 kubenswrapper[4167]: E0217 15:02:22.745079 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9" Feb 17 15:02:23.895317 master-0 kubenswrapper[4167]: E0217 15:02:23.894822 4167 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 15:02:24.857222 master-0 kubenswrapper[4167]: I0217 15:02:24.857114 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:02:24.857447 master-0 kubenswrapper[4167]: E0217 15:02:24.857294 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9" Feb 17 15:02:24.857447 master-0 kubenswrapper[4167]: I0217 15:02:24.857415 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:02:24.857724 master-0 kubenswrapper[4167]: E0217 15:02:24.857657 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0" Feb 17 15:02:26.858026 master-0 kubenswrapper[4167]: I0217 15:02:26.857896 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:02:26.859203 master-0 kubenswrapper[4167]: I0217 15:02:26.857935 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:02:26.859203 master-0 kubenswrapper[4167]: E0217 15:02:26.858194 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9" Feb 17 15:02:26.859203 master-0 kubenswrapper[4167]: E0217 15:02:26.858234 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0" Feb 17 15:02:28.858132 master-0 kubenswrapper[4167]: I0217 15:02:28.857990 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:02:28.859294 master-0 kubenswrapper[4167]: E0217 15:02:28.859020 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bnllz" podUID="fce9579e-7383-421e-95dd-8f8b786817f9" Feb 17 15:02:28.859294 master-0 kubenswrapper[4167]: I0217 15:02:28.859145 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:02:28.859294 master-0 kubenswrapper[4167]: E0217 15:02:28.859246 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f25s7" podUID="727f20b6-19c7-45eb-a803-6898ecaeffd0" Feb 17 15:02:30.858067 master-0 kubenswrapper[4167]: I0217 15:02:30.857974 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:02:30.858675 master-0 kubenswrapper[4167]: I0217 15:02:30.857977 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:02:30.861634 master-0 kubenswrapper[4167]: I0217 15:02:30.861606 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 15:02:30.861684 master-0 kubenswrapper[4167]: I0217 15:02:30.861637 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 15:02:30.861946 master-0 kubenswrapper[4167]: I0217 15:02:30.861911 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 15:02:35.788136 master-0 kubenswrapper[4167]: I0217 15:02:35.787627 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:02:35.788136 master-0 kubenswrapper[4167]: E0217 15:02:35.787843 4167 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 17 15:02:35.789564 master-0 kubenswrapper[4167]: E0217 15:02:35.788207 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs podName:fce9579e-7383-421e-95dd-8f8b786817f9 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:39.788179475 +0000 UTC m=+212.322844277 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs") pod "network-metrics-daemon-bnllz" (UID: "fce9579e-7383-421e-95dd-8f8b786817f9") : secret "metrics-daemon-secret" not found Feb 17 15:02:36.217752 master-0 kubenswrapper[4167]: I0217 15:02:36.217675 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Feb 17 15:02:36.462315 master-0 kubenswrapper[4167]: I0217 15:02:36.462273 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p"] Feb 17 15:02:36.462984 master-0 kubenswrapper[4167]: I0217 15:02:36.462968 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:02:36.469009 master-0 kubenswrapper[4167]: I0217 15:02:36.468889 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj"] Feb 17 15:02:36.469333 master-0 kubenswrapper[4167]: I0217 15:02:36.469296 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9"] Feb 17 15:02:36.469723 master-0 kubenswrapper[4167]: I0217 15:02:36.469691 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" Feb 17 15:02:36.470519 master-0 kubenswrapper[4167]: I0217 15:02:36.469807 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj" Feb 17 15:02:36.470519 master-0 kubenswrapper[4167]: I0217 15:02:36.470386 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 15:02:36.470519 master-0 kubenswrapper[4167]: I0217 15:02:36.470388 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 15:02:36.470682 master-0 kubenswrapper[4167]: I0217 15:02:36.470645 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 15:02:36.477448 master-0 kubenswrapper[4167]: I0217 15:02:36.477362 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 17 15:02:36.477711 master-0 kubenswrapper[4167]: I0217 15:02:36.477681 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 15:02:36.478279 master-0 kubenswrapper[4167]: I0217 15:02:36.477971 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 15:02:36.478279 master-0 kubenswrapper[4167]: I0217 15:02:36.478057 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:02:36.478279 master-0 kubenswrapper[4167]: I0217 15:02:36.478061 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 17 15:02:36.479335 master-0 kubenswrapper[4167]: I0217 15:02:36.479314 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 15:02:36.480531 master-0 kubenswrapper[4167]: 
I0217 15:02:36.480486 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 15:02:36.484184 master-0 kubenswrapper[4167]: I0217 15:02:36.484127 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"] Feb 17 15:02:36.485217 master-0 kubenswrapper[4167]: I0217 15:02:36.484937 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"] Feb 17 15:02:36.486225 master-0 kubenswrapper[4167]: I0217 15:02:36.485372 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:36.486225 master-0 kubenswrapper[4167]: I0217 15:02:36.485535 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:02:36.486225 master-0 kubenswrapper[4167]: I0217 15:02:36.486096 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"] Feb 17 15:02:36.486573 master-0 kubenswrapper[4167]: I0217 15:02:36.486543 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" Feb 17 15:02:36.487625 master-0 kubenswrapper[4167]: I0217 15:02:36.487574 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"] Feb 17 15:02:36.488394 master-0 kubenswrapper[4167]: I0217 15:02:36.488351 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" Feb 17 15:02:36.490642 master-0 kubenswrapper[4167]: I0217 15:02:36.490598 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv"] Feb 17 15:02:36.491121 master-0 kubenswrapper[4167]: I0217 15:02:36.491093 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 15:02:36.491320 master-0 kubenswrapper[4167]: I0217 15:02:36.491281 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" Feb 17 15:02:36.491636 master-0 kubenswrapper[4167]: I0217 15:02:36.491611 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 17 15:02:36.491933 master-0 kubenswrapper[4167]: I0217 15:02:36.491905 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 15:02:36.492144 master-0 kubenswrapper[4167]: I0217 15:02:36.492098 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"] Feb 17 15:02:36.492485 master-0 kubenswrapper[4167]: I0217 15:02:36.492446 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 17 15:02:36.492926 master-0 kubenswrapper[4167]: I0217 15:02:36.492910 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 17 15:02:36.493125 master-0 kubenswrapper[4167]: I0217 15:02:36.492978 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:02:36.493295 master-0 kubenswrapper[4167]: I0217 15:02:36.493256 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 17 15:02:36.493339 master-0 kubenswrapper[4167]: I0217 15:02:36.493308 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 15:02:36.493522 master-0 kubenswrapper[4167]: I0217 15:02:36.493498 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 17 15:02:36.494047 master-0 kubenswrapper[4167]: I0217 15:02:36.494021 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:02:36.494157 master-0 kubenswrapper[4167]: I0217 15:02:36.494144 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 15:02:36.495054 master-0 kubenswrapper[4167]: I0217 15:02:36.495040 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 17 15:02:36.496527 master-0 kubenswrapper[4167]: I0217 15:02:36.495178 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 15:02:36.496623 master-0 kubenswrapper[4167]: I0217 15:02:36.495202 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 17 15:02:36.519745 master-0 kubenswrapper[4167]: I0217 15:02:36.519647 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 15:02:36.523915 master-0 
kubenswrapper[4167]: I0217 15:02:36.523859 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-755d954778-jrdqm"] Feb 17 15:02:36.527020 master-0 kubenswrapper[4167]: I0217 15:02:36.526941 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 15:02:36.527094 master-0 kubenswrapper[4167]: I0217 15:02:36.527026 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 15:02:36.527497 master-0 kubenswrapper[4167]: I0217 15:02:36.527426 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:02:36.528591 master-0 kubenswrapper[4167]: I0217 15:02:36.528296 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 15:02:36.539044 master-0 kubenswrapper[4167]: I0217 15:02:36.535901 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 15:02:36.540133 master-0 kubenswrapper[4167]: I0217 15:02:36.540088 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 15:02:36.541402 master-0 kubenswrapper[4167]: I0217 15:02:36.540783 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"] Feb 17 15:02:36.541402 master-0 kubenswrapper[4167]: I0217 15:02:36.541311 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 17 15:02:36.542096 master-0 kubenswrapper[4167]: I0217 15:02:36.541665 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:02:36.543362 master-0 kubenswrapper[4167]: I0217 15:02:36.543132 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-86b8869b79-lmqrr"] Feb 17 15:02:36.543434 master-0 kubenswrapper[4167]: I0217 15:02:36.543381 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 15:02:36.543497 master-0 kubenswrapper[4167]: I0217 15:02:36.543431 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:36.543595 master-0 kubenswrapper[4167]: I0217 15:02:36.543569 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"] Feb 17 15:02:36.547505 master-0 kubenswrapper[4167]: I0217 15:02:36.544139 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:02:36.547505 master-0 kubenswrapper[4167]: I0217 15:02:36.545312 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 15:02:36.547505 master-0 kubenswrapper[4167]: I0217 15:02:36.546223 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"] Feb 17 15:02:36.550213 master-0 kubenswrapper[4167]: I0217 15:02:36.547809 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:36.550213 master-0 kubenswrapper[4167]: I0217 15:02:36.548936 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8"] Feb 17 15:02:36.550213 master-0 kubenswrapper[4167]: I0217 15:02:36.549339 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"] Feb 17 15:02:36.550213 master-0 kubenswrapper[4167]: I0217 15:02:36.549669 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"] Feb 17 15:02:36.550213 master-0 kubenswrapper[4167]: I0217 15:02:36.549762 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" Feb 17 15:02:36.550213 master-0 kubenswrapper[4167]: I0217 15:02:36.549778 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:02:36.550213 master-0 kubenswrapper[4167]: I0217 15:02:36.550114 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" Feb 17 15:02:36.553484 master-0 kubenswrapper[4167]: I0217 15:02:36.551103 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"] Feb 17 15:02:36.553484 master-0 kubenswrapper[4167]: I0217 15:02:36.551408 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"] Feb 17 15:02:36.553484 master-0 kubenswrapper[4167]: I0217 15:02:36.551562 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"
Feb 17 15:02:36.553484 master-0 kubenswrapper[4167]: I0217 15:02:36.552358 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:02:36.553484 master-0 kubenswrapper[4167]: I0217 15:02:36.552486 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"]
Feb 17 15:02:36.553484 master-0 kubenswrapper[4167]: I0217 15:02:36.552742 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"]
Feb 17 15:02:36.553484 master-0 kubenswrapper[4167]: I0217 15:02:36.552853 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:02:36.553484 master-0 kubenswrapper[4167]: I0217 15:02:36.553020 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"]
Feb 17 15:02:36.553484 master-0 kubenswrapper[4167]: I0217 15:02:36.553193 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:02:36.553484 master-0 kubenswrapper[4167]: I0217 15:02:36.553208 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:36.553984 master-0 kubenswrapper[4167]: I0217 15:02:36.553772 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p"]
Feb 17 15:02:36.553984 master-0 kubenswrapper[4167]: I0217 15:02:36.553795 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj"]
Feb 17 15:02:36.553984 master-0 kubenswrapper[4167]: I0217 15:02:36.553809 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"]
Feb 17 15:02:36.553984 master-0 kubenswrapper[4167]: I0217 15:02:36.553880 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:02:36.557500 master-0 kubenswrapper[4167]: I0217 15:02:36.554284 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9"]
Feb 17 15:02:36.557500 master-0 kubenswrapper[4167]: I0217 15:02:36.555168 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"]
Feb 17 15:02:36.561882 master-0 kubenswrapper[4167]: I0217 15:02:36.561799 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"]
Feb 17 15:02:36.567722 master-0 kubenswrapper[4167]: I0217 15:02:36.562387 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 17 15:02:36.567722 master-0 kubenswrapper[4167]: I0217 15:02:36.562681 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 17 15:02:36.567722 master-0 kubenswrapper[4167]: I0217 15:02:36.562823 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 17 15:02:36.567722 master-0 kubenswrapper[4167]: I0217 15:02:36.566836 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 17 15:02:36.567722 master-0 kubenswrapper[4167]: I0217 15:02:36.566926 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 17 15:02:36.567722 master-0 kubenswrapper[4167]: I0217 15:02:36.567174 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 17 15:02:36.567722 master-0 kubenswrapper[4167]: I0217 15:02:36.567324 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 17 15:02:36.567722 master-0 kubenswrapper[4167]: I0217 15:02:36.567359 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 17 15:02:36.567722 master-0 kubenswrapper[4167]: I0217 15:02:36.567671 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 17 15:02:36.568148 master-0 kubenswrapper[4167]: I0217 15:02:36.568093 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 17 15:02:36.569664 master-0 kubenswrapper[4167]: I0217 15:02:36.569632 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 17 15:02:36.569830 master-0 kubenswrapper[4167]: I0217 15:02:36.569760 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 17 15:02:36.569830 master-0 kubenswrapper[4167]: I0217 15:02:36.569780 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"]
Feb 17 15:02:36.570029 master-0 kubenswrapper[4167]: I0217 15:02:36.569990 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-v2h9q"]
Feb 17 15:02:36.570860 master-0 kubenswrapper[4167]: I0217 15:02:36.570795 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-v2h9q"
Feb 17 15:02:36.571465 master-0 kubenswrapper[4167]: I0217 15:02:36.569865 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 17 15:02:36.575487 master-0 kubenswrapper[4167]: I0217 15:02:36.571649 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"]
Feb 17 15:02:36.575487 master-0 kubenswrapper[4167]: I0217 15:02:36.572055 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 17 15:02:36.575487 master-0 kubenswrapper[4167]: I0217 15:02:36.574948 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"]
Feb 17 15:02:36.578914 master-0 kubenswrapper[4167]: I0217 15:02:36.578862 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 17 15:02:36.579489 master-0 kubenswrapper[4167]: I0217 15:02:36.579234 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 17 15:02:36.580346 master-0 kubenswrapper[4167]: I0217 15:02:36.579543 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 17 15:02:36.580346 master-0 kubenswrapper[4167]: I0217 15:02:36.579714 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 17 15:02:36.580346 master-0 kubenswrapper[4167]: I0217 15:02:36.579948 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 17 15:02:36.580346 master-0 kubenswrapper[4167]: I0217 15:02:36.580186 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 17 15:02:36.580687 master-0 kubenswrapper[4167]: I0217 15:02:36.580623 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 17 15:02:36.580843 master-0 kubenswrapper[4167]: I0217 15:02:36.580767 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 17 15:02:36.580843 master-0 kubenswrapper[4167]: I0217 15:02:36.580810 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 17 15:02:36.580916 master-0 kubenswrapper[4167]: I0217 15:02:36.580773 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 17 15:02:36.581040 master-0 kubenswrapper[4167]: I0217 15:02:36.580965 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 17 15:02:36.587895 master-0 kubenswrapper[4167]: I0217 15:02:36.587788 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 17 15:02:36.591484 master-0 kubenswrapper[4167]: I0217 15:02:36.589341 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"]
Feb 17 15:02:36.591484 master-0 kubenswrapper[4167]: I0217 15:02:36.589426 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"]
Feb 17 15:02:36.591484 master-0 kubenswrapper[4167]: I0217 15:02:36.589483 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"]
Feb 17 15:02:36.591484 master-0 kubenswrapper[4167]: I0217 15:02:36.589501 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"]
Feb 17 15:02:36.591484 master-0 kubenswrapper[4167]: I0217 15:02:36.589518 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv"]
Feb 17 15:02:36.603491 master-0 kubenswrapper[4167]: I0217 15:02:36.600361 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 17 15:02:36.603491 master-0 kubenswrapper[4167]: I0217 15:02:36.600933 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 17 15:02:36.603491 master-0 kubenswrapper[4167]: I0217 15:02:36.601278 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 17 15:02:36.603491 master-0 kubenswrapper[4167]: I0217 15:02:36.603028 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 17 15:02:36.603491 master-0 kubenswrapper[4167]: I0217 15:02:36.603098 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 17 15:02:36.603491 master-0 kubenswrapper[4167]: I0217 15:02:36.603224 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 17 15:02:36.603491 master-0 kubenswrapper[4167]: I0217 15:02:36.603399 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:02:36.603850 master-0 kubenswrapper[4167]: I0217 15:02:36.603582 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"]
Feb 17 15:02:36.603850 master-0 kubenswrapper[4167]: I0217 15:02:36.603642 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8"]
Feb 17 15:02:36.607485 master-0 kubenswrapper[4167]: I0217 15:02:36.603918 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 17 15:02:36.607485 master-0 kubenswrapper[4167]: I0217 15:02:36.604005 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 17 15:02:36.607485 master-0 kubenswrapper[4167]: I0217 15:02:36.604076 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 17 15:02:36.607485 master-0 kubenswrapper[4167]: I0217 15:02:36.604185 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 17 15:02:36.607485 master-0 kubenswrapper[4167]: I0217 15:02:36.604311 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 17 15:02:36.607485 master-0 kubenswrapper[4167]: I0217 15:02:36.604359 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 17 15:02:36.607485 master-0 kubenswrapper[4167]: I0217 15:02:36.604486 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 17 15:02:36.607485 master-0 kubenswrapper[4167]: I0217 15:02:36.604644 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 17 15:02:36.607485 master-0 kubenswrapper[4167]: I0217 15:02:36.604661 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 17 15:02:36.607485 master-0 kubenswrapper[4167]: I0217 15:02:36.604994 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 17 15:02:36.607485 master-0 kubenswrapper[4167]: I0217 15:02:36.605145 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"]
Feb 17 15:02:36.607485 master-0 kubenswrapper[4167]: I0217 15:02:36.607407 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 17 15:02:36.607966 master-0 kubenswrapper[4167]: I0217 15:02:36.607597 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-86b8869b79-lmqrr"]
Feb 17 15:02:36.613494 master-0 kubenswrapper[4167]: I0217 15:02:36.611975 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-755d954778-jrdqm"]
Feb 17 15:02:36.619547 master-0 kubenswrapper[4167]: I0217 15:02:36.619504 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/6c734c89-515e-4ff0-82d1-831ddaf0b99e-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"
Feb 17 15:02:36.619710 master-0 kubenswrapper[4167]: I0217 15:02:36.619562 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czt92\" (UniqueName: \"kubernetes.io/projected/c6d23570-21d6-4b08-83fc-8b0827c25313-kube-api-access-czt92\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:02:36.619710 master-0 kubenswrapper[4167]: I0217 15:02:36.619592 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:02:36.619710 master-0 kubenswrapper[4167]: I0217 15:02:36.619611 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:02:36.619710 master-0 kubenswrapper[4167]: I0217 15:02:36.619626 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-profile-collector-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:02:36.619710 master-0 kubenswrapper[4167]: I0217 15:02:36.619673 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/553d4535-9985-47e2-83ee-8fcfb6035e7b-config\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"
Feb 17 15:02:36.619710 master-0 kubenswrapper[4167]: I0217 15:02:36.619699 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-client\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:02:36.619871 master-0 kubenswrapper[4167]: I0217 15:02:36.619718 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxqt4\" (UniqueName: \"kubernetes.io/projected/801742a6-3735-4883-9676-e852dc4173d2-kube-api-access-qxqt4\") pod \"csi-snapshot-controller-operator-7b87b97578-9fpgj\" (UID: \"801742a6-3735-4883-9676-e852dc4173d2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj"
Feb 17 15:02:36.619871 master-0 kubenswrapper[4167]: I0217 15:02:36.619735 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:36.619871 master-0 kubenswrapper[4167]: I0217 15:02:36.619760 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b167b7b-2280-4c82-ac78-71c57aebe503-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8"
Feb 17 15:02:36.619871 master-0 kubenswrapper[4167]: I0217 15:02:36.619798 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr"
Feb 17 15:02:36.619871 master-0 kubenswrapper[4167]: I0217 15:02:36.619825 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/632fa4c3-b717-432c-8c5f-8d809f69c48b-host-slash\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q"
Feb 17 15:02:36.619871 master-0 kubenswrapper[4167]: I0217 15:02:36.619847 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22a30079-d7fc-49cf-882e-1c5022cb5bf6-trusted-ca\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:02:36.619871 master-0 kubenswrapper[4167]: I0217 15:02:36.619865 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/187af679-a062-4f41-81f2-33545f76febf-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:02:36.620054 master-0 kubenswrapper[4167]: I0217 15:02:36.619891 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61d90bf3-02df-48c8-b2ec-09a1653b0800-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:02:36.620054 master-0 kubenswrapper[4167]: I0217 15:02:36.619920 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/632fa4c3-b717-432c-8c5f-8d809f69c48b-iptables-alerter-script\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q"
Feb 17 15:02:36.620054 master-0 kubenswrapper[4167]: I0217 15:02:36.619944 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:36.620054 master-0 kubenswrapper[4167]: I0217 15:02:36.619973 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e259b5a1-837b-4cde-85f7-cd5781af08bd-serving-cert\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv"
Feb 17 15:02:36.620054 master-0 kubenswrapper[4167]: I0217 15:02:36.619994 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxjqf\" (UniqueName: \"kubernetes.io/projected/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-kube-api-access-gxjqf\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"
Feb 17 15:02:36.620054 master-0 kubenswrapper[4167]: I0217 15:02:36.620038 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b167b7b-2280-4c82-ac78-71c57aebe503-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8"
Feb 17 15:02:36.620202 master-0 kubenswrapper[4167]: I0217 15:02:36.620066 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:02:36.620202 master-0 kubenswrapper[4167]: I0217 15:02:36.620096 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/61d90bf3-02df-48c8-b2ec-09a1653b0800-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:02:36.620202 master-0 kubenswrapper[4167]: I0217 15:02:36.620121 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b167b7b-2280-4c82-ac78-71c57aebe503-config\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8"
Feb 17 15:02:36.620202 master-0 kubenswrapper[4167]: I0217 15:02:36.620149 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg8h7\" (UniqueName: \"kubernetes.io/projected/257db04b-7203-4a1d-b3d4-bd4db258a3cc-kube-api-access-jg8h7\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:02:36.620202 master-0 kubenswrapper[4167]: I0217 15:02:36.620182 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"
Feb 17 15:02:36.620329 master-0 kubenswrapper[4167]: I0217 15:02:36.620207 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:02:36.620329 master-0 kubenswrapper[4167]: I0217 15:02:36.620234 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e259b5a1-837b-4cde-85f7-cd5781af08bd-config\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv"
Feb 17 15:02:36.620329 master-0 kubenswrapper[4167]: I0217 15:02:36.620255 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:02:36.620329 master-0 kubenswrapper[4167]: I0217 15:02:36.620272 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-serving-cert\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:02:36.620329 master-0 kubenswrapper[4167]: I0217 15:02:36.620290 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t2vg\" (UniqueName: \"kubernetes.io/projected/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-kube-api-access-6t2vg\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:36.620329 master-0 kubenswrapper[4167]: I0217 15:02:36.620311 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-config\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"
Feb 17 15:02:36.620329 master-0 kubenswrapper[4167]: I0217 15:02:36.620328 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcb68\" (UniqueName: \"kubernetes.io/projected/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-kube-api-access-jcb68\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:02:36.620526 master-0 kubenswrapper[4167]: I0217 15:02:36.620365 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g7zh\" (UniqueName: \"kubernetes.io/projected/65d9f008-7777-48fe-85fe-9d54a7bbcea9-kube-api-access-9g7zh\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p"
Feb 17 15:02:36.620526 master-0 kubenswrapper[4167]: I0217 15:02:36.620405 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:36.620526 master-0 kubenswrapper[4167]: I0217 15:02:36.620431 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrh2k\" (UniqueName: \"kubernetes.io/projected/071566ae-a9ae-4aa9-9dc3-38602363be72-kube-api-access-hrh2k\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:36.620526 master-0 kubenswrapper[4167]: I0217 15:02:36.620473 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-serving-cert\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:36.620526 master-0 kubenswrapper[4167]: I0217 15:02:36.620507 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh874\" (UniqueName: \"kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-kube-api-access-bh874\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:02:36.620656 master-0 kubenswrapper[4167]: I0217 15:02:36.620531 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-ca\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:02:36.620656 master-0 kubenswrapper[4167]: I0217 15:02:36.620560 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-service-ca-bundle\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:36.620656 master-0 kubenswrapper[4167]: I0217 15:02:36.620585 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8wxf\" (UniqueName: \"kubernetes.io/projected/08e27254-e906-484a-b346-036f898be3ae-kube-api-access-d8wxf\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:02:36.620656 master-0 kubenswrapper[4167]: I0217 15:02:36.620611 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-config\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:02:36.620656 master-0 kubenswrapper[4167]: I0217 15:02:36.620644 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"
Feb 17 15:02:36.620783 master-0 kubenswrapper[4167]: I0217 15:02:36.620662 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-profile-collector-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:02:36.620783 master-0 kubenswrapper[4167]: I0217 15:02:36.620689 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wbvx\" (UniqueName: \"kubernetes.io/projected/61d90bf3-02df-48c8-b2ec-09a1653b0800-kube-api-access-5wbvx\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:02:36.620783 master-0 kubenswrapper[4167]: I0217 15:02:36.620745 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c734c89-515e-4ff0-82d1-831ddaf0b99e-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"
Feb 17 15:02:36.620783 master-0 kubenswrapper[4167]: I0217 15:02:36.620777 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/071566ae-a9ae-4aa9-9dc3-38602363be72-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:36.620889 master-0 kubenswrapper[4167]: I0217 15:02:36.620808 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rddwz\" (UniqueName: \"kubernetes.io/projected/6c734c89-515e-4ff0-82d1-831ddaf0b99e-kube-api-access-rddwz\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"
Feb 17 15:02:36.620889 master-0 kubenswrapper[4167]: I0217 15:02:36.620834 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-bound-sa-token\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:02:36.620889 master-0 kubenswrapper[4167]: I0217 15:02:36.620863 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e259b5a1-837b-4cde-85f7-cd5781af08bd-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv"
Feb 17 15:02:36.620967 master-0 kubenswrapper[4167]: I0217 15:02:36.620894 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af61bda0-c7b4-489d-a671-eaa5299942fe-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9"
Feb 17 15:02:36.620997 master-0 kubenswrapper[4167]: I0217 15:02:36.620963 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:02:36.620997 master-0 kubenswrapper[4167]: I0217 15:02:36.620990 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-config\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:36.621051 master-0 kubenswrapper[4167]: I0217 15:02:36.621022 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65d9f008-7777-48fe-85fe-9d54a7bbcea9-serving-cert\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p"
Feb 17 15:02:36.621080 master-0 kubenswrapper[4167]: I0217 15:02:36.621058 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xbnc\" (UniqueName: \"kubernetes.io/projected/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-kube-api-access-8xbnc\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"
Feb 17 15:02:36.621156 master-0 kubenswrapper[4167]: I0217 15:02:36.621123 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7w4\" (UniqueName: \"kubernetes.io/projected/af61bda0-c7b4-489d-a671-eaa5299942fe-kube-api-access-jt7w4\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9"
Feb 17 15:02:36.621198 master-0 kubenswrapper[4167]: I0217 15:02:36.621163 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:02:36.621227 master-0
kubenswrapper[4167]: I0217 15:02:36.621195 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/553d4535-9985-47e2-83ee-8fcfb6035e7b-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" Feb 17 15:02:36.621254 master-0 kubenswrapper[4167]: I0217 15:02:36.621228 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:36.621280 master-0 kubenswrapper[4167]: I0217 15:02:36.621258 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:36.621336 master-0 kubenswrapper[4167]: I0217 15:02:36.621289 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/553d4535-9985-47e2-83ee-8fcfb6035e7b-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" Feb 17 15:02:36.621369 master-0 kubenswrapper[4167]: I0217 15:02:36.621348 4167 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:02:36.621399 master-0 kubenswrapper[4167]: I0217 15:02:36.621374 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-config\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" Feb 17 15:02:36.621437 master-0 kubenswrapper[4167]: I0217 15:02:36.621400 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af61bda0-c7b4-489d-a671-eaa5299942fe-config\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" Feb 17 15:02:36.621437 master-0 kubenswrapper[4167]: I0217 15:02:36.621426 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv46m\" (UniqueName: \"kubernetes.io/projected/6b25a72d-965f-415c-abc9-09612859e9e0-kube-api-access-fv46m\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" Feb 17 15:02:36.621578 master-0 kubenswrapper[4167]: I0217 15:02:36.621448 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jpgqg\" (UniqueName: \"kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-kube-api-access-jpgqg\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:36.621578 master-0 kubenswrapper[4167]: I0217 15:02:36.621489 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-trusted-ca-bundle\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:02:36.621578 master-0 kubenswrapper[4167]: I0217 15:02:36.621513 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nzlr\" (UniqueName: \"kubernetes.io/projected/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-kube-api-access-7nzlr\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:02:36.621578 master-0 kubenswrapper[4167]: I0217 15:02:36.621539 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65d9f008-7777-48fe-85fe-9d54a7bbcea9-config\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:02:36.621578 master-0 kubenswrapper[4167]: I0217 15:02:36.621561 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:02:36.621578 master-0 kubenswrapper[4167]: I0217 15:02:36.621579 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn8df\" (UniqueName: \"kubernetes.io/projected/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-kube-api-access-wn8df\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:02:36.621732 master-0 kubenswrapper[4167]: I0217 15:02:36.621597 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bpwm\" (UniqueName: \"kubernetes.io/projected/632fa4c3-b717-432c-8c5f-8d809f69c48b-kube-api-access-8bpwm\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q" Feb 17 15:02:36.621732 master-0 kubenswrapper[4167]: I0217 15:02:36.621646 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw6dc\" (UniqueName: \"kubernetes.io/projected/fc76384d-b288-4d30-bc77-f696b62a5f30-kube-api-access-lw6dc\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:02:36.621782 master-0 kubenswrapper[4167]: I0217 15:02:36.621374 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 15:02:36.622806 master-0 kubenswrapper[4167]: I0217 15:02:36.622004 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"] Feb 17 15:02:36.622806 master-0 kubenswrapper[4167]: I0217 15:02:36.622102 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"] Feb 17 15:02:36.623841 master-0 kubenswrapper[4167]: I0217 15:02:36.623782 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"] Feb 17 15:02:36.722029 master-0 kubenswrapper[4167]: I0217 15:02:36.721978 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e259b5a1-837b-4cde-85f7-cd5781af08bd-serving-cert\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" Feb 17 15:02:36.722154 master-0 kubenswrapper[4167]: I0217 15:02:36.722032 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxjqf\" (UniqueName: \"kubernetes.io/projected/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-kube-api-access-gxjqf\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" Feb 17 15:02:36.722154 master-0 kubenswrapper[4167]: I0217 15:02:36.722096 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b167b7b-2280-4c82-ac78-71c57aebe503-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" Feb 17 15:02:36.722154 master-0 kubenswrapper[4167]: I0217 
15:02:36.722141 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:02:36.723213 master-0 kubenswrapper[4167]: I0217 15:02:36.722829 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b167b7b-2280-4c82-ac78-71c57aebe503-config\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" Feb 17 15:02:36.723213 master-0 kubenswrapper[4167]: I0217 15:02:36.722917 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg8h7\" (UniqueName: \"kubernetes.io/projected/257db04b-7203-4a1d-b3d4-bd4db258a3cc-kube-api-access-jg8h7\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:02:36.723213 master-0 kubenswrapper[4167]: I0217 15:02:36.723159 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/61d90bf3-02df-48c8-b2ec-09a1653b0800-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:02:36.723449 master-0 kubenswrapper[4167]: I0217 15:02:36.723395 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: I0217 15:02:36.723677 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/61d90bf3-02df-48c8-b2ec-09a1653b0800-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: I0217 15:02:36.723677 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b167b7b-2280-4c82-ac78-71c57aebe503-config\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: I0217 15:02:36.723438 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: I0217 15:02:36.724102 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e259b5a1-837b-4cde-85f7-cd5781af08bd-config\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: I0217 15:02:36.724123 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: I0217 15:02:36.724144 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-serving-cert\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: I0217 15:02:36.724160 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: E0217 15:02:36.724251 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: I0217 15:02:36.724241 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t2vg\" (UniqueName: \"kubernetes.io/projected/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-kube-api-access-6t2vg\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " 
pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: E0217 15:02:36.724318 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert podName:08e27254-e906-484a-b346-036f898be3ae nodeName:}" failed. No retries permitted until 2026-02-17 15:02:37.224298649 +0000 UTC m=+149.758963461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert") pod "catalog-operator-588944557d-kjh2v" (UID: "08e27254-e906-484a-b346-036f898be3ae") : secret "catalog-operator-serving-cert" not found Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: I0217 15:02:36.724343 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-config\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: I0217 15:02:36.724391 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcb68\" (UniqueName: \"kubernetes.io/projected/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-kube-api-access-jcb68\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: I0217 15:02:36.724431 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g7zh\" (UniqueName: \"kubernetes.io/projected/65d9f008-7777-48fe-85fe-9d54a7bbcea9-kube-api-access-9g7zh\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: 
\"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: I0217 15:02:36.724486 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:02:36.733393 master-0 kubenswrapper[4167]: I0217 15:02:36.724501 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrh2k\" (UniqueName: \"kubernetes.io/projected/071566ae-a9ae-4aa9-9dc3-38602363be72-kube-api-access-hrh2k\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: I0217 15:02:36.724807 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-serving-cert\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: I0217 15:02:36.724856 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh874\" (UniqueName: \"kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-kube-api-access-bh874\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: I0217 15:02:36.724882 4167 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: I0217 15:02:36.724908 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-ca\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: I0217 15:02:36.724935 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-service-ca-bundle\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: I0217 15:02:36.724973 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8wxf\" (UniqueName: \"kubernetes.io/projected/08e27254-e906-484a-b346-036f898be3ae-kube-api-access-d8wxf\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: I0217 15:02:36.725005 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-config\") pod 
\"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: I0217 15:02:36.725054 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: I0217 15:02:36.725086 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e259b5a1-837b-4cde-85f7-cd5781af08bd-config\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: I0217 15:02:36.725113 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-profile-collector-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: I0217 15:02:36.725146 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wbvx\" (UniqueName: \"kubernetes.io/projected/61d90bf3-02df-48c8-b2ec-09a1653b0800-kube-api-access-5wbvx\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: 
I0217 15:02:36.725194 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c734c89-515e-4ff0-82d1-831ddaf0b99e-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: I0217 15:02:36.725221 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/071566ae-a9ae-4aa9-9dc3-38602363be72-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: I0217 15:02:36.725254 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rddwz\" (UniqueName: \"kubernetes.io/projected/6c734c89-515e-4ff0-82d1-831ddaf0b99e-kube-api-access-rddwz\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" Feb 17 15:02:36.734511 master-0 kubenswrapper[4167]: I0217 15:02:36.725278 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-bound-sa-token\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.725300 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e259b5a1-837b-4cde-85f7-cd5781af08bd-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv"
Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.725628 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-config\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"
Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.725894 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-ca\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.726083 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af61bda0-c7b4-489d-a671-eaa5299942fe-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9"
Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.726136 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.726169 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-config\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.726208 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xbnc\" (UniqueName: \"kubernetes.io/projected/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-kube-api-access-8xbnc\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"
Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.726258 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt7w4\" (UniqueName: \"kubernetes.io/projected/af61bda0-c7b4-489d-a671-eaa5299942fe-kube-api-access-jt7w4\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9"
Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.726289 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65d9f008-7777-48fe-85fe-9d54a7bbcea9-serving-cert\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p"
Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.726313 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.726339 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/553d4535-9985-47e2-83ee-8fcfb6035e7b-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"
Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.726366 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.726394 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.726418 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/553d4535-9985-47e2-83ee-8fcfb6035e7b-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"
Feb 17 15:02:36.735350 master-0 kubenswrapper[4167]: I0217 15:02:36.726441 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: I0217 15:02:36.726482 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/071566ae-a9ae-4aa9-9dc3-38602363be72-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: E0217 15:02:36.726596 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: E0217 15:02:36.726642 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:37.226628464 +0000 UTC m=+149.761293266 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "node-tuning-operator-tls" not found
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: I0217 15:02:36.727014 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-service-ca-bundle\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: I0217 15:02:36.726494 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-config\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: I0217 15:02:36.727308 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af61bda0-c7b4-489d-a671-eaa5299942fe-config\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9"
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: I0217 15:02:36.727352 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv46m\" (UniqueName: \"kubernetes.io/projected/6b25a72d-965f-415c-abc9-09612859e9e0-kube-api-access-fv46m\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: I0217 15:02:36.727389 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-trusted-ca-bundle\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: I0217 15:02:36.727440 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nzlr\" (UniqueName: \"kubernetes.io/projected/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-kube-api-access-7nzlr\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: I0217 15:02:36.727506 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65d9f008-7777-48fe-85fe-9d54a7bbcea9-config\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p"
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: I0217 15:02:36.727545 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpgqg\" (UniqueName: \"kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-kube-api-access-jpgqg\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: I0217 15:02:36.727581 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: I0217 15:02:36.727618 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8df\" (UniqueName: \"kubernetes.io/projected/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-kube-api-access-wn8df\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: I0217 15:02:36.727653 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bpwm\" (UniqueName: \"kubernetes.io/projected/632fa4c3-b717-432c-8c5f-8d809f69c48b-kube-api-access-8bpwm\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q"
Feb 17 15:02:36.736372 master-0 kubenswrapper[4167]: I0217 15:02:36.727696 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw6dc\" (UniqueName: \"kubernetes.io/projected/fc76384d-b288-4d30-bc77-f696b62a5f30-kube-api-access-lw6dc\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.727734 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/6c734c89-515e-4ff0-82d1-831ddaf0b99e-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.727784 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.727820 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czt92\" (UniqueName: \"kubernetes.io/projected/c6d23570-21d6-4b08-83fc-8b0827c25313-kube-api-access-czt92\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.727853 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.727892 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-profile-collector-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.727921 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-client\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.727960 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxqt4\" (UniqueName: \"kubernetes.io/projected/801742a6-3735-4883-9676-e852dc4173d2-kube-api-access-qxqt4\") pod \"csi-snapshot-controller-operator-7b87b97578-9fpgj\" (UID: \"801742a6-3735-4883-9676-e852dc4173d2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.727995 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.728020 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b167b7b-2280-4c82-ac78-71c57aebe503-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.728041 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/553d4535-9985-47e2-83ee-8fcfb6035e7b-config\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.728066 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/632fa4c3-b717-432c-8c5f-8d809f69c48b-host-slash\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.728101 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22a30079-d7fc-49cf-882e-1c5022cb5bf6-trusted-ca\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.728134 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/187af679-a062-4f41-81f2-33545f76febf-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.728164 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr"
Feb 17 15:02:36.737521 master-0 kubenswrapper[4167]: I0217 15:02:36.728196 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61d90bf3-02df-48c8-b2ec-09a1653b0800-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: I0217 15:02:36.728227 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/632fa4c3-b717-432c-8c5f-8d809f69c48b-iptables-alerter-script\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q"
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: I0217 15:02:36.728257 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: E0217 15:02:36.728445 4167 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: E0217 15:02:36.728514 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls podName:22a30079-d7fc-49cf-882e-1c5022cb5bf6 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:37.22849905 +0000 UTC m=+149.763163862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls") pod "ingress-operator-c588d8cb4-nclxg" (UID: "22a30079-d7fc-49cf-882e-1c5022cb5bf6") : secret "metrics-tls" not found
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: E0217 15:02:36.728572 4167 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: E0217 15:02:36.728602 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls podName:187af679-a062-4f41-81f2-33545f76febf nodeName:}" failed. No retries permitted until 2026-02-17 15:02:37.228592562 +0000 UTC m=+149.763257384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-dtwmd" (UID: "187af679-a062-4f41-81f2-33545f76febf") : secret "image-registry-operator-tls" not found
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: I0217 15:02:36.728749 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: E0217 15:02:36.729226 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: E0217 15:02:36.729287 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:37.229267298 +0000 UTC m=+149.763932100 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "performance-addon-operator-webhook-cert" not found
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: E0217 15:02:36.729349 4167 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: E0217 15:02:36.729380 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics podName:c6d23570-21d6-4b08-83fc-8b0827c25313 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:37.22937251 +0000 UTC m=+149.764037312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-wqxmh" (UID: "c6d23570-21d6-4b08-83fc-8b0827c25313") : secret "marketplace-operator-metrics" not found
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: I0217 15:02:36.729406 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-serving-cert\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: I0217 15:02:36.729539 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-config\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: I0217 15:02:36.729646 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/632fa4c3-b717-432c-8c5f-8d809f69c48b-host-slash\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q"
Feb 17 15:02:36.738351 master-0 kubenswrapper[4167]: E0217 15:02:36.729776 4167 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: E0217 15:02:36.729867 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs podName:6b25a72d-965f-415c-abc9-09612859e9e0 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:37.229821041 +0000 UTC m=+149.764485973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs") pod "multus-admission-controller-7c64d55f8-fzfsp" (UID: "6b25a72d-965f-415c-abc9-09612859e9e0") : secret "multus-admission-controller-secret" not found
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: E0217 15:02:36.729878 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: E0217 15:02:36.730001 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert podName:257db04b-7203-4a1d-b3d4-bd4db258a3cc nodeName:}" failed. No retries permitted until 2026-02-17 15:02:37.229966695 +0000 UTC m=+149.764631687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert") pod "olm-operator-6b56bd877c-tk8xm" (UID: "257db04b-7203-4a1d-b3d4-bd4db258a3cc") : secret "olm-operator-serving-cert" not found
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: I0217 15:02:36.730137 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: I0217 15:02:36.730643 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/553d4535-9985-47e2-83ee-8fcfb6035e7b-config\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: I0217 15:02:36.730689 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: E0217 15:02:36.730774 4167 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: E0217 15:02:36.730821 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls podName:bf74b8c3-a5a6-4fb9-9d12-3a47c759f699 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:37.230807384 +0000 UTC m=+149.765472196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ddgs9" (UID: "bf74b8c3-a5a6-4fb9-9d12-3a47c759f699") : secret "cluster-monitoring-operator-tls" not found
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: E0217 15:02:36.731249 4167 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: E0217 15:02:36.731314 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls podName:fc76384d-b288-4d30-bc77-f696b62a5f30 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:37.231292577 +0000 UTC m=+149.765957479 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls") pod "dns-operator-86b8869b79-lmqrr" (UID: "fc76384d-b288-4d30-bc77-f696b62a5f30") : secret "metrics-tls" not found
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: I0217 15:02:36.732021 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-profile-collector-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: I0217 15:02:36.732243 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-config\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: I0217 15:02:36.732552 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-config\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: I0217 15:02:36.733197 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61d90bf3-02df-48c8-b2ec-09a1653b0800-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:02:36.739346 master-0 kubenswrapper[4167]: I0217 15:02:36.733334 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-trusted-ca-bundle\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:36.740850 master-0 kubenswrapper[4167]: I0217 15:02:36.733382 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/553d4535-9985-47e2-83ee-8fcfb6035e7b-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"
Feb 17 15:02:36.740850 master-0 kubenswrapper[4167]: I0217 15:02:36.733734 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22a30079-d7fc-49cf-882e-1c5022cb5bf6-trusted-ca\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:02:36.740850 master-0 kubenswrapper[4167]: I0217 15:02:36.733824 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65d9f008-7777-48fe-85fe-9d54a7bbcea9-serving-cert\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p"
Feb 17 15:02:36.740850 master-0 kubenswrapper[4167]: E0217 15:02:36.733903 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 17 15:02:36.740850 master-0 kubenswrapper[4167]: E0217 15:02:36.733948 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert podName:33e819b0-5a3f-4c2d-9dc7-8b0231804cdb nodeName:}" failed. No retries permitted until 2026-02-17 15:02:37.23393503 +0000 UTC m=+149.768600042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-t7n5b" (UID: "33e819b0-5a3f-4c2d-9dc7-8b0231804cdb") : secret "package-server-manager-serving-cert" not found
Feb 17 15:02:36.740850 master-0 kubenswrapper[4167]: I0217 15:02:36.734151 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/632fa4c3-b717-432c-8c5f-8d809f69c48b-iptables-alerter-script\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q"
Feb 17 15:02:36.740850 master-0 kubenswrapper[4167]: I0217 15:02:36.735157 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65d9f008-7777-48fe-85fe-9d54a7bbcea9-config\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p"
Feb 17 15:02:36.740850 master-0 kubenswrapper[4167]: I0217 15:02:36.736000 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-client\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:02:36.740850 master-0 kubenswrapper[4167]: I0217 15:02:36.739146 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c734c89-515e-4ff0-82d1-831ddaf0b99e-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"
Feb 17 15:02:36.740850 master-0 kubenswrapper[4167]: I0217 15:02:36.739830 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-profile-collector-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:02:36.744019 master-0 kubenswrapper[4167]: I0217 15:02:36.743968 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e259b5a1-837b-4cde-85f7-cd5781af08bd-serving-cert\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv"
Feb 17 15:02:36.752190 master-0 kubenswrapper[4167]: I0217 15:02:36.750798 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af61bda0-c7b4-489d-a671-eaa5299942fe-config\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9"
Feb 17 15:02:36.752190 master-0 kubenswrapper[4167]: I0217 15:02:36.751517 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b167b7b-2280-4c82-ac78-71c57aebe503-serving-cert\") pod
\"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" Feb 17 15:02:36.752190 master-0 kubenswrapper[4167]: I0217 15:02:36.751594 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/187af679-a062-4f41-81f2-33545f76febf-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:36.755430 master-0 kubenswrapper[4167]: I0217 15:02:36.755381 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t2vg\" (UniqueName: \"kubernetes.io/projected/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-kube-api-access-6t2vg\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:02:36.755590 master-0 kubenswrapper[4167]: I0217 15:02:36.755451 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czt92\" (UniqueName: \"kubernetes.io/projected/c6d23570-21d6-4b08-83fc-8b0827c25313-kube-api-access-czt92\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:02:36.756338 master-0 kubenswrapper[4167]: I0217 15:02:36.755704 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:36.756338 master-0 
kubenswrapper[4167]: I0217 15:02:36.755930 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af61bda0-c7b4-489d-a671-eaa5299942fe-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" Feb 17 15:02:36.758027 master-0 kubenswrapper[4167]: I0217 15:02:36.757950 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8wxf\" (UniqueName: \"kubernetes.io/projected/08e27254-e906-484a-b346-036f898be3ae-kube-api-access-d8wxf\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:02:36.758156 master-0 kubenswrapper[4167]: I0217 15:02:36.758042 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rddwz\" (UniqueName: \"kubernetes.io/projected/6c734c89-515e-4ff0-82d1-831ddaf0b99e-kube-api-access-rddwz\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" Feb 17 15:02:36.758795 master-0 kubenswrapper[4167]: I0217 15:02:36.758750 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b167b7b-2280-4c82-ac78-71c57aebe503-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" Feb 17 15:02:36.759928 master-0 kubenswrapper[4167]: I0217 15:02:36.733136 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: 
\"kubernetes.io/empty-dir/6c734c89-515e-4ff0-82d1-831ddaf0b99e-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" Feb 17 15:02:36.760084 master-0 kubenswrapper[4167]: I0217 15:02:36.760061 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh874\" (UniqueName: \"kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-kube-api-access-bh874\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:36.760363 master-0 kubenswrapper[4167]: I0217 15:02:36.760276 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-serving-cert\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:36.761343 master-0 kubenswrapper[4167]: I0217 15:02:36.760512 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g7zh\" (UniqueName: \"kubernetes.io/projected/65d9f008-7777-48fe-85fe-9d54a7bbcea9-kube-api-access-9g7zh\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:02:36.771218 master-0 kubenswrapper[4167]: I0217 15:02:36.766263 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e259b5a1-837b-4cde-85f7-cd5781af08bd-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" Feb 17 15:02:36.771218 
master-0 kubenswrapper[4167]: I0217 15:02:36.766571 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxjqf\" (UniqueName: \"kubernetes.io/projected/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-kube-api-access-gxjqf\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" Feb 17 15:02:36.771218 master-0 kubenswrapper[4167]: I0217 15:02:36.766844 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrh2k\" (UniqueName: \"kubernetes.io/projected/071566ae-a9ae-4aa9-9dc3-38602363be72-kube-api-access-hrh2k\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:02:36.771218 master-0 kubenswrapper[4167]: I0217 15:02:36.769355 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nzlr\" (UniqueName: \"kubernetes.io/projected/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-kube-api-access-7nzlr\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:02:36.771218 master-0 kubenswrapper[4167]: I0217 15:02:36.770425 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg8h7\" (UniqueName: \"kubernetes.io/projected/257db04b-7203-4a1d-b3d4-bd4db258a3cc-kube-api-access-jg8h7\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:02:36.771743 master-0 kubenswrapper[4167]: I0217 15:02:36.771709 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-bound-sa-token\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:36.772256 master-0 kubenswrapper[4167]: I0217 15:02:36.772226 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcb68\" (UniqueName: \"kubernetes.io/projected/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-kube-api-access-jcb68\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:36.774638 master-0 kubenswrapper[4167]: I0217 15:02:36.774592 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xbnc\" (UniqueName: \"kubernetes.io/projected/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-kube-api-access-8xbnc\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" Feb 17 15:02:36.783897 master-0 kubenswrapper[4167]: I0217 15:02:36.783847 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7w4\" (UniqueName: \"kubernetes.io/projected/af61bda0-c7b4-489d-a671-eaa5299942fe-kube-api-access-jt7w4\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" Feb 17 15:02:36.799189 master-0 kubenswrapper[4167]: I0217 15:02:36.799133 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wbvx\" (UniqueName: \"kubernetes.io/projected/61d90bf3-02df-48c8-b2ec-09a1653b0800-kube-api-access-5wbvx\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " 
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:02:36.818527 master-0 kubenswrapper[4167]: I0217 15:02:36.818437 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bpwm\" (UniqueName: \"kubernetes.io/projected/632fa4c3-b717-432c-8c5f-8d809f69c48b-kube-api-access-8bpwm\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q" Feb 17 15:02:36.826385 master-0 kubenswrapper[4167]: I0217 15:02:36.826343 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:02:36.855214 master-0 kubenswrapper[4167]: I0217 15:02:36.855162 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/553d4535-9985-47e2-83ee-8fcfb6035e7b-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" Feb 17 15:02:36.855966 master-0 kubenswrapper[4167]: I0217 15:02:36.855915 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" Feb 17 15:02:36.860205 master-0 kubenswrapper[4167]: I0217 15:02:36.859771 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv46m\" (UniqueName: \"kubernetes.io/projected/6b25a72d-965f-415c-abc9-09612859e9e0-kube-api-access-fv46m\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" Feb 17 15:02:36.892624 master-0 kubenswrapper[4167]: I0217 15:02:36.892552 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw6dc\" (UniqueName: \"kubernetes.io/projected/fc76384d-b288-4d30-bc77-f696b62a5f30-kube-api-access-lw6dc\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:02:36.905185 master-0 kubenswrapper[4167]: I0217 15:02:36.905123 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxqt4\" (UniqueName: \"kubernetes.io/projected/801742a6-3735-4883-9676-e852dc4173d2-kube-api-access-qxqt4\") pod \"csi-snapshot-controller-operator-7b87b97578-9fpgj\" (UID: \"801742a6-3735-4883-9676-e852dc4173d2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj" Feb 17 15:02:36.937153 master-0 kubenswrapper[4167]: I0217 15:02:36.937087 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpgqg\" (UniqueName: \"kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-kube-api-access-jpgqg\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:36.941647 master-0 kubenswrapper[4167]: I0217 15:02:36.939963 4167 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" Feb 17 15:02:36.944723 master-0 kubenswrapper[4167]: I0217 15:02:36.944687 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn8df\" (UniqueName: \"kubernetes.io/projected/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-kube-api-access-wn8df\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:02:36.952409 master-0 kubenswrapper[4167]: I0217 15:02:36.952355 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" Feb 17 15:02:36.961870 master-0 kubenswrapper[4167]: I0217 15:02:36.961832 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" Feb 17 15:02:36.969682 master-0 kubenswrapper[4167]: I0217 15:02:36.969622 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:02:36.973884 master-0 kubenswrapper[4167]: I0217 15:02:36.973799 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:02:37.010129 master-0 kubenswrapper[4167]: I0217 15:02:36.998416 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:37.010129 master-0 kubenswrapper[4167]: I0217 15:02:37.006557 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" Feb 17 15:02:37.018246 master-0 kubenswrapper[4167]: I0217 15:02:37.018193 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" Feb 17 15:02:37.033523 master-0 kubenswrapper[4167]: I0217 15:02:37.026281 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" Feb 17 15:02:37.060896 master-0 kubenswrapper[4167]: I0217 15:02:37.060424 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p"] Feb 17 15:02:37.080469 master-0 kubenswrapper[4167]: W0217 15:02:37.080252 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65d9f008_7777_48fe_85fe_9d54a7bbcea9.slice/crio-2f085db99c3eb79269fb1e6fd494d3581c1cf5a588e1bb05f613f668bdfc997e WatchSource:0}: Error finding container 2f085db99c3eb79269fb1e6fd494d3581c1cf5a588e1bb05f613f668bdfc997e: Status 404 returned error can't find the container with id 2f085db99c3eb79269fb1e6fd494d3581c1cf5a588e1bb05f613f668bdfc997e Feb 17 15:02:37.104070 master-0 kubenswrapper[4167]: I0217 15:02:37.104026 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-v2h9q" Feb 17 15:02:37.128256 master-0 kubenswrapper[4167]: I0217 15:02:37.127682 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9"] Feb 17 15:02:37.171761 master-0 kubenswrapper[4167]: I0217 15:02:37.168695 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj" Feb 17 15:02:37.233654 master-0 kubenswrapper[4167]: I0217 15:02:37.233594 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:02:37.233729 master-0 kubenswrapper[4167]: I0217 15:02:37.233661 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:02:37.233729 master-0 kubenswrapper[4167]: I0217 15:02:37.233686 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:37.233729 master-0 kubenswrapper[4167]: I0217 15:02:37.233709 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:37.233820 master-0 kubenswrapper[4167]: I0217 15:02:37.233742 4167 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:02:37.233820 master-0 kubenswrapper[4167]: I0217 15:02:37.233763 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" Feb 17 15:02:37.233820 master-0 kubenswrapper[4167]: I0217 15:02:37.233783 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:02:37.233820 master-0 kubenswrapper[4167]: I0217 15:02:37.233802 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:02:37.233933 master-0 kubenswrapper[4167]: I0217 15:02:37.233827 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: 
\"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:02:37.233933 master-0 kubenswrapper[4167]: I0217 15:02:37.233852 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:02:37.234138 master-0 kubenswrapper[4167]: E0217 15:02:37.234105 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 17 15:02:37.234199 master-0 kubenswrapper[4167]: E0217 15:02:37.234183 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:38.234161243 +0000 UTC m=+150.768826045 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "node-tuning-operator-tls" not found Feb 17 15:02:37.234406 master-0 kubenswrapper[4167]: E0217 15:02:37.234364 4167 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 17 15:02:37.234507 master-0 kubenswrapper[4167]: E0217 15:02:37.234404 4167 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 17 15:02:37.234507 master-0 kubenswrapper[4167]: E0217 15:02:37.234481 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 17 15:02:37.234603 master-0 kubenswrapper[4167]: E0217 15:02:37.234410 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics podName:c6d23570-21d6-4b08-83fc-8b0827c25313 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:38.234398248 +0000 UTC m=+150.769063050 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-wqxmh" (UID: "c6d23570-21d6-4b08-83fc-8b0827c25313") : secret "marketplace-operator-metrics" not found Feb 17 15:02:37.234603 master-0 kubenswrapper[4167]: E0217 15:02:37.234514 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 17 15:02:37.234603 master-0 kubenswrapper[4167]: E0217 15:02:37.234591 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs podName:6b25a72d-965f-415c-abc9-09612859e9e0 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:38.234534421 +0000 UTC m=+150.769199223 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs") pod "multus-admission-controller-7c64d55f8-fzfsp" (UID: "6b25a72d-965f-415c-abc9-09612859e9e0") : secret "multus-admission-controller-secret" not found Feb 17 15:02:37.234728 master-0 kubenswrapper[4167]: E0217 15:02:37.234670 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:38.234655944 +0000 UTC m=+150.769320956 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "performance-addon-operator-webhook-cert" not found Feb 17 15:02:37.234728 master-0 kubenswrapper[4167]: E0217 15:02:37.234692 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert podName:257db04b-7203-4a1d-b3d4-bd4db258a3cc nodeName:}" failed. No retries permitted until 2026-02-17 15:02:38.234683945 +0000 UTC m=+150.769348977 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert") pod "olm-operator-6b56bd877c-tk8xm" (UID: "257db04b-7203-4a1d-b3d4-bd4db258a3cc") : secret "olm-operator-serving-cert" not found Feb 17 15:02:37.234728 master-0 kubenswrapper[4167]: E0217 15:02:37.234695 4167 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 17 15:02:37.234803 master-0 kubenswrapper[4167]: E0217 15:02:37.234753 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 17 15:02:37.234803 master-0 kubenswrapper[4167]: E0217 15:02:37.234787 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls podName:187af679-a062-4f41-81f2-33545f76febf nodeName:}" failed. No retries permitted until 2026-02-17 15:02:38.234764697 +0000 UTC m=+150.769429689 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-dtwmd" (UID: "187af679-a062-4f41-81f2-33545f76febf") : secret "image-registry-operator-tls" not found
Feb 17 15:02:37.234862 master-0 kubenswrapper[4167]: E0217 15:02:37.234827 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert podName:08e27254-e906-484a-b346-036f898be3ae nodeName:}" failed. No retries permitted until 2026-02-17 15:02:38.234804008 +0000 UTC m=+150.769469030 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert") pod "catalog-operator-588944557d-kjh2v" (UID: "08e27254-e906-484a-b346-036f898be3ae") : secret "catalog-operator-serving-cert" not found
Feb 17 15:02:37.236657 master-0 kubenswrapper[4167]: E0217 15:02:37.236625 4167 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 17 15:02:37.236699 master-0 kubenswrapper[4167]: E0217 15:02:37.236661 4167 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 17 15:02:37.236699 master-0 kubenswrapper[4167]: E0217 15:02:37.236690 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls podName:22a30079-d7fc-49cf-882e-1c5022cb5bf6 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:38.236677613 +0000 UTC m=+150.771342415 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls") pod "ingress-operator-c588d8cb4-nclxg" (UID: "22a30079-d7fc-49cf-882e-1c5022cb5bf6") : secret "metrics-tls" not found
Feb 17 15:02:37.236755 master-0 kubenswrapper[4167]: E0217 15:02:37.236712 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls podName:bf74b8c3-a5a6-4fb9-9d12-3a47c759f699 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:38.236697984 +0000 UTC m=+150.771362986 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ddgs9" (UID: "bf74b8c3-a5a6-4fb9-9d12-3a47c759f699") : secret "cluster-monitoring-operator-tls" not found
Feb 17 15:02:37.236858 master-0 kubenswrapper[4167]: E0217 15:02:37.236825 4167 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 17 15:02:37.236930 master-0 kubenswrapper[4167]: E0217 15:02:37.236907 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls podName:fc76384d-b288-4d30-bc77-f696b62a5f30 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:38.236880448 +0000 UTC m=+150.771545450 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls") pod "dns-operator-86b8869b79-lmqrr" (UID: "fc76384d-b288-4d30-bc77-f696b62a5f30") : secret "metrics-tls" not found
Feb 17 15:02:37.335358 master-0 kubenswrapper[4167]: I0217 15:02:37.334830 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:02:37.335358 master-0 kubenswrapper[4167]: E0217 15:02:37.335059 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 17 15:02:37.335358 master-0 kubenswrapper[4167]: E0217 15:02:37.335103 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert podName:33e819b0-5a3f-4c2d-9dc7-8b0231804cdb nodeName:}" failed. No retries permitted until 2026-02-17 15:02:38.335087562 +0000 UTC m=+150.869752364 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-t7n5b" (UID: "33e819b0-5a3f-4c2d-9dc7-8b0231804cdb") : secret "package-server-manager-serving-cert" not found
Feb 17 15:02:37.353966 master-0 kubenswrapper[4167]: I0217 15:02:37.353871 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8"]
Feb 17 15:02:37.355987 master-0 kubenswrapper[4167]: I0217 15:02:37.355873 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"]
Feb 17 15:02:37.358202 master-0 kubenswrapper[4167]: W0217 15:02:37.358160 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b167b7b_2280_4c82_ac78_71c57aebe503.slice/crio-90185a33c5824935ed29e0663472f7e339a5f2977a9bf3a460b9dc4b17b433c5 WatchSource:0}: Error finding container 90185a33c5824935ed29e0663472f7e339a5f2977a9bf3a460b9dc4b17b433c5: Status 404 returned error can't find the container with id 90185a33c5824935ed29e0663472f7e339a5f2977a9bf3a460b9dc4b17b433c5
Feb 17 15:02:37.361158 master-0 kubenswrapper[4167]: W0217 15:02:37.361110 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c58265d_32fb_4cf0_97d8_6c9a5d37fad9.slice/crio-b52356412bf9fd67c8890a1f481f22c4b980d0a142cbe7f6af8b97d5f5816dbd WatchSource:0}: Error finding container b52356412bf9fd67c8890a1f481f22c4b980d0a142cbe7f6af8b97d5f5816dbd: Status 404 returned error can't find the container with id b52356412bf9fd67c8890a1f481f22c4b980d0a142cbe7f6af8b97d5f5816dbd
Feb 17 15:02:37.396374 master-0 kubenswrapper[4167]: I0217 15:02:37.396323 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj"]
Feb 17 15:02:37.400467 master-0 kubenswrapper[4167]: W0217 15:02:37.400428 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod801742a6_3735_4883_9676_e852dc4173d2.slice/crio-798daf69301c189b976c0bf567e715514f72cff14e7ac9ab6e91e0049055219a WatchSource:0}: Error finding container 798daf69301c189b976c0bf567e715514f72cff14e7ac9ab6e91e0049055219a: Status 404 returned error can't find the container with id 798daf69301c189b976c0bf567e715514f72cff14e7ac9ab6e91e0049055219a
Feb 17 15:02:37.433786 master-0 kubenswrapper[4167]: I0217 15:02:37.433742 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv"]
Feb 17 15:02:37.446666 master-0 kubenswrapper[4167]: W0217 15:02:37.446637 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode259b5a1_837b_4cde_85f7_cd5781af08bd.slice/crio-509218f044076ea16f2a86823735e4d543562d1744406223dc68c1c720aa876c WatchSource:0}: Error finding container 509218f044076ea16f2a86823735e4d543562d1744406223dc68c1c720aa876c: Status 404 returned error can't find the container with id 509218f044076ea16f2a86823735e4d543562d1744406223dc68c1c720aa876c
Feb 17 15:02:37.453835 master-0 kubenswrapper[4167]: I0217 15:02:37.453801 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"]
Feb 17 15:02:37.457528 master-0 kubenswrapper[4167]: I0217 15:02:37.457291 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"]
Feb 17 15:02:37.460255 master-0 kubenswrapper[4167]: W0217 15:02:37.460205 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c734c89_515e_4ff0_82d1_831ddaf0b99e.slice/crio-af54fa9c62b28e67f68bc78aa9667df2cc9eef72a60d8febb3ead750686eb226 WatchSource:0}: Error finding container af54fa9c62b28e67f68bc78aa9667df2cc9eef72a60d8febb3ead750686eb226: Status 404 returned error can't find the container with id af54fa9c62b28e67f68bc78aa9667df2cc9eef72a60d8febb3ead750686eb226
Feb 17 15:02:37.461852 master-0 kubenswrapper[4167]: W0217 15:02:37.461831 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7ed6ff7_56ba_4806_9e09_b8ae6d79cfda.slice/crio-68f6c5cb6453d46aa30d342c53404fb01aa054a3d48f9b074af6e17af00f9a94 WatchSource:0}: Error finding container 68f6c5cb6453d46aa30d342c53404fb01aa054a3d48f9b074af6e17af00f9a94: Status 404 returned error can't find the container with id 68f6c5cb6453d46aa30d342c53404fb01aa054a3d48f9b074af6e17af00f9a94
Feb 17 15:02:37.514119 master-0 kubenswrapper[4167]: I0217 15:02:37.513998 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-755d954778-jrdqm"]
Feb 17 15:02:37.514119 master-0 kubenswrapper[4167]: I0217 15:02:37.514044 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"]
Feb 17 15:02:37.520986 master-0 kubenswrapper[4167]: I0217 15:02:37.520095 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"]
Feb 17 15:02:37.520986 master-0 kubenswrapper[4167]: W0217 15:02:37.520615 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod553d4535_9985_47e2_83ee_8fcfb6035e7b.slice/crio-c78b15cceeb9a13c85a4191822de34b4c848b664ef3622c58cc74eb63d4ebbb5 WatchSource:0}: Error finding container c78b15cceeb9a13c85a4191822de34b4c848b664ef3622c58cc74eb63d4ebbb5: Status 404 returned error can't find the container with id c78b15cceeb9a13c85a4191822de34b4c848b664ef3622c58cc74eb63d4ebbb5
Feb 17 15:02:37.523829 master-0 kubenswrapper[4167]: W0217 15:02:37.523792 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9b3f722_fb34_4ff5_b28b_fc24f43d85ae.slice/crio-b7039f4f79e0da973650e82a180456282f520c1801cf5f3f024cba6892c24045 WatchSource:0}: Error finding container b7039f4f79e0da973650e82a180456282f520c1801cf5f3f024cba6892c24045: Status 404 returned error can't find the container with id b7039f4f79e0da973650e82a180456282f520c1801cf5f3f024cba6892c24045
Feb 17 15:02:37.526002 master-0 kubenswrapper[4167]: W0217 15:02:37.525676 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61d90bf3_02df_48c8_b2ec_09a1653b0800.slice/crio-ec0152f98764cdbb982d9d6afbcb74cd9b99357115a9c691e939ad71b14ad183 WatchSource:0}: Error finding container ec0152f98764cdbb982d9d6afbcb74cd9b99357115a9c691e939ad71b14ad183: Status 404 returned error can't find the container with id ec0152f98764cdbb982d9d6afbcb74cd9b99357115a9c691e939ad71b14ad183
Feb 17 15:02:37.532559 master-0 kubenswrapper[4167]: I0217 15:02:37.530878 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"]
Feb 17 15:02:37.537610 master-0 kubenswrapper[4167]: W0217 15:02:37.537426 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2546ffc_8d0a_4010_a3bd_9e69b6dbea40.slice/crio-c9a0cb53cadb3321345d154cf27268733399d5b983fe25d9e3ac83b00fa3506d WatchSource:0}: Error finding container c9a0cb53cadb3321345d154cf27268733399d5b983fe25d9e3ac83b00fa3506d: Status 404 returned error can't find the container with id c9a0cb53cadb3321345d154cf27268733399d5b983fe25d9e3ac83b00fa3506d
Feb 17 15:02:37.941109 master-0 kubenswrapper[4167]: I0217 15:02:37.941037 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerStarted","Data":"b7039f4f79e0da973650e82a180456282f520c1801cf5f3f024cba6892c24045"}
Feb 17 15:02:37.942242 master-0 kubenswrapper[4167]: I0217 15:02:37.942205 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" event={"ID":"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda","Type":"ContainerStarted","Data":"68f6c5cb6453d46aa30d342c53404fb01aa054a3d48f9b074af6e17af00f9a94"}
Feb 17 15:02:37.943356 master-0 kubenswrapper[4167]: I0217 15:02:37.943327 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" event={"ID":"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40","Type":"ContainerStarted","Data":"c9a0cb53cadb3321345d154cf27268733399d5b983fe25d9e3ac83b00fa3506d"}
Feb 17 15:02:37.944370 master-0 kubenswrapper[4167]: I0217 15:02:37.944340 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" event={"ID":"553d4535-9985-47e2-83ee-8fcfb6035e7b","Type":"ContainerStarted","Data":"c78b15cceeb9a13c85a4191822de34b4c848b664ef3622c58cc74eb63d4ebbb5"}
Feb 17 15:02:37.945958 master-0 kubenswrapper[4167]: I0217 15:02:37.945928 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" event={"ID":"e259b5a1-837b-4cde-85f7-cd5781af08bd","Type":"ContainerStarted","Data":"8e1472c1d1be3f277a2b834719c46bd320c628415b71f468a2bd1ad63cb18ee3"}
Feb 17 15:02:37.945958 master-0 kubenswrapper[4167]: I0217 15:02:37.945955 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" event={"ID":"e259b5a1-837b-4cde-85f7-cd5781af08bd","Type":"ContainerStarted","Data":"509218f044076ea16f2a86823735e4d543562d1744406223dc68c1c720aa876c"}
Feb 17 15:02:37.946962 master-0 kubenswrapper[4167]: I0217 15:02:37.946939 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" event={"ID":"af61bda0-c7b4-489d-a671-eaa5299942fe","Type":"ContainerStarted","Data":"260124ead6b34d5e3c90fbb769ec2cf0de3926cb1ef0da2632429f164c63d3f5"}
Feb 17 15:02:37.948560 master-0 kubenswrapper[4167]: I0217 15:02:37.948526 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" event={"ID":"65d9f008-7777-48fe-85fe-9d54a7bbcea9","Type":"ContainerStarted","Data":"2f085db99c3eb79269fb1e6fd494d3581c1cf5a588e1bb05f613f668bdfc997e"}
Feb 17 15:02:37.950048 master-0 kubenswrapper[4167]: I0217 15:02:37.950013 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" event={"ID":"2b167b7b-2280-4c82-ac78-71c57aebe503","Type":"ContainerStarted","Data":"90185a33c5824935ed29e0663472f7e339a5f2977a9bf3a460b9dc4b17b433c5"}
Feb 17 15:02:37.950965 master-0 kubenswrapper[4167]: I0217 15:02:37.950898 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj" event={"ID":"801742a6-3735-4883-9676-e852dc4173d2","Type":"ContainerStarted","Data":"798daf69301c189b976c0bf567e715514f72cff14e7ac9ab6e91e0049055219a"}
Feb 17 15:02:37.951935 master-0 kubenswrapper[4167]: I0217 15:02:37.951877 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" event={"ID":"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9","Type":"ContainerStarted","Data":"b52356412bf9fd67c8890a1f481f22c4b980d0a142cbe7f6af8b97d5f5816dbd"}
Feb 17 15:02:37.953331 master-0 kubenswrapper[4167]: I0217 15:02:37.953306 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" event={"ID":"6c734c89-515e-4ff0-82d1-831ddaf0b99e","Type":"ContainerStarted","Data":"af54fa9c62b28e67f68bc78aa9667df2cc9eef72a60d8febb3ead750686eb226"}
Feb 17 15:02:37.954769 master-0 kubenswrapper[4167]: I0217 15:02:37.954728 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-v2h9q" event={"ID":"632fa4c3-b717-432c-8c5f-8d809f69c48b","Type":"ContainerStarted","Data":"6af13ec50eaaf18a25827e26c3ea1670c47ef4c0aea537a274e7191217763a74"}
Feb 17 15:02:37.956101 master-0 kubenswrapper[4167]: I0217 15:02:37.956069 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerStarted","Data":"ec0152f98764cdbb982d9d6afbcb74cd9b99357115a9c691e939ad71b14ad183"}
Feb 17 15:02:37.961299 master-0 kubenswrapper[4167]: I0217 15:02:37.961223 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" podStartSLOduration=113.961208743 podStartE2EDuration="1m53.961208743s" podCreationTimestamp="2026-02-17 15:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:02:37.9598253 +0000 UTC m=+150.494490102" watchObservedRunningTime="2026-02-17 15:02:37.961208743 +0000 UTC m=+150.495873555"
Feb 17 15:02:38.244012 master-0 kubenswrapper[4167]: I0217 15:02:38.243917 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:02:38.244012 master-0 kubenswrapper[4167]: I0217 15:02:38.243964 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:02:38.244012 master-0 kubenswrapper[4167]: I0217 15:02:38.243986 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:38.244258 master-0 kubenswrapper[4167]: E0217 15:02:38.244103 4167 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 17 15:02:38.244258 master-0 kubenswrapper[4167]: E0217 15:02:38.244107 4167 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 17 15:02:38.244258 master-0 kubenswrapper[4167]: I0217 15:02:38.244131 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr"
Feb 17 15:02:38.244258 master-0 kubenswrapper[4167]: E0217 15:02:38.244163 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs podName:6b25a72d-965f-415c-abc9-09612859e9e0 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:40.244145296 +0000 UTC m=+152.778810098 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs") pod "multus-admission-controller-7c64d55f8-fzfsp" (UID: "6b25a72d-965f-415c-abc9-09612859e9e0") : secret "multus-admission-controller-secret" not found
Feb 17 15:02:38.244258 master-0 kubenswrapper[4167]: E0217 15:02:38.244190 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics podName:c6d23570-21d6-4b08-83fc-8b0827c25313 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:40.244171957 +0000 UTC m=+152.778836759 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-wqxmh" (UID: "c6d23570-21d6-4b08-83fc-8b0827c25313") : secret "marketplace-operator-metrics" not found
Feb 17 15:02:38.244258 master-0 kubenswrapper[4167]: I0217 15:02:38.244217 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:02:38.244258 master-0 kubenswrapper[4167]: E0217 15:02:38.244222 4167 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 17 15:02:38.244482 master-0 kubenswrapper[4167]: E0217 15:02:38.244251 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 17 15:02:38.244482 master-0 kubenswrapper[4167]: E0217 15:02:38.244285 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 17 15:02:38.244482 master-0 kubenswrapper[4167]: E0217 15:02:38.244269 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls podName:fc76384d-b288-4d30-bc77-f696b62a5f30 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:40.244261659 +0000 UTC m=+152.778926461 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls") pod "dns-operator-86b8869b79-lmqrr" (UID: "fc76384d-b288-4d30-bc77-f696b62a5f30") : secret "metrics-tls" not found
Feb 17 15:02:38.244482 master-0 kubenswrapper[4167]: E0217 15:02:38.244345 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:40.24433204 +0000 UTC m=+152.778996912 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "performance-addon-operator-webhook-cert" not found
Feb 17 15:02:38.244482 master-0 kubenswrapper[4167]: I0217 15:02:38.244375 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:38.244482 master-0 kubenswrapper[4167]: E0217 15:02:38.244422 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 17 15:02:38.244482 master-0 kubenswrapper[4167]: I0217 15:02:38.244444 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:02:38.244482 master-0 kubenswrapper[4167]: E0217 15:02:38.244471 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:40.244446474 +0000 UTC m=+152.779111386 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "node-tuning-operator-tls" not found
Feb 17 15:02:38.245648 master-0 kubenswrapper[4167]: I0217 15:02:38.244488 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:02:38.245648 master-0 kubenswrapper[4167]: E0217 15:02:38.244499 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert podName:08e27254-e906-484a-b346-036f898be3ae nodeName:}" failed. No retries permitted until 2026-02-17 15:02:40.244484315 +0000 UTC m=+152.779149207 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert") pod "catalog-operator-588944557d-kjh2v" (UID: "08e27254-e906-484a-b346-036f898be3ae") : secret "catalog-operator-serving-cert" not found
Feb 17 15:02:38.245648 master-0 kubenswrapper[4167]: I0217 15:02:38.244522 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:02:38.245648 master-0 kubenswrapper[4167]: E0217 15:02:38.244532 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 17 15:02:38.245648 master-0 kubenswrapper[4167]: E0217 15:02:38.244549 4167 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 17 15:02:38.245648 master-0 kubenswrapper[4167]: I0217 15:02:38.244553 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:38.245648 master-0 kubenswrapper[4167]: E0217 15:02:38.244560 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert podName:257db04b-7203-4a1d-b3d4-bd4db258a3cc nodeName:}" failed. No retries permitted until 2026-02-17 15:02:40.244550986 +0000 UTC m=+152.779215898 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert") pod "olm-operator-6b56bd877c-tk8xm" (UID: "257db04b-7203-4a1d-b3d4-bd4db258a3cc") : secret "olm-operator-serving-cert" not found
Feb 17 15:02:38.245648 master-0 kubenswrapper[4167]: E0217 15:02:38.244600 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls podName:22a30079-d7fc-49cf-882e-1c5022cb5bf6 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:40.244591707 +0000 UTC m=+152.779256619 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls") pod "ingress-operator-c588d8cb4-nclxg" (UID: "22a30079-d7fc-49cf-882e-1c5022cb5bf6") : secret "metrics-tls" not found
Feb 17 15:02:38.245648 master-0 kubenswrapper[4167]: E0217 15:02:38.244623 4167 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 17 15:02:38.245648 master-0 kubenswrapper[4167]: E0217 15:02:38.244650 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls podName:bf74b8c3-a5a6-4fb9-9d12-3a47c759f699 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:40.244640829 +0000 UTC m=+152.779305741 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ddgs9" (UID: "bf74b8c3-a5a6-4fb9-9d12-3a47c759f699") : secret "cluster-monitoring-operator-tls" not found
Feb 17 15:02:38.245648 master-0 kubenswrapper[4167]: E0217 15:02:38.244673 4167 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 17 15:02:38.245648 master-0 kubenswrapper[4167]: E0217 15:02:38.244708 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls podName:187af679-a062-4f41-81f2-33545f76febf nodeName:}" failed. No retries permitted until 2026-02-17 15:02:40.24469941 +0000 UTC m=+152.779364322 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-dtwmd" (UID: "187af679-a062-4f41-81f2-33545f76febf") : secret "image-registry-operator-tls" not found
Feb 17 15:02:38.345269 master-0 kubenswrapper[4167]: I0217 15:02:38.345173 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:02:38.345520 master-0 kubenswrapper[4167]: E0217 15:02:38.345353 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 17 15:02:38.345520 master-0 kubenswrapper[4167]: E0217 15:02:38.345485 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert podName:33e819b0-5a3f-4c2d-9dc7-8b0231804cdb nodeName:}" failed. No retries permitted until 2026-02-17 15:02:40.345448635 +0000 UTC m=+152.880113437 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-t7n5b" (UID: "33e819b0-5a3f-4c2d-9dc7-8b0231804cdb") : secret "package-server-manager-serving-cert" not found
Feb 17 15:02:40.268631 master-0 kubenswrapper[4167]: I0217 15:02:40.268504 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:02:40.268631 master-0 kubenswrapper[4167]: I0217 15:02:40.268627 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: E0217 15:02:40.268717 4167 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: I0217 15:02:40.268745 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: E0217 15:02:40.268802 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics podName:c6d23570-21d6-4b08-83fc-8b0827c25313 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:44.268781116 +0000 UTC m=+156.803445928 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-wqxmh" (UID: "c6d23570-21d6-4b08-83fc-8b0827c25313") : secret "marketplace-operator-metrics" not found
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: I0217 15:02:40.268837 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr"
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: I0217 15:02:40.268902 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: E0217 15:02:40.268933 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: E0217 15:02:40.268988 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: E0217 15:02:40.269018 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:44.26899018 +0000 UTC m=+156.803655042 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "performance-addon-operator-webhook-cert" not found
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: E0217 15:02:40.269030 4167 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: I0217 15:02:40.268939 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: E0217 15:02:40.269080 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: E0217 15:02:40.269044 4167 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: E0217 15:02:40.269046 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:44.269032442 +0000 UTC m=+156.803697284 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "node-tuning-operator-tls" not found
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: E0217 15:02:40.269141 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs podName:6b25a72d-965f-415c-abc9-09612859e9e0 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:44.269131445 +0000 UTC m=+156.803796257 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs") pod "multus-admission-controller-7c64d55f8-fzfsp" (UID: "6b25a72d-965f-415c-abc9-09612859e9e0") : secret "multus-admission-controller-secret" not found
Feb 17 15:02:40.269738 master-0 kubenswrapper[4167]: E0217 15:02:40.269154 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert podName:08e27254-e906-484a-b346-036f898be3ae nodeName:}" failed. No retries permitted until 2026-02-17 15:02:44.269147385 +0000 UTC m=+156.803812197 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert") pod "catalog-operator-588944557d-kjh2v" (UID: "08e27254-e906-484a-b346-036f898be3ae") : secret "catalog-operator-serving-cert" not found Feb 17 15:02:40.270617 master-0 kubenswrapper[4167]: I0217 15:02:40.269218 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:02:40.270617 master-0 kubenswrapper[4167]: E0217 15:02:40.269235 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls podName:fc76384d-b288-4d30-bc77-f696b62a5f30 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:44.269216057 +0000 UTC m=+156.803880899 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls") pod "dns-operator-86b8869b79-lmqrr" (UID: "fc76384d-b288-4d30-bc77-f696b62a5f30") : secret "metrics-tls" not found Feb 17 15:02:40.270617 master-0 kubenswrapper[4167]: E0217 15:02:40.269271 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 17 15:02:40.270617 master-0 kubenswrapper[4167]: I0217 15:02:40.269274 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:40.270617 master-0 kubenswrapper[4167]: E0217 15:02:40.269295 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert podName:257db04b-7203-4a1d-b3d4-bd4db258a3cc nodeName:}" failed. No retries permitted until 2026-02-17 15:02:44.269288248 +0000 UTC m=+156.803953060 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert") pod "olm-operator-6b56bd877c-tk8xm" (UID: "257db04b-7203-4a1d-b3d4-bd4db258a3cc") : secret "olm-operator-serving-cert" not found Feb 17 15:02:40.270617 master-0 kubenswrapper[4167]: I0217 15:02:40.269336 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:40.270617 master-0 kubenswrapper[4167]: E0217 15:02:40.269368 4167 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 17 15:02:40.270617 master-0 kubenswrapper[4167]: I0217 15:02:40.269395 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:02:40.270617 master-0 kubenswrapper[4167]: E0217 15:02:40.269427 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls podName:22a30079-d7fc-49cf-882e-1c5022cb5bf6 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:44.269403791 +0000 UTC m=+156.804068643 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls") pod "ingress-operator-c588d8cb4-nclxg" (UID: "22a30079-d7fc-49cf-882e-1c5022cb5bf6") : secret "metrics-tls" not found Feb 17 15:02:40.270617 master-0 kubenswrapper[4167]: E0217 15:02:40.269622 4167 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 17 15:02:40.270617 master-0 kubenswrapper[4167]: E0217 15:02:40.269678 4167 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 17 15:02:40.270617 master-0 kubenswrapper[4167]: E0217 15:02:40.269723 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls podName:187af679-a062-4f41-81f2-33545f76febf nodeName:}" failed. No retries permitted until 2026-02-17 15:02:44.269695428 +0000 UTC m=+156.804360260 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-dtwmd" (UID: "187af679-a062-4f41-81f2-33545f76febf") : secret "image-registry-operator-tls" not found Feb 17 15:02:40.270617 master-0 kubenswrapper[4167]: E0217 15:02:40.269751 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls podName:bf74b8c3-a5a6-4fb9-9d12-3a47c759f699 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:44.269737649 +0000 UTC m=+156.804402491 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ddgs9" (UID: "bf74b8c3-a5a6-4fb9-9d12-3a47c759f699") : secret "cluster-monitoring-operator-tls" not found Feb 17 15:02:40.374023 master-0 kubenswrapper[4167]: I0217 15:02:40.373892 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:02:40.374339 master-0 kubenswrapper[4167]: E0217 15:02:40.374162 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 17 15:02:40.374339 master-0 kubenswrapper[4167]: E0217 15:02:40.374282 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert podName:33e819b0-5a3f-4c2d-9dc7-8b0231804cdb nodeName:}" failed. No retries permitted until 2026-02-17 15:02:44.374251795 +0000 UTC m=+156.908916777 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-t7n5b" (UID: "33e819b0-5a3f-4c2d-9dc7-8b0231804cdb") : secret "package-server-manager-serving-cert" not found Feb 17 15:02:43.776493 master-0 kubenswrapper[4167]: I0217 15:02:43.775519 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:44.321785 master-0 kubenswrapper[4167]: I0217 15:02:44.321717 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:02:44.321785 master-0 kubenswrapper[4167]: I0217 15:02:44.321789 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:02:44.322079 master-0 kubenswrapper[4167]: E0217 15:02:44.321965 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 17 15:02:44.322079 master-0 kubenswrapper[4167]: I0217 15:02:44.322020 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: 
\"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:02:44.322079 master-0 kubenswrapper[4167]: E0217 15:02:44.322051 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert podName:08e27254-e906-484a-b346-036f898be3ae nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.32202674 +0000 UTC m=+164.856691552 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert") pod "catalog-operator-588944557d-kjh2v" (UID: "08e27254-e906-484a-b346-036f898be3ae") : secret "catalog-operator-serving-cert" not found Feb 17 15:02:44.322287 master-0 kubenswrapper[4167]: I0217 15:02:44.322083 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:44.322287 master-0 kubenswrapper[4167]: I0217 15:02:44.322123 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:44.322287 master-0 kubenswrapper[4167]: I0217 15:02:44.322155 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:02:44.322287 master-0 kubenswrapper[4167]: E0217 15:02:44.322194 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 17 15:02:44.322287 master-0 kubenswrapper[4167]: E0217 15:02:44.322270 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 17 15:02:44.322287 master-0 kubenswrapper[4167]: E0217 15:02:44.322284 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert podName:257db04b-7203-4a1d-b3d4-bd4db258a3cc nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.322265476 +0000 UTC m=+164.856930328 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert") pod "olm-operator-6b56bd877c-tk8xm" (UID: "257db04b-7203-4a1d-b3d4-bd4db258a3cc") : secret "olm-operator-serving-cert" not found Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: I0217 15:02:44.322207 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: E0217 15:02:44.322302 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls podName:071566ae-a9ae-4aa9-9dc3-38602363be72 
nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.322293587 +0000 UTC m=+164.856958399 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "node-tuning-operator-tls" not found Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: E0217 15:02:44.322349 4167 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: E0217 15:02:44.322352 4167 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: E0217 15:02:44.322387 4167 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: E0217 15:02:44.322397 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls podName:bf74b8c3-a5a6-4fb9-9d12-3a47c759f699 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.322389539 +0000 UTC m=+164.857054441 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ddgs9" (UID: "bf74b8c3-a5a6-4fb9-9d12-3a47c759f699") : secret "cluster-monitoring-operator-tls" not found Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: E0217 15:02:44.322302 4167 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: I0217 15:02:44.322400 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: E0217 15:02:44.322416 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls podName:22a30079-d7fc-49cf-882e-1c5022cb5bf6 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.322407559 +0000 UTC m=+164.857072371 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls") pod "ingress-operator-c588d8cb4-nclxg" (UID: "22a30079-d7fc-49cf-882e-1c5022cb5bf6") : secret "metrics-tls" not found Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: E0217 15:02:44.322430 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls podName:187af679-a062-4f41-81f2-33545f76febf nodeName:}" failed. 
No retries permitted until 2026-02-17 15:02:52.322425181 +0000 UTC m=+164.857089983 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-dtwmd" (UID: "187af679-a062-4f41-81f2-33545f76febf") : secret "image-registry-operator-tls" not found Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: I0217 15:02:44.322470 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: E0217 15:02:44.322504 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics podName:c6d23570-21d6-4b08-83fc-8b0827c25313 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.322497842 +0000 UTC m=+164.857162644 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-wqxmh" (UID: "c6d23570-21d6-4b08-83fc-8b0827c25313") : secret "marketplace-operator-metrics" not found Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: E0217 15:02:44.322506 4167 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: E0217 15:02:44.322566 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 17 15:02:44.322637 master-0 kubenswrapper[4167]: E0217 15:02:44.322568 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs podName:6b25a72d-965f-415c-abc9-09612859e9e0 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.322546314 +0000 UTC m=+164.857211186 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs") pod "multus-admission-controller-7c64d55f8-fzfsp" (UID: "6b25a72d-965f-415c-abc9-09612859e9e0") : secret "multus-admission-controller-secret" not found Feb 17 15:02:44.323488 master-0 kubenswrapper[4167]: I0217 15:02:44.322628 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:02:44.323488 master-0 kubenswrapper[4167]: E0217 15:02:44.322679 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.322663716 +0000 UTC m=+164.857328528 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "performance-addon-operator-webhook-cert" not found Feb 17 15:02:44.323488 master-0 kubenswrapper[4167]: E0217 15:02:44.322693 4167 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 17 15:02:44.323488 master-0 kubenswrapper[4167]: E0217 15:02:44.322717 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls podName:fc76384d-b288-4d30-bc77-f696b62a5f30 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.322710177 +0000 UTC m=+164.857374979 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls") pod "dns-operator-86b8869b79-lmqrr" (UID: "fc76384d-b288-4d30-bc77-f696b62a5f30") : secret "metrics-tls" not found Feb 17 15:02:44.424154 master-0 kubenswrapper[4167]: I0217 15:02:44.424100 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:02:44.424345 master-0 kubenswrapper[4167]: E0217 15:02:44.424217 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 17 15:02:44.424345 master-0 kubenswrapper[4167]: E0217 15:02:44.424291 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert podName:33e819b0-5a3f-4c2d-9dc7-8b0231804cdb nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.424273102 +0000 UTC m=+164.958938004 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-t7n5b" (UID: "33e819b0-5a3f-4c2d-9dc7-8b0231804cdb") : secret "package-server-manager-serving-cert" not found Feb 17 15:02:47.151725 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 17 15:02:47.178986 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 17 15:02:47.179355 master-0 systemd[1]: Stopped Kubernetes Kubelet. 
Feb 17 15:02:47.181284 master-0 systemd[1]: kubelet.service: Consumed 10.568s CPU time. Feb 17 15:02:47.202573 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 17 15:02:47.289433 master-0 kubenswrapper[8018]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 15:02:47.290730 master-0 kubenswrapper[8018]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 17 15:02:47.290798 master-0 kubenswrapper[8018]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 15:02:47.290993 master-0 kubenswrapper[8018]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 15:02:47.291053 master-0 kubenswrapper[8018]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 17 15:02:47.291106 master-0 kubenswrapper[8018]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 17 15:02:47.291364 master-0 kubenswrapper[8018]: I0217 15:02:47.291257 8018 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 17 15:02:47.293982 master-0 kubenswrapper[8018]: W0217 15:02:47.293966 8018 feature_gate.go:330] unrecognized feature gate: Example Feb 17 15:02:47.294082 master-0 kubenswrapper[8018]: W0217 15:02:47.294071 8018 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 15:02:47.294148 master-0 kubenswrapper[8018]: W0217 15:02:47.294139 8018 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 15:02:47.294210 master-0 kubenswrapper[8018]: W0217 15:02:47.294200 8018 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 15:02:47.294272 master-0 kubenswrapper[8018]: W0217 15:02:47.294263 8018 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 15:02:47.294334 master-0 kubenswrapper[8018]: W0217 15:02:47.294325 8018 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 15:02:47.294393 master-0 kubenswrapper[8018]: W0217 15:02:47.294383 8018 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 15:02:47.294476 master-0 kubenswrapper[8018]: W0217 15:02:47.294449 8018 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 15:02:47.294549 master-0 kubenswrapper[8018]: W0217 15:02:47.294538 8018 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 15:02:47.294610 master-0 kubenswrapper[8018]: W0217 15:02:47.294601 8018 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 15:02:47.294669 master-0 kubenswrapper[8018]: W0217 15:02:47.294660 8018 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 15:02:47.294730 master-0 kubenswrapper[8018]: W0217 15:02:47.294721 8018 feature_gate.go:330] unrecognized feature gate: 
ExternalOIDCWithUIDAndExtraClaimMappings
Feb 17 15:02:47.294797 master-0 kubenswrapper[8018]: W0217 15:02:47.294787 8018 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 15:02:47.294864 master-0 kubenswrapper[8018]: W0217 15:02:47.294855 8018 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 15:02:47.294924 master-0 kubenswrapper[8018]: W0217 15:02:47.294915 8018 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 15:02:47.294986 master-0 kubenswrapper[8018]: W0217 15:02:47.294976 8018 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 15:02:47.295051 master-0 kubenswrapper[8018]: W0217 15:02:47.295041 8018 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 15:02:47.295111 master-0 kubenswrapper[8018]: W0217 15:02:47.295101 8018 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 15:02:47.295175 master-0 kubenswrapper[8018]: W0217 15:02:47.295166 8018 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 15:02:47.295234 master-0 kubenswrapper[8018]: W0217 15:02:47.295224 8018 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 15:02:47.295296 master-0 kubenswrapper[8018]: W0217 15:02:47.295286 8018 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 15:02:47.295356 master-0 kubenswrapper[8018]: W0217 15:02:47.295348 8018 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 15:02:47.295417 master-0 kubenswrapper[8018]: W0217 15:02:47.295407 8018 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 15:02:47.295492 master-0 kubenswrapper[8018]: W0217 15:02:47.295481 8018 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 15:02:47.295566 master-0 kubenswrapper[8018]: W0217 15:02:47.295554 8018 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 15:02:47.295632 master-0 kubenswrapper[8018]: W0217 15:02:47.295621 8018 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 15:02:47.295689 master-0 kubenswrapper[8018]: W0217 15:02:47.295680 8018 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 15:02:47.295756 master-0 kubenswrapper[8018]: W0217 15:02:47.295746 8018 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 15:02:47.295819 master-0 kubenswrapper[8018]: W0217 15:02:47.295809 8018 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 15:02:47.295882 master-0 kubenswrapper[8018]: W0217 15:02:47.295871 8018 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 15:02:47.295944 master-0 kubenswrapper[8018]: W0217 15:02:47.295934 8018 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 15:02:47.296011 master-0 kubenswrapper[8018]: W0217 15:02:47.296000 8018 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 15:02:47.296076 master-0 kubenswrapper[8018]: W0217 15:02:47.296066 8018 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 15:02:47.296140 master-0 kubenswrapper[8018]: W0217 15:02:47.296130 8018 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 15:02:47.296203 master-0 kubenswrapper[8018]: W0217 15:02:47.296192 8018 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 15:02:47.296270 master-0 kubenswrapper[8018]: W0217 15:02:47.296261 8018 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 15:02:47.296330 master-0 kubenswrapper[8018]: W0217 15:02:47.296320 8018 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 15:02:47.296392 master-0 kubenswrapper[8018]: W0217 15:02:47.296382 8018 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 15:02:47.296474 master-0 kubenswrapper[8018]: W0217 15:02:47.296446 8018 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 15:02:47.296545 master-0 kubenswrapper[8018]: W0217 15:02:47.296534 8018 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 15:02:47.296604 master-0 kubenswrapper[8018]: W0217 15:02:47.296595 8018 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 15:02:47.296665 master-0 kubenswrapper[8018]: W0217 15:02:47.296656 8018 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 15:02:47.296724 master-0 kubenswrapper[8018]: W0217 15:02:47.296715 8018 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 15:02:47.296784 master-0 kubenswrapper[8018]: W0217 15:02:47.296774 8018 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 15:02:47.296844 master-0 kubenswrapper[8018]: W0217 15:02:47.296835 8018 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 15:02:47.296981 master-0 kubenswrapper[8018]: W0217 15:02:47.296971 8018 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 15:02:47.297043 master-0 kubenswrapper[8018]: W0217 15:02:47.297034 8018 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 15:02:47.297091 master-0 kubenswrapper[8018]: W0217 15:02:47.297084 8018 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 15:02:47.297138 master-0 kubenswrapper[8018]: W0217 15:02:47.297130 8018 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 15:02:47.297184 master-0 kubenswrapper[8018]: W0217 15:02:47.297176 8018 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 15:02:47.297228 master-0 kubenswrapper[8018]: W0217 15:02:47.297221 8018 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 15:02:47.297277 master-0 kubenswrapper[8018]: W0217 15:02:47.297269 8018 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 15:02:47.297323 master-0 kubenswrapper[8018]: W0217 15:02:47.297315 8018 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 15:02:47.297371 master-0 kubenswrapper[8018]: W0217 15:02:47.297363 8018 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 15:02:47.297437 master-0 kubenswrapper[8018]: W0217 15:02:47.297428 8018 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 15:02:47.297505 master-0 kubenswrapper[8018]: W0217 15:02:47.297496 8018 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 15:02:47.297552 master-0 kubenswrapper[8018]: W0217 15:02:47.297544 8018 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 15:02:47.297593 master-0 kubenswrapper[8018]: W0217 15:02:47.297586 8018 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 15:02:47.297640 master-0 kubenswrapper[8018]: W0217 15:02:47.297633 8018 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 15:02:47.297684 master-0 kubenswrapper[8018]: W0217 15:02:47.297677 8018 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 15:02:47.297729 master-0 kubenswrapper[8018]: W0217 15:02:47.297721 8018 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 15:02:47.297774 master-0 kubenswrapper[8018]: W0217 15:02:47.297766 8018 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 15:02:47.297829 master-0 kubenswrapper[8018]: W0217 15:02:47.297821 8018 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 15:02:47.297876 master-0 kubenswrapper[8018]: W0217 15:02:47.297869 8018 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 15:02:47.297920 master-0 kubenswrapper[8018]: W0217 15:02:47.297913 8018 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 15:02:47.297970 master-0 kubenswrapper[8018]: W0217 15:02:47.297962 8018 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 15:02:47.298018 master-0 kubenswrapper[8018]: W0217 15:02:47.298010 8018 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 15:02:47.298065 master-0 kubenswrapper[8018]: W0217 15:02:47.298057 8018 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 15:02:47.298110 master-0 kubenswrapper[8018]: W0217 15:02:47.298103 8018 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 15:02:47.298155 master-0 kubenswrapper[8018]: W0217 15:02:47.298148 8018 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 15:02:47.298202 master-0 kubenswrapper[8018]: W0217 15:02:47.298194 8018 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 15:02:47.298247 master-0 kubenswrapper[8018]: W0217 15:02:47.298239 8018 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 15:02:47.298439 master-0 kubenswrapper[8018]: I0217 15:02:47.298421 8018 flags.go:64] FLAG: --address="0.0.0.0"
Feb 17 15:02:47.298525 master-0 kubenswrapper[8018]: I0217 15:02:47.298513 8018 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 17 15:02:47.298578 master-0 kubenswrapper[8018]: I0217 15:02:47.298568 8018 flags.go:64] FLAG: --anonymous-auth="true"
Feb 17 15:02:47.298627 master-0 kubenswrapper[8018]: I0217 15:02:47.298617 8018 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 17 15:02:47.298674 master-0 kubenswrapper[8018]: I0217 15:02:47.298666 8018 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 17 15:02:47.298726 master-0 kubenswrapper[8018]: I0217 15:02:47.298716 8018 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 17 15:02:47.298776 master-0 kubenswrapper[8018]: I0217 15:02:47.298765 8018 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 17 15:02:47.298827 master-0 kubenswrapper[8018]: I0217 15:02:47.298818 8018 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 17 15:02:47.298874 master-0 kubenswrapper[8018]: I0217 15:02:47.298866 8018 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 17 15:02:47.298923 master-0 kubenswrapper[8018]: I0217 15:02:47.298914 8018 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 17 15:02:47.298969 master-0 kubenswrapper[8018]: I0217 15:02:47.298961 8018 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 17 15:02:47.299016 master-0 kubenswrapper[8018]: I0217 15:02:47.299008 8018 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 17 15:02:47.299064 master-0 kubenswrapper[8018]: I0217 15:02:47.299056 8018 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 17 15:02:47.299113 master-0 kubenswrapper[8018]: I0217 15:02:47.299105 8018 flags.go:64] FLAG: --cgroup-root=""
Feb 17 15:02:47.299162 master-0 kubenswrapper[8018]: I0217 15:02:47.299154 8018 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 17 15:02:47.299208 master-0 kubenswrapper[8018]: I0217 15:02:47.299201 8018 flags.go:64] FLAG: --client-ca-file=""
Feb 17 15:02:47.299255 master-0 kubenswrapper[8018]: I0217 15:02:47.299247 8018 flags.go:64] FLAG: --cloud-config=""
Feb 17 15:02:47.299306 master-0 kubenswrapper[8018]: I0217 15:02:47.299298 8018 flags.go:64] FLAG: --cloud-provider=""
Feb 17 15:02:47.299355 master-0 kubenswrapper[8018]: I0217 15:02:47.299344 8018 flags.go:64] FLAG: --cluster-dns="[]"
Feb 17 15:02:47.299402 master-0 kubenswrapper[8018]: I0217 15:02:47.299394 8018 flags.go:64] FLAG: --cluster-domain=""
Feb 17 15:02:47.299448 master-0 kubenswrapper[8018]: I0217 15:02:47.299440 8018 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 17 15:02:47.299517 master-0 kubenswrapper[8018]: I0217 15:02:47.299508 8018 flags.go:64] FLAG: --config-dir=""
Feb 17 15:02:47.299568 master-0 kubenswrapper[8018]: I0217 15:02:47.299560 8018 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 17 15:02:47.299613 master-0 kubenswrapper[8018]: I0217 15:02:47.299603 8018 flags.go:64] FLAG: --container-log-max-files="5"
Feb 17 15:02:47.299656 master-0 kubenswrapper[8018]: I0217 15:02:47.299648 8018 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 17 15:02:47.299702 master-0 kubenswrapper[8018]: I0217 15:02:47.299695 8018 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 17 15:02:47.299746 master-0 kubenswrapper[8018]: I0217 15:02:47.299738 8018 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 17 15:02:47.299793 master-0 kubenswrapper[8018]: I0217 15:02:47.299785 8018 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 17 15:02:47.299840 master-0 kubenswrapper[8018]: I0217 15:02:47.299832 8018 flags.go:64] FLAG: --contention-profiling="false"
Feb 17 15:02:47.299941 master-0 kubenswrapper[8018]: I0217 15:02:47.299932 8018 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 17 15:02:47.299990 master-0 kubenswrapper[8018]: I0217 15:02:47.299982 8018 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 17 15:02:47.300037 master-0 kubenswrapper[8018]: I0217 15:02:47.300029 8018 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 17 15:02:47.300093 master-0 kubenswrapper[8018]: I0217 15:02:47.300083 8018 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 17 15:02:47.300140 master-0 kubenswrapper[8018]: I0217 15:02:47.300131 8018 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 17 15:02:47.300185 master-0 kubenswrapper[8018]: I0217 15:02:47.300177 8018 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 17 15:02:47.300232 master-0 kubenswrapper[8018]: I0217 15:02:47.300224 8018 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 17 15:02:47.300278 master-0 kubenswrapper[8018]: I0217 15:02:47.300270 8018 flags.go:64] FLAG: --enable-load-reader="false"
Feb 17 15:02:47.300338 master-0 kubenswrapper[8018]: I0217 15:02:47.300326 8018 flags.go:64] FLAG: --enable-server="true"
Feb 17 15:02:47.300401 master-0 kubenswrapper[8018]: I0217 15:02:47.300389 8018 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 17 15:02:47.300467 master-0 kubenswrapper[8018]: I0217 15:02:47.300445 8018 flags.go:64] FLAG: --event-burst="100"
Feb 17 15:02:47.300527 master-0 kubenswrapper[8018]: I0217 15:02:47.300517 8018 flags.go:64] FLAG: --event-qps="50"
Feb 17 15:02:47.300575 master-0 kubenswrapper[8018]: I0217 15:02:47.300567 8018 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 17 15:02:47.300624 master-0 kubenswrapper[8018]: I0217 15:02:47.300616 8018 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 17 15:02:47.300672 master-0 kubenswrapper[8018]: I0217 15:02:47.300662 8018 flags.go:64] FLAG: --eviction-hard=""
Feb 17 15:02:47.300721 master-0 kubenswrapper[8018]: I0217 15:02:47.300713 8018 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 17 15:02:47.300768 master-0 kubenswrapper[8018]: I0217 15:02:47.300760 8018 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 17 15:02:47.300815 master-0 kubenswrapper[8018]: I0217 15:02:47.300807 8018 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 17 15:02:47.300863 master-0 kubenswrapper[8018]: I0217 15:02:47.300854 8018 flags.go:64] FLAG: --eviction-soft=""
Feb 17 15:02:47.300913 master-0 kubenswrapper[8018]: I0217 15:02:47.300904 8018 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 17 15:02:47.300960 master-0 kubenswrapper[8018]: I0217 15:02:47.300952 8018 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 17 15:02:47.301007 master-0 kubenswrapper[8018]: I0217 15:02:47.300999 8018 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 17 15:02:47.301060 master-0 kubenswrapper[8018]: I0217 15:02:47.301052 8018 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 17 15:02:47.301106 master-0 kubenswrapper[8018]: I0217 15:02:47.301098 8018 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 17 15:02:47.301152 master-0 kubenswrapper[8018]: I0217 15:02:47.301144 8018 flags.go:64] FLAG: --fail-swap-on="true"
Feb 17 15:02:47.301199 master-0 kubenswrapper[8018]: I0217 15:02:47.301190 8018 flags.go:64] FLAG: --feature-gates=""
Feb 17 15:02:47.301253 master-0 kubenswrapper[8018]: I0217 15:02:47.301244 8018 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 17 15:02:47.301296 master-0 kubenswrapper[8018]: I0217 15:02:47.301288 8018 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 17 15:02:47.301339 master-0 kubenswrapper[8018]: I0217 15:02:47.301331 8018 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 17 15:02:47.301389 master-0 kubenswrapper[8018]: I0217 15:02:47.301381 8018 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 17 15:02:47.301478 master-0 kubenswrapper[8018]: I0217 15:02:47.301468 8018 flags.go:64] FLAG: --healthz-port="10248"
Feb 17 15:02:47.301537 master-0 kubenswrapper[8018]: I0217 15:02:47.301528 8018 flags.go:64] FLAG: --help="false"
Feb 17 15:02:47.301588 master-0 kubenswrapper[8018]: I0217 15:02:47.301580 8018 flags.go:64] FLAG: --hostname-override=""
Feb 17 15:02:47.301642 master-0 kubenswrapper[8018]: I0217 15:02:47.301633 8018 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 17 15:02:47.301689 master-0 kubenswrapper[8018]: I0217 15:02:47.301681 8018 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 17 15:02:47.301735 master-0 kubenswrapper[8018]: I0217 15:02:47.301727 8018 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 17 15:02:47.301780 master-0 kubenswrapper[8018]: I0217 15:02:47.301772 8018 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 17 15:02:47.301832 master-0 kubenswrapper[8018]: I0217 15:02:47.301824 8018 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 17 15:02:47.301874 master-0 kubenswrapper[8018]: I0217 15:02:47.301867 8018 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 17 15:02:47.301920 master-0 kubenswrapper[8018]: I0217 15:02:47.301912 8018 flags.go:64] FLAG: --image-service-endpoint=""
Feb 17 15:02:47.301969 master-0 kubenswrapper[8018]: I0217 15:02:47.301961 8018 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 17 15:02:47.302015 master-0 kubenswrapper[8018]: I0217 15:02:47.302007 8018 flags.go:64] FLAG: --kube-api-burst="100"
Feb 17 15:02:47.302059 master-0 kubenswrapper[8018]: I0217 15:02:47.302051 8018 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 17 15:02:47.302112 master-0 kubenswrapper[8018]: I0217 15:02:47.302101 8018 flags.go:64] FLAG: --kube-api-qps="50"
Feb 17 15:02:47.302179 master-0 kubenswrapper[8018]: I0217 15:02:47.302168 8018 flags.go:64] FLAG: --kube-reserved=""
Feb 17 15:02:47.302233 master-0 kubenswrapper[8018]: I0217 15:02:47.302224 8018 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 17 15:02:47.302278 master-0 kubenswrapper[8018]: I0217 15:02:47.302270 8018 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 17 15:02:47.302330 master-0 kubenswrapper[8018]: I0217 15:02:47.302322 8018 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 17 15:02:47.302378 master-0 kubenswrapper[8018]: I0217 15:02:47.302369 8018 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 17 15:02:47.302421 master-0 kubenswrapper[8018]: I0217 15:02:47.302413 8018 flags.go:64] FLAG: --lock-file=""
Feb 17 15:02:47.302484 master-0 kubenswrapper[8018]: I0217 15:02:47.302475 8018 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 17 15:02:47.302532 master-0 kubenswrapper[8018]: I0217 15:02:47.302524 8018 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 17 15:02:47.302587 master-0 kubenswrapper[8018]: I0217 15:02:47.302575 8018 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 17 15:02:47.302646 master-0 kubenswrapper[8018]: I0217 15:02:47.302626 8018 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 17 15:02:47.302702 master-0 kubenswrapper[8018]: I0217 15:02:47.302695 8018 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 17 15:02:47.302754 master-0 kubenswrapper[8018]: I0217 15:02:47.302745 8018 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 17 15:02:47.302801 master-0 kubenswrapper[8018]: I0217 15:02:47.302793 8018 flags.go:64] FLAG: --logging-format="text"
Feb 17 15:02:47.302848 master-0 kubenswrapper[8018]: I0217 15:02:47.302840 8018 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 17 15:02:47.302894 master-0 kubenswrapper[8018]: I0217 15:02:47.302886 8018 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 17 15:02:47.302942 master-0 kubenswrapper[8018]: I0217 15:02:47.302933 8018 flags.go:64] FLAG: --manifest-url=""
Feb 17 15:02:47.302997 master-0 kubenswrapper[8018]: I0217 15:02:47.302987 8018 flags.go:64] FLAG: --manifest-url-header=""
Feb 17 15:02:47.303045 master-0 kubenswrapper[8018]: I0217 15:02:47.303037 8018 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 17 15:02:47.303097 master-0 kubenswrapper[8018]: I0217 15:02:47.303087 8018 flags.go:64] FLAG: --max-open-files="1000000"
Feb 17 15:02:47.303145 master-0 kubenswrapper[8018]: I0217 15:02:47.303136 8018 flags.go:64] FLAG: --max-pods="110"
Feb 17 15:02:47.303188 master-0 kubenswrapper[8018]: I0217 15:02:47.303180 8018 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 17 15:02:47.303237 master-0 kubenswrapper[8018]: I0217 15:02:47.303229 8018 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 17 15:02:47.303283 master-0 kubenswrapper[8018]: I0217 15:02:47.303275 8018 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 17 15:02:47.303329 master-0 kubenswrapper[8018]: I0217 15:02:47.303321 8018 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 17 15:02:47.303375 master-0 kubenswrapper[8018]: I0217 15:02:47.303367 8018 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 17 15:02:47.303424 master-0 kubenswrapper[8018]: I0217 15:02:47.303416 8018 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 17 15:02:47.303492 master-0 kubenswrapper[8018]: I0217 15:02:47.303475 8018 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 17 15:02:47.303552 master-0 kubenswrapper[8018]: I0217 15:02:47.303543 8018 flags.go:64] FLAG: --node-status-max-images="50"
Feb 17 15:02:47.303600 master-0 kubenswrapper[8018]: I0217 15:02:47.303592 8018 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 17 15:02:47.303644 master-0 kubenswrapper[8018]: I0217 15:02:47.303636 8018 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 17 15:02:47.303690 master-0 kubenswrapper[8018]: I0217 15:02:47.303682 8018 flags.go:64] FLAG: --pod-cidr=""
Feb 17 15:02:47.303738 master-0 kubenswrapper[8018]: I0217 15:02:47.303728 8018 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541"
Feb 17 15:02:47.303788 master-0 kubenswrapper[8018]: I0217 15:02:47.303780 8018 flags.go:64] FLAG: --pod-manifest-path=""
Feb 17 15:02:47.303845 master-0 kubenswrapper[8018]: I0217 15:02:47.303835 8018 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 17 15:02:47.303896 master-0 kubenswrapper[8018]: I0217 15:02:47.303888 8018 flags.go:64] FLAG: --pods-per-core="0"
Feb 17 15:02:47.303960 master-0 kubenswrapper[8018]: I0217 15:02:47.303949 8018 flags.go:64] FLAG: --port="10250"
Feb 17 15:02:47.304012 master-0 kubenswrapper[8018]: I0217 15:02:47.304004 8018 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 17 15:02:47.304060 master-0 kubenswrapper[8018]: I0217 15:02:47.304052 8018 flags.go:64] FLAG: --provider-id=""
Feb 17 15:02:47.304103 master-0 kubenswrapper[8018]: I0217 15:02:47.304095 8018 flags.go:64] FLAG: --qos-reserved=""
Feb 17 15:02:47.304153 master-0 kubenswrapper[8018]: I0217 15:02:47.304145 8018 flags.go:64] FLAG: --read-only-port="10255"
Feb 17 15:02:47.304200 master-0 kubenswrapper[8018]: I0217 15:02:47.304192 8018 flags.go:64] FLAG: --register-node="true"
Feb 17 15:02:47.304247 master-0 kubenswrapper[8018]: I0217 15:02:47.304239 8018 flags.go:64] FLAG: --register-schedulable="true"
Feb 17 15:02:47.304301 master-0 kubenswrapper[8018]: I0217 15:02:47.304289 8018 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 17 15:02:47.304351 master-0 kubenswrapper[8018]: I0217 15:02:47.304342 8018 flags.go:64] FLAG: --registry-burst="10"
Feb 17 15:02:47.304397 master-0 kubenswrapper[8018]: I0217 15:02:47.304389 8018 flags.go:64] FLAG: --registry-qps="5"
Feb 17 15:02:47.304443 master-0 kubenswrapper[8018]: I0217 15:02:47.304435 8018 flags.go:64] FLAG: --reserved-cpus=""
Feb 17 15:02:47.304515 master-0 kubenswrapper[8018]: I0217 15:02:47.304504 8018 flags.go:64] FLAG: --reserved-memory=""
Feb 17 15:02:47.304566 master-0 kubenswrapper[8018]: I0217 15:02:47.304556 8018 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 17 15:02:47.304614 master-0 kubenswrapper[8018]: I0217 15:02:47.304606 8018 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 17 15:02:47.304660 master-0 kubenswrapper[8018]: I0217 15:02:47.304652 8018 flags.go:64] FLAG: --rotate-certificates="false"
Feb 17 15:02:47.304706 master-0 kubenswrapper[8018]: I0217 15:02:47.304698 8018 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 17 15:02:47.304748 master-0 kubenswrapper[8018]: I0217 15:02:47.304740 8018 flags.go:64] FLAG: --runonce="false"
Feb 17 15:02:47.304794 master-0 kubenswrapper[8018]: I0217 15:02:47.304786 8018 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 17 15:02:47.304845 master-0 kubenswrapper[8018]: I0217 15:02:47.304836 8018 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 17 15:02:47.304894 master-0 kubenswrapper[8018]: I0217 15:02:47.304886 8018 flags.go:64] FLAG: --seccomp-default="false"
Feb 17 15:02:47.304940 master-0 kubenswrapper[8018]: I0217 15:02:47.304932 8018 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 17 15:02:47.304988 master-0 kubenswrapper[8018]: I0217 15:02:47.304979 8018 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 17 15:02:47.305037 master-0 kubenswrapper[8018]: I0217 15:02:47.305029 8018 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 17 15:02:47.305086 master-0 kubenswrapper[8018]: I0217 15:02:47.305078 8018 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 17 15:02:47.305130 master-0 kubenswrapper[8018]: I0217 15:02:47.305122 8018 flags.go:64] FLAG: --storage-driver-password="root"
Feb 17 15:02:47.305176 master-0 kubenswrapper[8018]: I0217 15:02:47.305168 8018 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 17 15:02:47.305225 master-0 kubenswrapper[8018]: I0217 15:02:47.305215 8018 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 17 15:02:47.305271 master-0 kubenswrapper[8018]: I0217 15:02:47.305263 8018 flags.go:64] FLAG: --storage-driver-user="root"
Feb 17 15:02:47.305315 master-0 kubenswrapper[8018]: I0217 15:02:47.305306 8018 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 17 15:02:47.305478 master-0 kubenswrapper[8018]: I0217 15:02:47.305467 8018 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 17 15:02:47.305537 master-0 kubenswrapper[8018]: I0217 15:02:47.305528 8018 flags.go:64] FLAG: --system-cgroups=""
Feb 17 15:02:47.305587 master-0 kubenswrapper[8018]: I0217 15:02:47.305575 8018 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 17 15:02:47.305635 master-0 kubenswrapper[8018]: I0217 15:02:47.305626 8018 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 17 15:02:47.305682 master-0 kubenswrapper[8018]: I0217 15:02:47.305674 8018 flags.go:64] FLAG: --tls-cert-file=""
Feb 17 15:02:47.305737 master-0 kubenswrapper[8018]: I0217 15:02:47.305726 8018 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 17 15:02:47.305784 master-0 kubenswrapper[8018]: I0217 15:02:47.305776 8018 flags.go:64] FLAG: --tls-min-version=""
Feb 17 15:02:47.305832 master-0 kubenswrapper[8018]: I0217 15:02:47.305824 8018 flags.go:64] FLAG: --tls-private-key-file=""
Feb 17 15:02:47.305881 master-0 kubenswrapper[8018]: I0217 15:02:47.305873 8018 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 17 15:02:47.305927 master-0 kubenswrapper[8018]: I0217 15:02:47.305919 8018 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 17 15:02:47.305974 master-0 kubenswrapper[8018]: I0217 15:02:47.305966 8018 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 17 15:02:47.306023 master-0 kubenswrapper[8018]: I0217 15:02:47.306013 8018 flags.go:64] FLAG: --v="2"
Feb 17 15:02:47.306073 master-0 kubenswrapper[8018]: I0217 15:02:47.306063 8018 flags.go:64] FLAG: --version="false"
Feb 17 15:02:47.306117 master-0 kubenswrapper[8018]: I0217 15:02:47.306108 8018 flags.go:64] FLAG: --vmodule=""
Feb 17 15:02:47.306164 master-0 kubenswrapper[8018]: I0217 15:02:47.306155 8018 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 17 15:02:47.306211 master-0 kubenswrapper[8018]: I0217 15:02:47.306203 8018 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 17 15:02:47.306378 master-0 kubenswrapper[8018]: W0217 15:02:47.306369 8018 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 15:02:47.306435 master-0 kubenswrapper[8018]: W0217 15:02:47.306427 8018 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 15:02:47.306502 master-0 kubenswrapper[8018]: W0217 15:02:47.306493 8018 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 15:02:47.306567 master-0 kubenswrapper[8018]: W0217 15:02:47.306558 8018 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 15:02:47.306617 master-0 kubenswrapper[8018]: W0217 15:02:47.306609 8018 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 15:02:47.306665 master-0 kubenswrapper[8018]: W0217 15:02:47.306657 8018 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 15:02:47.306711 master-0 kubenswrapper[8018]: W0217 15:02:47.306703 8018 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 15:02:47.306819 master-0 kubenswrapper[8018]: W0217 15:02:47.306810 8018 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 15:02:47.306866 master-0 kubenswrapper[8018]: W0217 15:02:47.306858 8018 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 15:02:47.306912 master-0 kubenswrapper[8018]: W0217 15:02:47.306904 8018 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 15:02:47.306956 master-0 kubenswrapper[8018]: W0217 15:02:47.306949 8018 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 15:02:47.306997 master-0 kubenswrapper[8018]: W0217 15:02:47.306990 8018 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 15:02:47.307042 master-0 kubenswrapper[8018]: W0217 15:02:47.307034 8018 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 15:02:47.307090 master-0 kubenswrapper[8018]: W0217 15:02:47.307083 8018 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 15:02:47.307135 master-0 kubenswrapper[8018]: W0217 15:02:47.307127 8018 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 15:02:47.307180 master-0 kubenswrapper[8018]: W0217 15:02:47.307172 8018 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 15:02:47.307224 master-0 kubenswrapper[8018]: W0217 15:02:47.307217 8018 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 15:02:47.307269 master-0 kubenswrapper[8018]: W0217 15:02:47.307262 8018 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 15:02:47.307314 master-0 kubenswrapper[8018]: W0217 15:02:47.307307 8018 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 15:02:47.307356 master-0 kubenswrapper[8018]: W0217 15:02:47.307349 8018 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 15:02:47.307404 master-0 kubenswrapper[8018]: W0217 15:02:47.307396 8018 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 15:02:47.307448 master-0 kubenswrapper[8018]: W0217 15:02:47.307441 8018 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 15:02:47.307515 master-0 kubenswrapper[8018]: W0217 15:02:47.307507 8018 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 15:02:47.307559 master-0 kubenswrapper[8018]: W0217 15:02:47.307552 8018 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 15:02:47.307607 master-0 kubenswrapper[8018]: W0217 15:02:47.307600 8018 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 15:02:47.307651 master-0 kubenswrapper[8018]: W0217 15:02:47.307644 8018 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 15:02:47.307707 master-0 kubenswrapper[8018]: W0217 15:02:47.307691 8018 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 15:02:47.307789 master-0 kubenswrapper[8018]: W0217 15:02:47.307779 8018 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 15:02:47.307834 master-0 kubenswrapper[8018]: W0217 15:02:47.307827 8018 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 15:02:47.307881 master-0 kubenswrapper[8018]: W0217 15:02:47.307873 8018 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 15:02:47.307927 master-0 kubenswrapper[8018]: W0217 15:02:47.307919 8018 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 17 15:02:47.307983 master-0 kubenswrapper[8018]: W0217 15:02:47.307974 8018 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 15:02:47.308032 master-0 kubenswrapper[8018]: W0217 15:02:47.308024 8018 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 15:02:47.308075 master-0 kubenswrapper[8018]: W0217 15:02:47.308067 8018 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 15:02:47.308139 master-0 kubenswrapper[8018]: W0217 15:02:47.308129 8018 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 15:02:47.308190 master-0 kubenswrapper[8018]: W0217 15:02:47.308181 8018 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 15:02:47.308236 master-0 kubenswrapper[8018]: W0217 15:02:47.308229 8018 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 15:02:47.308285 master-0 kubenswrapper[8018]: W0217 15:02:47.308277 8018 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 15:02:47.308342 master-0 kubenswrapper[8018]: W0217 15:02:47.308333 8018 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 15:02:47.308389 master-0 kubenswrapper[8018]: W0217 15:02:47.308381 8018 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 15:02:47.308445 master-0 kubenswrapper[8018]: W0217 15:02:47.308436 8018 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 15:02:47.308554 master-0 kubenswrapper[8018]: W0217 15:02:47.308542 8018 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 15:02:47.308613 master-0 kubenswrapper[8018]: W0217 15:02:47.308605 8018 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308656 8018 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308663 8018 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308667 8018 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308671 8018 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308675 8018 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308679 8018 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308683 8018 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308686 8018 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308690 8018 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308693 8018 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308697 8018 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308701 8018 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308704 8018 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308708 8018 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308712 8018 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308715 8018 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308719 8018 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308723 8018 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308726 8018 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 15:02:47.309395 master-0 kubenswrapper[8018]: W0217 15:02:47.308730 8018 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 15:02:47.309969 master-0 kubenswrapper[8018]: W0217 15:02:47.308735 8018 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 15:02:47.309969 master-0 kubenswrapper[8018]: W0217 15:02:47.308739 8018 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 15:02:47.309969 master-0 kubenswrapper[8018]: W0217 15:02:47.308743 8018 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 15:02:47.309969 master-0 kubenswrapper[8018]: W0217 15:02:47.308747 8018 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 15:02:47.309969 master-0 kubenswrapper[8018]: W0217 15:02:47.308751 8018 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 15:02:47.309969 master-0 kubenswrapper[8018]: W0217 15:02:47.308757 8018 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 15:02:47.309969 master-0 kubenswrapper[8018]: W0217 15:02:47.308762 8018 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 15:02:47.309969 master-0 kubenswrapper[8018]: W0217 15:02:47.308767 8018 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 15:02:47.309969 master-0 kubenswrapper[8018]: W0217 15:02:47.308772 8018 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 15:02:47.309969 master-0 kubenswrapper[8018]: I0217 15:02:47.308785 8018 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 15:02:47.316577 master-0 kubenswrapper[8018]: I0217 15:02:47.316420 8018 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 17 15:02:47.316577 master-0 kubenswrapper[8018]: I0217 15:02:47.316486 8018 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 17 15:02:47.316674 master-0 kubenswrapper[8018]: W0217 15:02:47.316645 8018 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 15:02:47.316674 master-0 kubenswrapper[8018]: W0217 15:02:47.316656 8018 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 15:02:47.316727 master-0 kubenswrapper[8018]: W0217 15:02:47.316680 8018 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 15:02:47.316727 master-0 kubenswrapper[8018]: W0217 15:02:47.316686 8018 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 15:02:47.316727 master-0 kubenswrapper[8018]: W0217 15:02:47.316691 8018 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 15:02:47.316727 master-0 kubenswrapper[8018]: W0217 15:02:47.316696 8018 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 15:02:47.316727 master-0 kubenswrapper[8018]: W0217 15:02:47.316701 8018 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 15:02:47.316727 master-0 kubenswrapper[8018]: W0217 15:02:47.316705 8018 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 15:02:47.316727 master-0 kubenswrapper[8018]: W0217 15:02:47.316714 8018 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 15:02:47.316727 master-0 kubenswrapper[8018]: W0217 15:02:47.316719 8018 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 15:02:47.316727 master-0 kubenswrapper[8018]: W0217 15:02:47.316724 8018 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 15:02:47.316727 master-0 kubenswrapper[8018]: W0217 15:02:47.316729 8018 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 15:02:47.316727 master-0 kubenswrapper[8018]: W0217 15:02:47.316734 8018 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316759 8018 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316770 8018 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316775 8018 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316780 8018 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316783 8018 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316788 8018 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316791 8018 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316800 8018 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316804 8018 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316809 8018 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316813 8018 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316837 8018 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316843 8018 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316849 8018 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316854 8018 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316858 8018 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316864 8018 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316869 8018 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 15:02:47.317071 master-0 kubenswrapper[8018]: W0217 15:02:47.316874 8018 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316881 8018 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316885 8018 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316889 8018 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316894 8018 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316916 8018 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316921 8018 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316925 8018 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316929 8018 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316933 8018 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316937 8018 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316941 8018 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316945 8018 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316949 8018 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316957 8018 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316962 8018 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316967 8018 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316971 8018 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316975 8018 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.316999 8018 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 15:02:47.317616 master-0 kubenswrapper[8018]: W0217 15:02:47.317004 8018 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317008 8018 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317013 8018 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317017 8018 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317023 8018 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317028 8018 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317036 8018 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317040 8018 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317045 8018 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317049 8018 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317072 8018 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317078 8018 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317083 8018 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317087 8018 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317093 8018 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317097 8018 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317102 8018 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317108 8018 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317113 8018 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 15:02:47.318119 master-0 kubenswrapper[8018]: W0217 15:02:47.317121 8018 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: W0217 15:02:47.317125 8018 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: I0217 15:02:47.317132 8018 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: W0217 15:02:47.317491 8018 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: W0217 15:02:47.317502 8018 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: W0217 15:02:47.317507 8018 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: W0217 15:02:47.317512 8018 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: W0217 15:02:47.317516 8018 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: W0217 15:02:47.317521 8018 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: W0217 15:02:47.317527 8018 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: W0217 15:02:47.317531 8018 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: W0217 15:02:47.317559 8018 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: W0217 15:02:47.317564 8018 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: W0217 15:02:47.317569 8018 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: W0217 15:02:47.317573 8018 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 15:02:47.318680 master-0 kubenswrapper[8018]: W0217 15:02:47.317577 8018 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317581 8018 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317586 8018 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317589 8018 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317594 8018 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317598 8018 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317602 8018 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317606 8018 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317614 8018 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317639 8018 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317644 8018 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317650 8018 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317655 8018 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317660 8018 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317664 8018 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317669 8018 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317674 8018 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317678 8018 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317683 8018 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317687 8018 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 15:02:47.319119 master-0 kubenswrapper[8018]: W0217 15:02:47.317691 8018 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317717 8018 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317723 8018 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317727 8018 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317733 8018 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317737 8018 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317742 8018 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317747 8018 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317751 8018 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317756 8018 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317760 8018 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317764 8018 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317769 8018 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317776 8018 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317800 8018 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317805 8018 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317809 8018 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317813 8018 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317817 8018 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317822 8018 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 15:02:47.319634 master-0 kubenswrapper[8018]: W0217 15:02:47.317828 8018 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317833 8018 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317839 8018 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317845 8018 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317849 8018 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317876 8018 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317882 8018 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317887 8018 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317892 8018 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317898 8018 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317903 8018 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317908 8018 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317913 8018 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317917 8018 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317922 8018 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317927 8018 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317933 8018 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317956 8018 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317966 8018 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 15:02:47.320163 master-0 kubenswrapper[8018]: W0217 15:02:47.317972 8018 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 15:02:47.320715 master-0 kubenswrapper[8018]: I0217 15:02:47.317979 8018 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 15:02:47.320715 master-0 kubenswrapper[8018]: I0217 15:02:47.318291 8018 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 17 15:02:47.320852 master-0 kubenswrapper[8018]: I0217 15:02:47.320794 8018 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 17 15:02:47.320984 master-0 kubenswrapper[8018]: I0217 15:02:47.320956 8018 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 17 15:02:47.321290 master-0 kubenswrapper[8018]: I0217 15:02:47.321263 8018 server.go:997] "Starting client certificate rotation"
Feb 17 15:02:47.321290 master-0 kubenswrapper[8018]: I0217 15:02:47.321282 8018 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 17 15:02:47.321598 master-0 kubenswrapper[8018]: I0217 15:02:47.321519 8018 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-18 14:51:47 +0000 UTC, rotation deadline is 2026-02-18 10:01:45.162285823 +0000 UTC
Feb 17 15:02:47.321638 master-0 kubenswrapper[8018]: I0217 15:02:47.321592 8018 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h58m57.840698019s for next certificate rotation
Feb 17 15:02:47.321987 master-0 kubenswrapper[8018]: I0217 15:02:47.321959 8018 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 15:02:47.323154 master-0 kubenswrapper[8018]: I0217 15:02:47.323127 8018 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 15:02:47.330715 master-0 kubenswrapper[8018]: I0217 15:02:47.330675 8018 log.go:25] "Validated CRI v1 runtime API"
Feb 17 15:02:47.333187 master-0 kubenswrapper[8018]: I0217 15:02:47.333147 8018 log.go:25] "Validated CRI v1 image API"
Feb 17 15:02:47.339851 master-0 kubenswrapper[8018]: I0217 15:02:47.339796 8018 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 17 15:02:47.343443 master-0 kubenswrapper[8018]: I0217 15:02:47.343395 8018 fs.go:135] Filesystem UUIDs: map[4e612f26-a2b1-4cb3-97c9-965b3561529c:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Feb 17 15:02:47.344022 master-0 kubenswrapper[8018]: I0217 15:02:47.343432 8018 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1a48fa419617a63ec8e2935cb2e257afe77ca02b6d759f71cc3cf2b3946d190c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1a48fa419617a63ec8e2935cb2e257afe77ca02b6d759f71cc3cf2b3946d190c/userdata/shm major:0 minor:117 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/260124ead6b34d5e3c90fbb769ec2cf0de3926cb1ef0da2632429f164c63d3f5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/260124ead6b34d5e3c90fbb769ec2cf0de3926cb1ef0da2632429f164c63d3f5/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2f085db99c3eb79269fb1e6fd494d3581c1cf5a588e1bb05f613f668bdfc997e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2f085db99c3eb79269fb1e6fd494d3581c1cf5a588e1bb05f613f668bdfc997e/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a7917f93b759157396676df5270d9f55ac3fb5ce7081908f3a79c2dd1fbffdd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a7917f93b759157396676df5270d9f55ac3fb5ce7081908f3a79c2dd1fbffdd/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4bb1dadfa9fa746e498f74fe7c1710620a7f822dde2a54f2002cb48a072a2427/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4bb1dadfa9fa746e498f74fe7c1710620a7f822dde2a54f2002cb48a072a2427/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4da475428a7f62dfe7d403b74dec1f34a8023a64243ff1dae7d9b66e78408144/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4da475428a7f62dfe7d403b74dec1f34a8023a64243ff1dae7d9b66e78408144/userdata/shm major:0 minor:113 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/509218f044076ea16f2a86823735e4d543562d1744406223dc68c1c720aa876c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/509218f044076ea16f2a86823735e4d543562d1744406223dc68c1c720aa876c/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5c5c50866e3cb4c94d1db9f4dadfbc576e6ef20acac9999e34844dc18779f223/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5c5c50866e3cb4c94d1db9f4dadfbc576e6ef20acac9999e34844dc18779f223/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/68f6c5cb6453d46aa30d342c53404fb01aa054a3d48f9b074af6e17af00f9a94/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/68f6c5cb6453d46aa30d342c53404fb01aa054a3d48f9b074af6e17af00f9a94/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6a5d363cdb7b8bdbfad3ed76750d978c8f44d1960c0e0c7352027f659a456edd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a5d363cdb7b8bdbfad3ed76750d978c8f44d1960c0e0c7352027f659a456edd/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6af13ec50eaaf18a25827e26c3ea1670c47ef4c0aea537a274e7191217763a74/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6af13ec50eaaf18a25827e26c3ea1670c47ef4c0aea537a274e7191217763a74/userdata/shm major:0 minor:301 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/77a5b96685468a1686135c8d7d6d053d9bc8223dda29da38cb0e4b9ffeb56e90/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/77a5b96685468a1686135c8d7d6d053d9bc8223dda29da38cb0e4b9ffeb56e90/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/798daf69301c189b976c0bf567e715514f72cff14e7ac9ab6e91e0049055219a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/798daf69301c189b976c0bf567e715514f72cff14e7ac9ab6e91e0049055219a/userdata/shm major:0 minor:307 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/79ea29fc08e254fc3e14a364622e4facf6b96ac258189e8fa32888318e699341/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/79ea29fc08e254fc3e14a364622e4facf6b96ac258189e8fa32888318e699341/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/90185a33c5824935ed29e0663472f7e339a5f2977a9bf3a460b9dc4b17b433c5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/90185a33c5824935ed29e0663472f7e339a5f2977a9bf3a460b9dc4b17b433c5/userdata/shm major:0 minor:293 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/933619de776a30ee8db83753fa79bb4994c3f6de2f880c843e582119c60f8f70/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/933619de776a30ee8db83753fa79bb4994c3f6de2f880c843e582119c60f8f70/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/af54fa9c62b28e67f68bc78aa9667df2cc9eef72a60d8febb3ead750686eb226/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/af54fa9c62b28e67f68bc78aa9667df2cc9eef72a60d8febb3ead750686eb226/userdata/shm major:0 minor:283 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b52356412bf9fd67c8890a1f481f22c4b980d0a142cbe7f6af8b97d5f5816dbd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b52356412bf9fd67c8890a1f481f22c4b980d0a142cbe7f6af8b97d5f5816dbd/userdata/shm major:0 minor:295 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b7039f4f79e0da973650e82a180456282f520c1801cf5f3f024cba6892c24045/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b7039f4f79e0da973650e82a180456282f520c1801cf5f3f024cba6892c24045/userdata/shm major:0 minor:290 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c0026d8b6e87a23d662a3c94357c0b35295466aca75ebd69cf4fb6b87a87fe76/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c0026d8b6e87a23d662a3c94357c0b35295466aca75ebd69cf4fb6b87a87fe76/userdata/shm major:0 minor:143 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c78b15cceeb9a13c85a4191822de34b4c848b664ef3622c58cc74eb63d4ebbb5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c78b15cceeb9a13c85a4191822de34b4c848b664ef3622c58cc74eb63d4ebbb5/userdata/shm major:0 minor:294 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c9a0cb53cadb3321345d154cf27268733399d5b983fe25d9e3ac83b00fa3506d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c9a0cb53cadb3321345d154cf27268733399d5b983fe25d9e3ac83b00fa3506d/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d7bc3eacfb0cf92ff3aa201ca8580ef11806f506d319e9d528672f5e695d8979/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d7bc3eacfb0cf92ff3aa201ca8580ef11806f506d319e9d528672f5e695d8979/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ec0152f98764cdbb982d9d6afbcb74cd9b99357115a9c691e939ad71b14ad183/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ec0152f98764cdbb982d9d6afbcb74cd9b99357115a9c691e939ad71b14ad183/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/071566ae-a9ae-4aa9-9dc3-38602363be72/volumes/kubernetes.io~projected/kube-api-access-hrh2k:{mountpoint:/var/lib/kubelet/pods/071566ae-a9ae-4aa9-9dc3-38602363be72/volumes/kubernetes.io~projected/kube-api-access-hrh2k major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~projected/kube-api-access-d8wxf:{mountpoint:/var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~projected/kube-api-access-d8wxf major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/volumes/kubernetes.io~projected/kube-api-access-gxjqf:{mountpoint:/var/lib/kubelet/pods/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/volumes/kubernetes.io~projected/kube-api-access-gxjqf major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~projected/kube-api-access-jpgqg:{mountpoint:/var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~projected/kube-api-access-jpgqg major:0 minor:279 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~projected/kube-api-access-bh874:{mountpoint:/var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~projected/kube-api-access-bh874 major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~projected/kube-api-access-jg8h7:{mountpoint:/var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~projected/kube-api-access-jg8h7 major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b167b7b-2280-4c82-ac78-71c57aebe503/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/2b167b7b-2280-4c82-ac78-71c57aebe503/volumes/kubernetes.io~projected/kube-api-access major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b167b7b-2280-4c82-ac78-71c57aebe503/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2b167b7b-2280-4c82-ac78-71c57aebe503/volumes/kubernetes.io~secret/serving-cert major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31e31afc-79d5-46f4-9835-0fd11da9465f/volumes/kubernetes.io~projected/kube-api-access-jh2m4:{mountpoint:/var/lib/kubelet/pods/31e31afc-79d5-46f4-9835-0fd11da9465f/volumes/kubernetes.io~projected/kube-api-access-jh2m4 major:0 minor:139 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/31e31afc-79d5-46f4-9835-0fd11da9465f/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/31e31afc-79d5-46f4-9835-0fd11da9465f/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/volumes/kubernetes.io~projected/kube-api-access-wn8df:{mountpoint:/var/lib/kubelet/pods/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/volumes/kubernetes.io~projected/kube-api-access-wn8df major:0 minor:280 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4be2df82-c77a-4d26-9498-fa3beea54b81/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4be2df82-c77a-4d26-9498-fa3beea54b81/volumes/kubernetes.io~projected/kube-api-access major:0 minor:110 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4fd2c79d-1e10-4f09-8a33-c66598abc99a/volumes/kubernetes.io~projected/kube-api-access-mgwfb:{mountpoint:/var/lib/kubelet/pods/4fd2c79d-1e10-4f09-8a33-c66598abc99a/volumes/kubernetes.io~projected/kube-api-access-mgwfb major:0 minor:111 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4fd2c79d-1e10-4f09-8a33-c66598abc99a/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4fd2c79d-1e10-4f09-8a33-c66598abc99a/volumes/kubernetes.io~secret/metrics-tls major:0 minor:67 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/553d4535-9985-47e2-83ee-8fcfb6035e7b/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/553d4535-9985-47e2-83ee-8fcfb6035e7b/volumes/kubernetes.io~projected/kube-api-access major:0 minor:273 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/553d4535-9985-47e2-83ee-8fcfb6035e7b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/553d4535-9985-47e2-83ee-8fcfb6035e7b/volumes/kubernetes.io~secret/serving-cert major:0 minor:243 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/61d90bf3-02df-48c8-b2ec-09a1653b0800/volumes/kubernetes.io~projected/kube-api-access-5wbvx:{mountpoint:/var/lib/kubelet/pods/61d90bf3-02df-48c8-b2ec-09a1653b0800/volumes/kubernetes.io~projected/kube-api-access-5wbvx major:0 minor:269 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/61d90bf3-02df-48c8-b2ec-09a1653b0800/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/61d90bf3-02df-48c8-b2ec-09a1653b0800/volumes/kubernetes.io~secret/serving-cert major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/632fa4c3-b717-432c-8c5f-8d809f69c48b/volumes/kubernetes.io~projected/kube-api-access-8bpwm:{mountpoint:/var/lib/kubelet/pods/632fa4c3-b717-432c-8c5f-8d809f69c48b/volumes/kubernetes.io~projected/kube-api-access-8bpwm major:0 minor:270 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65d9f008-7777-48fe-85fe-9d54a7bbcea9/volumes/kubernetes.io~projected/kube-api-access-9g7zh:{mountpoint:/var/lib/kubelet/pods/65d9f008-7777-48fe-85fe-9d54a7bbcea9/volumes/kubernetes.io~projected/kube-api-access-9g7zh major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65d9f008-7777-48fe-85fe-9d54a7bbcea9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/65d9f008-7777-48fe-85fe-9d54a7bbcea9/volumes/kubernetes.io~secret/serving-cert major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b25a72d-965f-415c-abc9-09612859e9e0/volumes/kubernetes.io~projected/kube-api-access-fv46m:{mountpoint:/var/lib/kubelet/pods/6b25a72d-965f-415c-abc9-09612859e9e0/volumes/kubernetes.io~projected/kube-api-access-fv46m major:0 minor:274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c734c89-515e-4ff0-82d1-831ddaf0b99e/volumes/kubernetes.io~projected/kube-api-access-rddwz:{mountpoint:/var/lib/kubelet/pods/6c734c89-515e-4ff0-82d1-831ddaf0b99e/volumes/kubernetes.io~projected/kube-api-access-rddwz major:0 minor:255 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6c734c89-515e-4ff0-82d1-831ddaf0b99e/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/6c734c89-515e-4ff0-82d1-831ddaf0b99e/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/volumes/kubernetes.io~projected/kube-api-access-cpq86:{mountpoint:/var/lib/kubelet/pods/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/volumes/kubernetes.io~projected/kube-api-access-cpq86 major:0 minor:166 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/volumes/kubernetes.io~secret/webhook-cert major:0 minor:167 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/801742a6-3735-4883-9676-e852dc4173d2/volumes/kubernetes.io~projected/kube-api-access-qxqt4:{mountpoint:/var/lib/kubelet/pods/801742a6-3735-4883-9676-e852dc4173d2/volumes/kubernetes.io~projected/kube-api-access-qxqt4 major:0 minor:278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volumes/kubernetes.io~projected/kube-api-access-mgs5v:{mountpoint:/var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volumes/kubernetes.io~projected/kube-api-access-mgs5v major:0 minor:141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:140 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/af61bda0-c7b4-489d-a671-eaa5299942fe/volumes/kubernetes.io~projected/kube-api-access-jt7w4:{mountpoint:/var/lib/kubelet/pods/af61bda0-c7b4-489d-a671-eaa5299942fe/volumes/kubernetes.io~projected/kube-api-access-jt7w4 major:0 minor:268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af61bda0-c7b4-489d-a671-eaa5299942fe/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/af61bda0-c7b4-489d-a671-eaa5299942fe/volumes/kubernetes.io~secret/serving-cert major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699/volumes/kubernetes.io~projected/kube-api-access-6t2vg:{mountpoint:/var/lib/kubelet/pods/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699/volumes/kubernetes.io~projected/kube-api-access-6t2vg major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6d23570-21d6-4b08-83fc-8b0827c25313/volumes/kubernetes.io~projected/kube-api-access-czt92:{mountpoint:/var/lib/kubelet/pods/c6d23570-21d6-4b08-83fc-8b0827c25313/volumes/kubernetes.io~projected/kube-api-access-czt92 major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/volumes/kubernetes.io~projected/kube-api-access-8xbnc:{mountpoint:/var/lib/kubelet/pods/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/volumes/kubernetes.io~projected/kube-api-access-8xbnc major:0 minor:267 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e259b5a1-837b-4cde-85f7-cd5781af08bd/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e259b5a1-837b-4cde-85f7-cd5781af08bd/volumes/kubernetes.io~projected/kube-api-access major:0 minor:260 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e259b5a1-837b-4cde-85f7-cd5781af08bd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e259b5a1-837b-4cde-85f7-cd5781af08bd/volumes/kubernetes.io~secret/serving-cert major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/volumes/kubernetes.io~projected/kube-api-access-7nzlr:{mountpoint:/var/lib/kubelet/pods/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/volumes/kubernetes.io~projected/kube-api-access-7nzlr major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/volumes/kubernetes.io~secret/serving-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~projected/kube-api-access-jcb68:{mountpoint:/var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~projected/kube-api-access-jcb68 major:0 minor:266 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~secret/etcd-client major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~secret/serving-cert major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb153362-0abb-4aad-8975-532f6e72d032/volumes/kubernetes.io~projected/kube-api-access-7bzqs:{mountpoint:/var/lib/kubelet/pods/fb153362-0abb-4aad-8975-532f6e72d032/volumes/kubernetes.io~projected/kube-api-access-7bzqs major:0 minor:128 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9/volumes/kubernetes.io~projected/kube-api-access-562gp:{mountpoint:/var/lib/kubelet/pods/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9/volumes/kubernetes.io~projected/kube-api-access-562gp major:0 minor:112 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fc76384d-b288-4d30-bc77-f696b62a5f30/volumes/kubernetes.io~projected/kube-api-access-lw6dc:{mountpoint:/var/lib/kubelet/pods/fc76384d-b288-4d30-bc77-f696b62a5f30/volumes/kubernetes.io~projected/kube-api-access-lw6dc major:0 minor:277 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fce9579e-7383-421e-95dd-8f8b786817f9/volumes/kubernetes.io~projected/kube-api-access-7brbd:{mountpoint:/var/lib/kubelet/pods/fce9579e-7383-421e-95dd-8f8b786817f9/volumes/kubernetes.io~projected/kube-api-access-7brbd major:0 minor:135 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/3c7388565590a40c584b78d104515d741f26a07e59b36b19d0bd82c63a72123c/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-115:{mountpoint:/var/lib/containers/storage/overlay/6f94038febf39a452956cf6d684b14da18abeb7d0994a33650b948c9d3e3c109/merged major:0 minor:115 fsType:overlay blockSize:0} overlay_0-119:{mountpoint:/var/lib/containers/storage/overlay/fa1a4874cb0fb982ee9d5601bc3d91a97190fbdfc2360a693dc9e83198e58558/merged major:0 minor:119 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/1071fe246ee9434c2bd36560ae29d8f4714476f41c223e9f14f074874d8ebe16/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-123:{mountpoint:/var/lib/containers/storage/overlay/4026c00efd3f44958e89789080dec338f35e4f7f91356eab032388f48d6f6a6b/merged major:0 minor:123 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/1381e4c61fedb03d92430161a0a7167c1044a9d22d292b0121b30073b8755fae/merged major:0 minor:131 fsType:overlay blockSize:0} 
overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/3a0d51efbd8891ba67c0f101b87f6dcf11f621d806e8d708b186a0dc9000c58f/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/cc39dc17da7af77e532a6b6efbc24f4649c66e1f5e6ecfce407e3d007075781f/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/89903e5041a65473506bcc325426954c09ded6f9fce7b50dfbda03b522b4b280/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/6a95603e4612482dff25e972dceec05bb60750bd3dd713ba25908cecfe42a54d/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/ee0126693d85c430e4e3d9c3f8c50278691c671dd12296989ba9a97855780654/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/09f4877e7b77ae7ecb4d9e2650faf903417fca3a84e29813d940db588c1447a6/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/786d68f72c2d7a65fbd5d9d12837a704f7cce488da2cd12a6f1c888a486c1a93/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/f9ffa199bcd215052aa3d1dee48a80404cb197b68cc7021e9bd9cd85ec514268/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/56f9a4a309663d310630790f6c94dab19b68402c177a43af52e8d07f1ad7ab1a/merged major:0 minor:161 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/38eabe8caeb186fc2c1e38f27c4acf956d27b017c6b7800186fe789e7e7f1f33/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/0d99eafa65819d80815e7955776c38a99fc8b9d97d9248521de413ff2e479e71/merged major:0 minor:172 fsType:overlay blockSize:0} 
overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/6292c8c4689dbfabae9661552d85c0cd56b88e756dfdc2d7070496641ff34799/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/2a3bc5306bf485bf2329471b431fbc353a6d2870dd91848c48b38815f64dc270/merged major:0 minor:176 fsType:overlay blockSize:0} overlay_0-177:{mountpoint:/var/lib/containers/storage/overlay/05da7dd6459e4dd420df15779dc67c1ae824647d56277082b8a4c7537c54106b/merged major:0 minor:177 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/3338cee47616986ce41204a5759d425fc0d1772a9a21b507c5d4939115c9eef4/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-188:{mountpoint:/var/lib/containers/storage/overlay/7eb87dc43ecc146c90770714b02cb3dbac743c6e56743e7e0b872de7c7f3f00e/merged major:0 minor:188 fsType:overlay blockSize:0} overlay_0-193:{mountpoint:/var/lib/containers/storage/overlay/7ee4f7b7b139adde1384b6833e7f4aa2c693ca563d5ac52669063d6f2ce59e83/merged major:0 minor:193 fsType:overlay blockSize:0} overlay_0-198:{mountpoint:/var/lib/containers/storage/overlay/91477d085c36d0c4bbef229068c7777aff211c08bfe61d2c2b7a6fd9669a7192/merged major:0 minor:198 fsType:overlay blockSize:0} overlay_0-203:{mountpoint:/var/lib/containers/storage/overlay/817d8ac3beeeadb316574614fc8b2afa018f6d234fce08745b4876dcbbb57798/merged major:0 minor:203 fsType:overlay blockSize:0} overlay_0-208:{mountpoint:/var/lib/containers/storage/overlay/726e467ea7ab9b8997bfc1188d0bdb837919bdefaa19e31ffd795215f9b1596a/merged major:0 minor:208 fsType:overlay blockSize:0} overlay_0-213:{mountpoint:/var/lib/containers/storage/overlay/14a4d5be05fe3fbfc44b89b6a3fa071f42ea5193c5b97e6b3ed9a506115bb761/merged major:0 minor:213 fsType:overlay blockSize:0} overlay_0-218:{mountpoint:/var/lib/containers/storage/overlay/e1a26234ac618944fe9431990a2a29016a7cf90a60ef133d804569e862c40a12/merged major:0 minor:218 fsType:overlay blockSize:0} 
overlay_0-219:{mountpoint:/var/lib/containers/storage/overlay/8f9850e2e8ee4c2be32d1a81a3786eab0a21f4a1ac2b5952d1c9ad99e16725e4/merged major:0 minor:219 fsType:overlay blockSize:0} overlay_0-223:{mountpoint:/var/lib/containers/storage/overlay/40fb1c0e8dbf216295f0191c7c19167e9f4c151f22dd18abcad2c767e86e7a04/merged major:0 minor:223 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/6c7d1642617967d4e20b9aba1ac8cd1c9421207b96f47ddfef6ccfb30bc05c9e/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/87fd02204b7dcf2f8075f4592b5924f06bd8b18135e61dd8decaac86c8aee221/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/61a5cfba5b8248c052bfb5fe4733377448c1fc4c5bff8a7fde98cf0c89fb35d4/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/1b39d70b5ae9d7811fb527c1bcd17b3875f1f34a7f02fb559d2762aa78410712/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/9568a1e322f499226b1e56b20f6c30350e71fde514aa31da549e6425a889e8d8/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/5315e2d5d089c10012db99e54658c7ff4cf33a95e7c7d7d5769c2c88a5a87f2c/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/ae0a736452489a97d17a33c3893807846da28f615c9321dd7d1ea3595d995a54/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/6807a0aba409cda3ddb6f14a5bf79732ebe887bd51814532a9485f3c03ed7d64/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/4f20f7826eba9f5e353e9b7faa267fcefb85dedb50760e02b8f2fa0e338a2bf8/merged major:0 minor:317 fsType:overlay blockSize:0} 
overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/61bf959b2f745b6aa99293d969a8712b2889d186dbfb007db189b9db2eb728be/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/6152fea8d59b9fd818f7d1b8fa3cce3fb487ea09a51d92fa2311f6f37d0ceee0/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/9551628138a78cc083068ea26d624129b51478afacad39fa6532e3cb4b261af3/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/e6fa9dc1e9335833c1cc31069297354a0575b4f77c8683ecd29b750d51cca6d9/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/0ac71b89283e8ab5e6e41eb0d3c0fc34abe7a53f3efcadabe7f37d7c792114bb/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-329:{mountpoint:/var/lib/containers/storage/overlay/9d3051a5d15bae819cf3c8e385f5668468aa9f2559e365681bbb84c9ebff96c5/merged major:0 minor:329 fsType:overlay blockSize:0} overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/26dd1619dd34d197f27fd24e8d322eddbed6fb1c5d5544a62121a5b3e9508561/merged major:0 minor:41 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/b90d4ea027a76960f33e2a187303fe8b1f9c1120567075fe49234a08ecb53cc9/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/8feba03ae57478d776e828f76b8ee93ec99af21d31c1eff08304db33d08d7ce5/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/c565d1919672befaa983739b9b2cb10a143c3cfe4027a4d38574f38481ee686e/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/d2c0719d0c82210c1d4059581fa1bc1efe1706dc15d0ad95b8ddad5f72d80bcc/merged major:0 minor:56 fsType:overlay blockSize:0} 
overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/d5f8aff3d82a59370363cec716fd6419380930e77980867e5c4219dbbaf0bb2b/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/459885ed86e0cc3440014902ca7bb0c4412df2ba44a9114ccf75670ffa5c6299/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/9381ae1b81bbb641692770926cf0b62e30504bad01983a2efea98efca05f5a98/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/74f563cc3dd726a00813646631c47f643350445f0a91f16912483123626f88f0/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/1e82c8d6d23f4853fff412226a2a4891723e01152d197d920623e170ea0a3086/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/bce73074b8e836a143e178d5ba38c0061ab2ca4aa5e2f93899aa53457207f8ca/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/a3bfc24abb6402d46f310323bb5ccc3c677f2dab4be48af7c067a229ddfcf23f/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/138e6e2e06b1084de86a9a43fa516ecafb323ec2ee2c182bf5a7af603f4b9355/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-83:{mountpoint:/var/lib/containers/storage/overlay/590dea99b4ea5192c48873aa017be706082f30dcd3daad48d25b278c7e79926a/merged major:0 minor:83 fsType:overlay blockSize:0} overlay_0-92:{mountpoint:/var/lib/containers/storage/overlay/91c59a5095d20c5f97cd7515460febe4d3e658ad7ab7d8a1ea1c7dd1007a52a4/merged major:0 minor:92 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/4ec9fe09f41755a523f8be5cc67ec8980cfd72a39ffe56f54dbc7cc50653b935/merged major:0 minor:97 fsType:overlay blockSize:0}]
Feb 17 15:02:47.368687 master-0 
kubenswrapper[8018]: I0217 15:02:47.367826 8018 manager.go:217] Machine: {Timestamp:2026-02-17 15:02:47.366845099 +0000 UTC m=+0.119188169 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ff628177d0ed41fb9732e0b0efb95e0a SystemUUID:ff628177-d0ed-41fb-9732-e0b0efb95e0a BootID:1c90f5ae-c817-4d5a-b4dd-067c150502f0 Filesystems:[{Device:overlay_0-92 DeviceMajor:0 DeviceMinor:92 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e259b5a1-837b-4cde-85f7-cd5781af08bd/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-188 DeviceMajor:0 DeviceMinor:188 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-208 DeviceMajor:0 DeviceMinor:208 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-83 DeviceMajor:0 DeviceMinor:83 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/79ea29fc08e254fc3e14a364622e4facf6b96ac258189e8fa32888318e699341/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fce9579e-7383-421e-95dd-8f8b786817f9/volumes/kubernetes.io~projected/kube-api-access-7brbd DeviceMajor:0 DeviceMinor:135 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/31e31afc-79d5-46f4-9835-0fd11da9465f/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:138 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/volumes/kubernetes.io~projected/kube-api-access-7nzlr DeviceMajor:0 DeviceMinor:263 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a5d363cdb7b8bdbfad3ed76750d978c8f44d1960c0e0c7352027f659a456edd/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-123 DeviceMajor:0 DeviceMinor:123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9/volumes/kubernetes.io~projected/kube-api-access-562gp DeviceMajor:0 DeviceMinor:112 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:167 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/volumes/kubernetes.io~projected/kube-api-access-gxjqf DeviceMajor:0 DeviceMinor:261 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/volumes/kubernetes.io~projected/kube-api-access-8xbnc DeviceMajor:0 DeviceMinor:267 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4bb1dadfa9fa746e498f74fe7c1710620a7f822dde2a54f2002cb48a072a2427/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6c734c89-515e-4ff0-82d1-831ddaf0b99e/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:248 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c78b15cceeb9a13c85a4191822de34b4c848b664ef3622c58cc74eb63d4ebbb5/userdata/shm DeviceMajor:0 DeviceMinor:294 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/61d90bf3-02df-48c8-b2ec-09a1653b0800/volumes/kubernetes.io~projected/kube-api-access-5wbvx DeviceMajor:0 DeviceMinor:269 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~projected/kube-api-access-jpgqg DeviceMajor:0 DeviceMinor:279 Capacity:49335554048 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/volumes/kubernetes.io~projected/kube-api-access-cpq86 DeviceMajor:0 DeviceMinor:166 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2b167b7b-2280-4c82-ac78-71c57aebe503/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/68f6c5cb6453d46aa30d342c53404fb01aa054a3d48f9b074af6e17af00f9a94/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/af54fa9c62b28e67f68bc78aa9667df2cc9eef72a60d8febb3ead750686eb226/userdata/shm DeviceMajor:0 DeviceMinor:283 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~projected/kube-api-access-d8wxf DeviceMajor:0 DeviceMinor:256 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ec0152f98764cdbb982d9d6afbcb74cd9b99357115a9c691e939ad71b14ad183/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 
DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-119 DeviceMajor:0 DeviceMinor:119 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5c5c50866e3cb4c94d1db9f4dadfbc576e6ef20acac9999e34844dc18779f223/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b7039f4f79e0da973650e82a180456282f520c1801cf5f3f024cba6892c24045/userdata/shm DeviceMajor:0 DeviceMinor:290 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-213 DeviceMajor:0 DeviceMinor:213 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6c734c89-515e-4ff0-82d1-831ddaf0b99e/volumes/kubernetes.io~projected/kube-api-access-rddwz DeviceMajor:0 DeviceMinor:255 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 DeviceMinor:41 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/65d9f008-7777-48fe-85fe-9d54a7bbcea9/volumes/kubernetes.io~projected/kube-api-access-9g7zh DeviceMajor:0 DeviceMinor:259 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fc76384d-b288-4d30-bc77-f696b62a5f30/volumes/kubernetes.io~projected/kube-api-access-lw6dc DeviceMajor:0 DeviceMinor:277 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/90185a33c5824935ed29e0663472f7e339a5f2977a9bf3a460b9dc4b17b433c5/userdata/shm DeviceMajor:0 DeviceMinor:293 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 DeviceMinor:161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:247 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/801742a6-3735-4883-9676-e852dc4173d2/volumes/kubernetes.io~projected/kube-api-access-qxqt4 DeviceMajor:0 DeviceMinor:278 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1a48fa419617a63ec8e2935cb2e257afe77ca02b6d759f71cc3cf2b3946d190c/userdata/shm DeviceMajor:0 DeviceMinor:117 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-198 DeviceMajor:0 DeviceMinor:198 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2f085db99c3eb79269fb1e6fd494d3581c1cf5a588e1bb05f613f668bdfc997e/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-193 DeviceMajor:0 DeviceMinor:193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~projected/kube-api-access-jcb68 DeviceMajor:0 DeviceMinor:266 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/volumes/kubernetes.io~projected/kube-api-access-wn8df DeviceMajor:0 DeviceMinor:280 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/c0026d8b6e87a23d662a3c94357c0b35295466aca75ebd69cf4fb6b87a87fe76/userdata/shm DeviceMajor:0 DeviceMinor:143 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/933619de776a30ee8db83753fa79bb4994c3f6de2f880c843e582119c60f8f70/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4be2df82-c77a-4d26-9498-fa3beea54b81/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:110 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-223 DeviceMajor:0 DeviceMinor:223 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/553d4535-9985-47e2-83ee-8fcfb6035e7b/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:273 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c9a0cb53cadb3321345d154cf27268733399d5b983fe25d9e3ac83b00fa3506d/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4da475428a7f62dfe7d403b74dec1f34a8023a64243ff1dae7d9b66e78408144/userdata/shm DeviceMajor:0 DeviceMinor:113 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/31e31afc-79d5-46f4-9835-0fd11da9465f/volumes/kubernetes.io~projected/kube-api-access-jh2m4 DeviceMajor:0 DeviceMinor:139 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volumes/kubernetes.io~projected/kube-api-access-mgs5v DeviceMajor:0 DeviceMinor:141 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/61d90bf3-02df-48c8-b2ec-09a1653b0800/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699/volumes/kubernetes.io~projected/kube-api-access-6t2vg DeviceMajor:0 DeviceMinor:254 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~projected/kube-api-access-jg8h7 DeviceMajor:0 DeviceMinor:265 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/071566ae-a9ae-4aa9-9dc3-38602363be72/volumes/kubernetes.io~projected/kube-api-access-hrh2k DeviceMajor:0 DeviceMinor:262 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6b25a72d-965f-415c-abc9-09612859e9e0/volumes/kubernetes.io~projected/kube-api-access-fv46m DeviceMajor:0 DeviceMinor:274 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/77a5b96685468a1686135c8d7d6d053d9bc8223dda29da38cb0e4b9ffeb56e90/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-177 DeviceMajor:0 DeviceMinor:177 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4a7917f93b759157396676df5270d9f55ac3fb5ce7081908f3a79c2dd1fbffdd/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/553d4535-9985-47e2-83ee-8fcfb6035e7b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fb153362-0abb-4aad-8975-532f6e72d032/volumes/kubernetes.io~projected/kube-api-access-7bzqs DeviceMajor:0 DeviceMinor:128 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c6d23570-21d6-4b08-83fc-8b0827c25313/volumes/kubernetes.io~projected/kube-api-access-czt92 DeviceMajor:0 DeviceMinor:252 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/632fa4c3-b717-432c-8c5f-8d809f69c48b/volumes/kubernetes.io~projected/kube-api-access-8bpwm DeviceMajor:0 DeviceMinor:270 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-115 DeviceMajor:0 DeviceMinor:115 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e259b5a1-837b-4cde-85f7-cd5781af08bd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:253 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/509218f044076ea16f2a86823735e4d543562d1744406223dc68c1c720aa876c/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~projected/kube-api-access-bh874 DeviceMajor:0 DeviceMinor:258 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:264 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4fd2c79d-1e10-4f09-8a33-c66598abc99a/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:67 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-203 DeviceMajor:0 DeviceMinor:203 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-219 DeviceMajor:0 DeviceMinor:219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/260124ead6b34d5e3c90fbb769ec2cf0de3926cb1ef0da2632429f164c63d3f5/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/798daf69301c189b976c0bf567e715514f72cff14e7ac9ab6e91e0049055219a/userdata/shm DeviceMajor:0 DeviceMinor:307 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-329 DeviceMajor:0 DeviceMinor:329 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:140 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/65d9f008-7777-48fe-85fe-9d54a7bbcea9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:241 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/af61bda0-c7b4-489d-a671-eaa5299942fe/volumes/kubernetes.io~projected/kube-api-access-jt7w4 DeviceMajor:0 DeviceMinor:268 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b52356412bf9fd67c8890a1f481f22c4b980d0a142cbe7f6af8b97d5f5816dbd/userdata/shm DeviceMajor:0 DeviceMinor:295 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6af13ec50eaaf18a25827e26c3ea1670c47ef4c0aea537a274e7191217763a74/userdata/shm DeviceMajor:0 DeviceMinor:301 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4fd2c79d-1e10-4f09-8a33-c66598abc99a/volumes/kubernetes.io~projected/kube-api-access-mgwfb DeviceMajor:0 DeviceMinor:111 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d7bc3eacfb0cf92ff3aa201ca8580ef11806f506d319e9d528672f5e695d8979/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-218 DeviceMajor:0 DeviceMinor:218 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/af61bda0-c7b4-489d-a671-eaa5299942fe/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2b167b7b-2280-4c82-ac78-71c57aebe503/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:257 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:260124ead6b34d5 MacAddress:a6:4c:67:c6:a1:82 Speed:10000 Mtu:8900} {Name:2f085db99c3eb79 MacAddress:de:17:45:1e:8e:ed Speed:10000 Mtu:8900} {Name:509218f044076ea MacAddress:7a:2e:9c:fc:ab:87 Speed:10000 Mtu:8900} {Name:68f6c5cb6453d46 MacAddress:ae:e4:2f:88:66:d9 Speed:10000 Mtu:8900} {Name:798daf69301c189 MacAddress:aa:29:f4:9b:e1:bb Speed:10000 Mtu:8900} {Name:90185a33c582493 MacAddress:5e:8f:e7:15:b5:e5 Speed:10000 Mtu:8900} {Name:af54fa9c62b28e6 MacAddress:ba:8c:40:67:fe:c3 Speed:10000 Mtu:8900} {Name:b52356412bf9fd6 MacAddress:de:ee:89:35:88:30 Speed:10000 Mtu:8900} {Name:b7039f4f79e0da9 MacAddress:a6:31:60:05:6a:83 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:7e:ca:dd:4c:57:66 Speed:0 Mtu:8900} {Name:c78b15cceeb9a13 MacAddress:62:3c:02:fa:20:d2 Speed:10000 Mtu:8900} {Name:c9a0cb53cadb332 MacAddress:e6:d7:3d:0e:ab:2a Speed:10000 Mtu:8900} {Name:ec0152f98764cdb MacAddress:82:e3:9e:d4:77:ae Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 
MacAddress:fa:16:3e:79:b8:2d Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:97:d0:9b Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:fa:aa:43:f9:eb:48 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: 
DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 17 15:02:47.368687 master-0 kubenswrapper[8018]: I0217 15:02:47.368657 8018 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 17 15:02:47.369228 master-0 kubenswrapper[8018]: I0217 15:02:47.368889 8018 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 17 15:02:47.371095 master-0 kubenswrapper[8018]: I0217 15:02:47.371052 8018 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 17 15:02:47.371275 master-0 kubenswrapper[8018]: I0217 15:02:47.371228 8018 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 17 15:02:47.371489 master-0 kubenswrapper[8018]: I0217 15:02:47.371264 8018 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 17 15:02:47.371564 master-0 kubenswrapper[8018]: I0217 15:02:47.371510 8018 topology_manager.go:138] "Creating topology manager with none policy"
Feb 17 15:02:47.371564 master-0 kubenswrapper[8018]: I0217 15:02:47.371519 8018 container_manager_linux.go:303] "Creating device plugin manager"
Feb 17 15:02:47.371564 master-0 kubenswrapper[8018]: I0217 15:02:47.371527 8018 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 17 15:02:47.371564 master-0 kubenswrapper[8018]: I0217 15:02:47.371548 8018 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 17 15:02:47.371738 master-0 kubenswrapper[8018]: I0217 15:02:47.371722 8018 state_mem.go:36] "Initialized new in-memory state store"
Feb 17 15:02:47.371828 master-0 kubenswrapper[8018]: I0217 15:02:47.371804 8018 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 17 15:02:47.371890 master-0 kubenswrapper[8018]: I0217 15:02:47.371874 8018 kubelet.go:418] "Attempting to sync node with API server"
Feb 17 15:02:47.371941 master-0 kubenswrapper[8018]: I0217 15:02:47.371894 8018 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 17 15:02:47.371941 master-0 kubenswrapper[8018]: I0217 15:02:47.371910 8018 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 17 15:02:47.371941 master-0 kubenswrapper[8018]: I0217 15:02:47.371922 8018 kubelet.go:324] "Adding apiserver pod source"
Feb 17 15:02:47.371941 master-0 kubenswrapper[8018]: I0217 15:02:47.371932 8018 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 17 15:02:47.377041 master-0 kubenswrapper[8018]: I0217 15:02:47.376897 8018 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1"
Feb 17 15:02:47.377117 master-0 kubenswrapper[8018]: I0217 15:02:47.377050 8018 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 17 15:02:47.381729 master-0 kubenswrapper[8018]: I0217 15:02:47.381598 8018 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 17 15:02:47.381870 master-0 kubenswrapper[8018]: I0217 15:02:47.381733 8018 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 17 15:02:47.381870 master-0 kubenswrapper[8018]: I0217 15:02:47.381763 8018 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 17 15:02:47.381870 master-0 kubenswrapper[8018]: I0217 15:02:47.381771 8018 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 17 15:02:47.381870 master-0 kubenswrapper[8018]: I0217 15:02:47.381784 8018 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 17 15:02:47.381870 master-0 kubenswrapper[8018]: I0217 15:02:47.381800 8018 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 17 15:02:47.381870 master-0 kubenswrapper[8018]: I0217 15:02:47.381813 8018 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 17 15:02:47.381870 master-0 kubenswrapper[8018]: I0217 15:02:47.381832 8018 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 17 15:02:47.381870 master-0 kubenswrapper[8018]: I0217 15:02:47.381846 8018 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 17 15:02:47.381870 master-0 kubenswrapper[8018]: I0217 15:02:47.381855 8018 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 17 15:02:47.381870 master-0 kubenswrapper[8018]: I0217 15:02:47.381867 8018 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 17 15:02:47.382266 master-0 kubenswrapper[8018]: I0217 15:02:47.381883 8018 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 17 15:02:47.382266 master-0 kubenswrapper[8018]: I0217 15:02:47.381905 8018 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 17 15:02:47.382266 master-0 kubenswrapper[8018]: I0217 15:02:47.381958 8018 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 17 15:02:47.382540 master-0 kubenswrapper[8018]: I0217 15:02:47.382367 8018 server.go:1280] "Started kubelet"
Feb 17 15:02:47.392204 master-0 kubenswrapper[8018]: I0217 15:02:47.391665 8018 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 17 15:02:47.392310 master-0 kubenswrapper[8018]: I0217 15:02:47.390253 8018 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 17 15:02:47.393382 master-0 systemd[1]: Started Kubernetes Kubelet.
Feb 17 15:02:47.394005 master-0 kubenswrapper[8018]: I0217 15:02:47.393917 8018 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 17 15:02:47.394711 master-0 kubenswrapper[8018]: I0217 15:02:47.394647 8018 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 17 15:02:47.395245 master-0 kubenswrapper[8018]: I0217 15:02:47.395198 8018 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 17 15:02:47.397065 master-0 kubenswrapper[8018]: I0217 15:02:47.397029 8018 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 17 15:02:47.397774 master-0 kubenswrapper[8018]: I0217 15:02:47.397700 8018 server.go:449] "Adding debug handlers to kubelet server"
Feb 17 15:02:47.397774 master-0 kubenswrapper[8018]: I0217 15:02:47.397736 8018 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 17 15:02:47.397774 master-0 kubenswrapper[8018]: I0217 15:02:47.397767 8018 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 17 15:02:47.398048 master-0 kubenswrapper[8018]: I0217 15:02:47.397991 8018 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-18 14:51:47 +0000 UTC, rotation deadline is 2026-02-18 08:40:44.988015471 +0000 UTC
Feb 17 15:02:47.398048 master-0 kubenswrapper[8018]: I0217 15:02:47.398038 8018 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h37m57.589979615s for next certificate rotation
Feb 17 15:02:47.398283 master-0 kubenswrapper[8018]: I0217 15:02:47.398249 8018 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 17 15:02:47.398283 master-0 kubenswrapper[8018]: I0217 15:02:47.398267 8018 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 17 15:02:47.398378 master-0 kubenswrapper[8018]: I0217 15:02:47.398321 8018 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 17 15:02:47.399812 master-0 kubenswrapper[8018]: I0217 15:02:47.399776 8018 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 17 15:02:47.400487 master-0 kubenswrapper[8018]: I0217 15:02:47.400103 8018 factory.go:153] Registering CRI-O factory
Feb 17 15:02:47.400487 master-0 kubenswrapper[8018]: I0217 15:02:47.400130 8018 factory.go:221] Registration of the crio container factory successfully
Feb 17 15:02:47.400487 master-0 kubenswrapper[8018]: I0217 15:02:47.400200 8018 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 17 15:02:47.400487 master-0 kubenswrapper[8018]: I0217 15:02:47.400211 8018 factory.go:55] Registering systemd factory
Feb 17 15:02:47.400487 master-0 kubenswrapper[8018]: I0217 15:02:47.400219 8018 factory.go:221] Registration of the systemd container factory successfully
Feb 17 15:02:47.400487 master-0 kubenswrapper[8018]: I0217 15:02:47.400242 8018 factory.go:103] Registering Raw factory
Feb 17 15:02:47.400487 master-0 kubenswrapper[8018]: I0217 15:02:47.400258 8018 manager.go:1196] Started watching for new ooms in manager
Feb 17 15:02:47.400904 master-0 kubenswrapper[8018]: I0217 15:02:47.400719 8018 manager.go:319] Starting recovery of all containers
Feb 17 15:02:47.401683 master-0 kubenswrapper[8018]: I0217 15:02:47.401632 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22a30079-d7fc-49cf-882e-1c5022cb5bf6" volumeName="kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-kube-api-access-bh874" seLinuxMountContext=""
Feb 17 15:02:47.401683 master-0 kubenswrapper[8018]: I0217 15:02:47.401675 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" volumeName="kubernetes.io/secret/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-serving-cert" seLinuxMountContext=""
Feb 17 15:02:47.401817 master-0 kubenswrapper[8018]: I0217 15:02:47.401689 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb153362-0abb-4aad-8975-532f6e72d032" volumeName="kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 17 15:02:47.401817 master-0 kubenswrapper[8018]: I0217 15:02:47.401702 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31e31afc-79d5-46f4-9835-0fd11da9465f" volumeName="kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-env-overrides" seLinuxMountContext=""
Feb 17 15:02:47.401817 master-0 kubenswrapper[8018]: I0217 15:02:47.401713 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4be2df82-c77a-4d26-9498-fa3beea54b81" volumeName="kubernetes.io/configmap/4be2df82-c77a-4d26-9498-fa3beea54b81-service-ca" seLinuxMountContext=""
Feb 17 15:02:47.401817 master-0 kubenswrapper[8018]: I0217 15:02:47.401725 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af61bda0-c7b4-489d-a671-eaa5299942fe" volumeName="kubernetes.io/projected/af61bda0-c7b4-489d-a671-eaa5299942fe-kube-api-access-jt7w4" seLinuxMountContext=""
Feb 17 15:02:47.401817 master-0 kubenswrapper[8018]: I0217 15:02:47.401738 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" volumeName="kubernetes.io/secret/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-serving-cert" seLinuxMountContext=""
Feb 17 15:02:47.401817 master-0 kubenswrapper[8018]: I0217 15:02:47.401755 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" volumeName="kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-service-ca-bundle" seLinuxMountContext=""
Feb 17 15:02:47.401817 master-0 kubenswrapper[8018]: I0217 15:02:47.401771 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" volumeName="kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-client" seLinuxMountContext=""
Feb 17 15:02:47.401817 master-0 kubenswrapper[8018]: I0217 15:02:47.401782 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6d23570-21d6-4b08-83fc-8b0827c25313" volumeName="kubernetes.io/projected/c6d23570-21d6-4b08-83fc-8b0827c25313-kube-api-access-czt92" seLinuxMountContext=""
Feb 17 15:02:47.401817 master-0 kubenswrapper[8018]: I0217 15:02:47.401794 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08e27254-e906-484a-b346-036f898be3ae" volumeName="kubernetes.io/projected/08e27254-e906-484a-b346-036f898be3ae-kube-api-access-d8wxf" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.401830 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c58265d-32fb-4cf0-97d8-6c9a5d37fad9" volumeName="kubernetes.io/configmap/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-config" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.401843 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="187af679-a062-4f41-81f2-33545f76febf" volumeName="kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-bound-sa-token" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.401857 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="257db04b-7203-4a1d-b3d4-bd4db258a3cc" volumeName="kubernetes.io/projected/257db04b-7203-4a1d-b3d4-bd4db258a3cc-kube-api-access-jg8h7" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.401867 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4fd2c79d-1e10-4f09-8a33-c66598abc99a" volumeName="kubernetes.io/secret/4fd2c79d-1e10-4f09-8a33-c66598abc99a-metrics-tls" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.401878 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a905fb6-17d4-413b-9107-859c804ce906" volumeName="kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-env-overrides" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.401906 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb153362-0abb-4aad-8975-532f6e72d032" volumeName="kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-whereabouts-configmap" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.401918 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4be2df82-c77a-4d26-9498-fa3beea54b81" volumeName="kubernetes.io/projected/4be2df82-c77a-4d26-9498-fa3beea54b81-kube-api-access" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.401929 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65d9f008-7777-48fe-85fe-9d54a7bbcea9" volumeName="kubernetes.io/projected/65d9f008-7777-48fe-85fe-9d54a7bbcea9-kube-api-access-9g7zh" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.401941 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c734c89-515e-4ff0-82d1-831ddaf0b99e" volumeName="kubernetes.io/empty-dir/6c734c89-515e-4ff0-82d1-831ddaf0b99e-operand-assets" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.401952 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a905fb6-17d4-413b-9107-859c804ce906" volumeName="kubernetes.io/projected/9a905fb6-17d4-413b-9107-859c804ce906-kube-api-access-mgs5v" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.401964 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e259b5a1-837b-4cde-85f7-cd5781af08bd" volumeName="kubernetes.io/projected/e259b5a1-837b-4cde-85f7-cd5781af08bd-kube-api-access" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.401975 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" volumeName="kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-config" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.401986 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" volumeName="kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-serving-cert" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.402003 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="187af679-a062-4f41-81f2-33545f76febf" volumeName="kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-kube-api-access-jpgqg" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.402014 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="257db04b-7203-4a1d-b3d4-bd4db258a3cc" volumeName="kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-profile-collector-cert" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.402049 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a905fb6-17d4-413b-9107-859c804ce906" volumeName="kubernetes.io/secret/9a905fb6-17d4-413b-9107-859c804ce906-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.402062 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="071566ae-a9ae-4aa9-9dc3-38602363be72" volumeName="kubernetes.io/projected/071566ae-a9ae-4aa9-9dc3-38602363be72-kube-api-access-hrh2k" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.402075 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31e31afc-79d5-46f4-9835-0fd11da9465f" volumeName="kubernetes.io/secret/31e31afc-79d5-46f4-9835-0fd11da9465f-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.402087 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="632fa4c3-b717-432c-8c5f-8d809f69c48b" volumeName="kubernetes.io/configmap/632fa4c3-b717-432c-8c5f-8d809f69c48b-iptables-alerter-script" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.402098 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="801742a6-3735-4883-9676-e852dc4173d2" volumeName="kubernetes.io/projected/801742a6-3735-4883-9676-e852dc4173d2-kube-api-access-qxqt4" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.402108 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="632fa4c3-b717-432c-8c5f-8d809f69c48b" volumeName="kubernetes.io/projected/632fa4c3-b717-432c-8c5f-8d809f69c48b-kube-api-access-8bpwm" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.402124 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c6b911d-8db2-48e8-bce9-d4bcde1f55a0" volumeName="kubernetes.io/projected/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-kube-api-access-cpq86" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.402136 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c6b911d-8db2-48e8-bce9-d4bcde1f55a0" volumeName="kubernetes.io/secret/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-webhook-cert" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.402149 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fce9579e-7383-421e-95dd-8f8b786817f9" volumeName="kubernetes.io/projected/fce9579e-7383-421e-95dd-8f8b786817f9-kube-api-access-7brbd" seLinuxMountContext=""
Feb 17 15:02:47.402137 master-0 kubenswrapper[8018]: I0217 15:02:47.402161 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08e27254-e906-484a-b346-036f898be3ae" volumeName="kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-profile-collector-cert" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402172 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31e31afc-79d5-46f4-9835-0fd11da9465f" volumeName="kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-ovnkube-config" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402184 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4fd2c79d-1e10-4f09-8a33-c66598abc99a" volumeName="kubernetes.io/projected/4fd2c79d-1e10-4f09-8a33-c66598abc99a-kube-api-access-mgwfb" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402194 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e259b5a1-837b-4cde-85f7-cd5781af08bd" volumeName="kubernetes.io/configmap/e259b5a1-837b-4cde-85f7-cd5781af08bd-config" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402204 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" volumeName="kubernetes.io/projected/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-kube-api-access-8xbnc" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402216 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" volumeName="kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-service-ca" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402226 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22a30079-d7fc-49cf-882e-1c5022cb5bf6" volumeName="kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-bound-sa-token" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402237 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b167b7b-2280-4c82-ac78-71c57aebe503" volumeName="kubernetes.io/configmap/2b167b7b-2280-4c82-ac78-71c57aebe503-config" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402250 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61d90bf3-02df-48c8-b2ec-09a1653b0800" volumeName="kubernetes.io/empty-dir/61d90bf3-02df-48c8-b2ec-09a1653b0800-available-featuregates" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402261 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65d9f008-7777-48fe-85fe-9d54a7bbcea9" volumeName="kubernetes.io/configmap/65d9f008-7777-48fe-85fe-9d54a7bbcea9-config" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402278 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65d9f008-7777-48fe-85fe-9d54a7bbcea9" volumeName="kubernetes.io/secret/65d9f008-7777-48fe-85fe-9d54a7bbcea9-serving-cert" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402290 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a905fb6-17d4-413b-9107-859c804ce906" volumeName="kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-config" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402300 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb94b2b6-21a9-41bb-b822-9406a3ebb1e9" volumeName="kubernetes.io/projected/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-kube-api-access-562gp" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402311 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc76384d-b288-4d30-bc77-f696b62a5f30" volumeName="kubernetes.io/projected/fc76384d-b288-4d30-bc77-f696b62a5f30-kube-api-access-lw6dc" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402327 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22a30079-d7fc-49cf-882e-1c5022cb5bf6" volumeName="kubernetes.io/configmap/22a30079-d7fc-49cf-882e-1c5022cb5bf6-trusted-ca" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402339 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6d23570-21d6-4b08-83fc-8b0827c25313" volumeName="kubernetes.io/configmap/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-trusted-ca" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402350 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c58265d-32fb-4cf0-97d8-6c9a5d37fad9" volumeName="kubernetes.io/secret/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-serving-cert" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402389 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c734c89-515e-4ff0-82d1-831ddaf0b99e" volumeName="kubernetes.io/secret/6c734c89-515e-4ff0-82d1-831ddaf0b99e-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402405 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c6b911d-8db2-48e8-bce9-d4bcde1f55a0" volumeName="kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-ovnkube-identity-cm" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402417 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c6b911d-8db2-48e8-bce9-d4bcde1f55a0" volumeName="kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-env-overrides" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402429 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c734c89-515e-4ff0-82d1-831ddaf0b99e" volumeName="kubernetes.io/projected/6c734c89-515e-4ff0-82d1-831ddaf0b99e-kube-api-access-rddwz" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402441 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" volumeName="kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402474 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c58265d-32fb-4cf0-97d8-6c9a5d37fad9" volumeName="kubernetes.io/projected/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-kube-api-access-gxjqf" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402488 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31e31afc-79d5-46f4-9835-0fd11da9465f" volumeName="kubernetes.io/projected/31e31afc-79d5-46f4-9835-0fd11da9465f-kube-api-access-jh2m4" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402503 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="553d4535-9985-47e2-83ee-8fcfb6035e7b" volumeName="kubernetes.io/projected/553d4535-9985-47e2-83ee-8fcfb6035e7b-kube-api-access" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402514 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61d90bf3-02df-48c8-b2ec-09a1653b0800" volumeName="kubernetes.io/projected/61d90bf3-02df-48c8-b2ec-09a1653b0800-kube-api-access-5wbvx" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402528 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61d90bf3-02df-48c8-b2ec-09a1653b0800" volumeName="kubernetes.io/secret/61d90bf3-02df-48c8-b2ec-09a1653b0800-serving-cert" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402540 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b25a72d-965f-415c-abc9-09612859e9e0" volumeName="kubernetes.io/projected/6b25a72d-965f-415c-abc9-09612859e9e0-kube-api-access-fv46m" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402551 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" volumeName="kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-ca" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402563 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b167b7b-2280-4c82-ac78-71c57aebe503" volumeName="kubernetes.io/projected/2b167b7b-2280-4c82-ac78-71c57aebe503-kube-api-access" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402574 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="553d4535-9985-47e2-83ee-8fcfb6035e7b" volumeName="kubernetes.io/configmap/553d4535-9985-47e2-83ee-8fcfb6035e7b-config" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402587 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="553d4535-9985-47e2-83ee-8fcfb6035e7b" volumeName="kubernetes.io/secret/553d4535-9985-47e2-83ee-8fcfb6035e7b-serving-cert" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402599 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af61bda0-c7b4-489d-a671-eaa5299942fe" volumeName="kubernetes.io/secret/af61bda0-c7b4-489d-a671-eaa5299942fe-serving-cert" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402611 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf74b8c3-a5a6-4fb9-9d12-3a47c759f699" volumeName="kubernetes.io/configmap/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-telemetry-config" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402624 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e259b5a1-837b-4cde-85f7-cd5781af08bd" volumeName="kubernetes.io/secret/e259b5a1-837b-4cde-85f7-cd5781af08bd-serving-cert" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402636 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" volumeName="kubernetes.io/configmap/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-config" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402646 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb94b2b6-21a9-41bb-b822-9406a3ebb1e9" volumeName="kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cni-binary-copy" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402657 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="071566ae-a9ae-4aa9-9dc3-38602363be72" volumeName="kubernetes.io/configmap/071566ae-a9ae-4aa9-9dc3-38602363be72-trusted-ca" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402669 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="187af679-a062-4f41-81f2-33545f76febf" volumeName="kubernetes.io/configmap/187af679-a062-4f41-81f2-33545f76febf-trusted-ca" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402680 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b167b7b-2280-4c82-ac78-71c57aebe503" volumeName="kubernetes.io/secret/2b167b7b-2280-4c82-ac78-71c57aebe503-serving-cert" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402692 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33e819b0-5a3f-4c2d-9dc7-8b0231804cdb" volumeName="kubernetes.io/projected/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-kube-api-access-wn8df" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402703 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af61bda0-c7b4-489d-a671-eaa5299942fe" volumeName="kubernetes.io/configmap/af61bda0-c7b4-489d-a671-eaa5299942fe-config" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402714 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf74b8c3-a5a6-4fb9-9d12-3a47c759f699" volumeName="kubernetes.io/projected/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-kube-api-access-6t2vg" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402725 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb94b2b6-21a9-41bb-b822-9406a3ebb1e9" volumeName="kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-daemon-config" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402737 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a905fb6-17d4-413b-9107-859c804ce906" volumeName="kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-script-lib" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402748 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" volumeName="kubernetes.io/projected/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-kube-api-access-7nzlr" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402758 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" volumeName="kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-config" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402770 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" volumeName="kubernetes.io/projected/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-kube-api-access-jcb68" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402781 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb153362-0abb-4aad-8975-532f6e72d032" volumeName="kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-binary-copy" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402791 8018 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb153362-0abb-4aad-8975-532f6e72d032" volumeName="kubernetes.io/projected/fb153362-0abb-4aad-8975-532f6e72d032-kube-api-access-7bzqs" seLinuxMountContext=""
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402803 8018 reconstruct.go:97] "Volume reconstruction finished"
Feb 17 15:02:47.402900 master-0 kubenswrapper[8018]: I0217 15:02:47.402810 8018 reconciler.go:26] "Reconciler: start to sync state"
Feb 17 15:02:47.436511 master-0 kubenswrapper[8018]: I0217 15:02:47.436419 8018 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 17 15:02:47.438683 master-0 kubenswrapper[8018]: I0217 15:02:47.438657 8018 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Feb 17 15:02:47.438752 master-0 kubenswrapper[8018]: I0217 15:02:47.438701 8018 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 17 15:02:47.438752 master-0 kubenswrapper[8018]: I0217 15:02:47.438723 8018 kubelet.go:2335] "Starting kubelet main sync loop" Feb 17 15:02:47.438831 master-0 kubenswrapper[8018]: E0217 15:02:47.438768 8018 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 17 15:02:47.440653 master-0 kubenswrapper[8018]: I0217 15:02:47.440604 8018 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 15:02:47.443809 master-0 kubenswrapper[8018]: I0217 15:02:47.443764 8018 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 17 15:02:47.457462 master-0 kubenswrapper[8018]: I0217 15:02:47.457415 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 17 15:02:47.457845 master-0 kubenswrapper[8018]: I0217 15:02:47.457813 8018 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="7ee371ff3fea654567b16adfcbd47a6ebbd168a2f1e33c4562b559cfe498844a" exitCode=1 Feb 17 15:02:47.457930 master-0 kubenswrapper[8018]: I0217 15:02:47.457915 8018 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="b7bba1848d8e5849cd7385799efab8edc5b4febf88a3e8ee8efae1fdf0ca6b20" exitCode=0 Feb 17 15:02:47.464286 master-0 kubenswrapper[8018]: I0217 15:02:47.464246 8018 generic.go:334] "Generic (PLEG): container finished" podID="9a905fb6-17d4-413b-9107-859c804ce906" containerID="4af044cd84dfd56b4c3319dc9513fdcbc730d3ab6bf935acd230ad188ae43052" exitCode=0 Feb 17 15:02:47.469248 master-0 kubenswrapper[8018]: I0217 
15:02:47.469228 8018 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="65c55fab648b7cfa009d957ded77827dafa84ec5b9a039dcd2a3ab2e04462ef9" exitCode=1 Feb 17 15:02:47.470440 master-0 kubenswrapper[8018]: I0217 15:02:47.470418 8018 generic.go:334] "Generic (PLEG): container finished" podID="0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" containerID="b19e391b0150ed3b7b034d7cfb9dec3399203df0724feccc18bf70218b47fb07" exitCode=0 Feb 17 15:02:47.471634 master-0 kubenswrapper[8018]: I0217 15:02:47.471605 8018 generic.go:334] "Generic (PLEG): container finished" podID="edb8b6b9-b814-4451-98bb-dec174fbf936" containerID="b13d746fb33147c34bbdc9c278d3605b58fe9a5ed8f1e19a36f86fe284caa4b2" exitCode=0 Feb 17 15:02:47.480317 master-0 kubenswrapper[8018]: I0217 15:02:47.480285 8018 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="127e7d6cc6eb018b1d6cae8de4b39737caa9da91bed2d8e85c54fc82de9aac1a" exitCode=0 Feb 17 15:02:47.486046 master-0 kubenswrapper[8018]: I0217 15:02:47.486017 8018 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="b12f57b0bcc09e05fc64e8bd7a3e3439eada3a066486077463244aa7f48a9765" exitCode=0 Feb 17 15:02:47.486046 master-0 kubenswrapper[8018]: I0217 15:02:47.486039 8018 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="58ed4f24a4a8563ec3660532e43504b78aecdeaa56673d4b14d15679424a7551" exitCode=0 Feb 17 15:02:47.486168 master-0 kubenswrapper[8018]: I0217 15:02:47.486050 8018 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="921ee0fd3551059043b76ac59a478c682da16c6ee7724deecc9c4ab0ac65da91" exitCode=0 Feb 17 15:02:47.486168 master-0 kubenswrapper[8018]: I0217 15:02:47.486061 8018 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" 
containerID="a8fe5731cc729bce660d47070861b2907343fcae8bee470838edf68c6e2b5e34" exitCode=0 Feb 17 15:02:47.486168 master-0 kubenswrapper[8018]: I0217 15:02:47.486070 8018 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="ecd77d78fcca655bc8210302308e24b74646b466ebece2fff52e85f8b57c4842" exitCode=0 Feb 17 15:02:47.486168 master-0 kubenswrapper[8018]: I0217 15:02:47.486079 8018 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="2f86c60a93c3453ced4f5b52ce187e665f2ac8baeed7a329b64029f9d992f515" exitCode=0 Feb 17 15:02:47.511581 master-0 kubenswrapper[8018]: I0217 15:02:47.511538 8018 manager.go:324] Recovery completed Feb 17 15:02:47.538971 master-0 kubenswrapper[8018]: E0217 15:02:47.538925 8018 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 17 15:02:47.543691 master-0 kubenswrapper[8018]: I0217 15:02:47.543663 8018 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 17 15:02:47.543691 master-0 kubenswrapper[8018]: I0217 15:02:47.543683 8018 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 17 15:02:47.543795 master-0 kubenswrapper[8018]: I0217 15:02:47.543700 8018 state_mem.go:36] "Initialized new in-memory state store" Feb 17 15:02:47.543888 master-0 kubenswrapper[8018]: I0217 15:02:47.543863 8018 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 17 15:02:47.543928 master-0 kubenswrapper[8018]: I0217 15:02:47.543882 8018 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 17 15:02:47.543928 master-0 kubenswrapper[8018]: I0217 15:02:47.543904 8018 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 17 15:02:47.543928 master-0 kubenswrapper[8018]: I0217 15:02:47.543912 8018 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 17 15:02:47.543928 master-0 kubenswrapper[8018]: I0217 
15:02:47.543919 8018 policy_none.go:49] "None policy: Start" Feb 17 15:02:47.545169 master-0 kubenswrapper[8018]: I0217 15:02:47.545151 8018 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 17 15:02:47.545271 master-0 kubenswrapper[8018]: I0217 15:02:47.545259 8018 state_mem.go:35] "Initializing new in-memory state store" Feb 17 15:02:47.545527 master-0 kubenswrapper[8018]: I0217 15:02:47.545513 8018 state_mem.go:75] "Updated machine memory state" Feb 17 15:02:47.545605 master-0 kubenswrapper[8018]: I0217 15:02:47.545594 8018 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 17 15:02:47.553922 master-0 kubenswrapper[8018]: I0217 15:02:47.553898 8018 manager.go:334] "Starting Device Plugin manager" Feb 17 15:02:47.554105 master-0 kubenswrapper[8018]: I0217 15:02:47.554082 8018 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 17 15:02:47.554105 master-0 kubenswrapper[8018]: I0217 15:02:47.554103 8018 server.go:79] "Starting device plugin registration server" Feb 17 15:02:47.554487 master-0 kubenswrapper[8018]: I0217 15:02:47.554477 8018 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 17 15:02:47.554550 master-0 kubenswrapper[8018]: I0217 15:02:47.554489 8018 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 17 15:02:47.554634 master-0 kubenswrapper[8018]: I0217 15:02:47.554619 8018 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 17 15:02:47.554776 master-0 kubenswrapper[8018]: I0217 15:02:47.554765 8018 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 17 15:02:47.554856 master-0 kubenswrapper[8018]: I0217 15:02:47.554836 8018 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 17 15:02:47.655249 master-0 kubenswrapper[8018]: I0217 15:02:47.655146 8018 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:02:47.656910 master-0 kubenswrapper[8018]: I0217 15:02:47.656870 8018 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:02:47.656969 master-0 kubenswrapper[8018]: I0217 15:02:47.656925 8018 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:02:47.656969 master-0 kubenswrapper[8018]: I0217 15:02:47.656937 8018 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:02:47.657051 master-0 kubenswrapper[8018]: I0217 15:02:47.656994 8018 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 17 15:02:47.677091 master-0 kubenswrapper[8018]: I0217 15:02:47.677035 8018 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 17 15:02:47.677238 master-0 kubenswrapper[8018]: I0217 15:02:47.677122 8018 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 17 15:02:47.741676 master-0 kubenswrapper[8018]: I0217 15:02:47.741595 8018 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0"] Feb 17 15:02:47.742203 master-0 kubenswrapper[8018]: I0217 15:02:47.742133 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"4944adde3c461c436bd108e43bf28aecebbade517fd0bca757eeee8a5f2db7dc"} Feb 17 15:02:47.742260 master-0 kubenswrapper[8018]: I0217 15:02:47.742211 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"4bb1dadfa9fa746e498f74fe7c1710620a7f822dde2a54f2002cb48a072a2427"} Feb 17 15:02:47.742260 master-0 kubenswrapper[8018]: I0217 15:02:47.742243 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"518b836a67d98b0cf5a2e8d843574e61038c30a6058fcd6123417dc9c4975d78"} Feb 17 15:02:47.742260 master-0 kubenswrapper[8018]: I0217 15:02:47.742255 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"7ee371ff3fea654567b16adfcbd47a6ebbd168a2f1e33c4562b559cfe498844a"} Feb 17 15:02:47.742373 master-0 kubenswrapper[8018]: I0217 15:02:47.742267 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"b7bba1848d8e5849cd7385799efab8edc5b4febf88a3e8ee8efae1fdf0ca6b20"} Feb 17 15:02:47.742373 master-0 kubenswrapper[8018]: I0217 15:02:47.742299 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"77a5b96685468a1686135c8d7d6d053d9bc8223dda29da38cb0e4b9ffeb56e90"} Feb 17 15:02:47.742373 master-0 kubenswrapper[8018]: I0217 15:02:47.742335 8018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b69c0b3c7fbfdbafc398bd01403bacf73eac4d046a3117ba213930fc148f175" Feb 17 15:02:47.742631 master-0 kubenswrapper[8018]: I0217 15:02:47.742377 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb"} Feb 17 15:02:47.742631 master-0 kubenswrapper[8018]: I0217 15:02:47.742396 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"8e4f485693ac9a91f7bc7a84cdde902f639454acfd53f8608408575f632d2ecf"} Feb 17 15:02:47.742631 master-0 kubenswrapper[8018]: I0217 15:02:47.742408 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"65c55fab648b7cfa009d957ded77827dafa84ec5b9a039dcd2a3ab2e04462ef9"} Feb 17 15:02:47.742631 master-0 kubenswrapper[8018]: I0217 15:02:47.742422 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"933619de776a30ee8db83753fa79bb4994c3f6de2f880c843e582119c60f8f70"} Feb 17 15:02:47.742631 master-0 kubenswrapper[8018]: I0217 15:02:47.742476 8018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2dd0a0688727e052252cd2506303293a622de765553e0bfacc8554a72cd3817" Feb 17 15:02:47.742631 master-0 kubenswrapper[8018]: I0217 15:02:47.742490 8018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44297b578b73799787105eb3efe9db346703e6cb92e011be7af4c2d78212c2e0" Feb 17 15:02:47.742631 master-0 kubenswrapper[8018]: I0217 15:02:47.742498 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" 
event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"8105fa4b966940334c286ed94a1f0129c72a04a09b1bf683900cc1744fb06fec"} Feb 17 15:02:47.742631 master-0 kubenswrapper[8018]: I0217 15:02:47.742507 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"4d0630e2330edb92a7d17fc9b9a41a0b13733df95ae437b7fe0b5957cb60ed7a"} Feb 17 15:02:47.742631 master-0 kubenswrapper[8018]: I0217 15:02:47.742516 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"6a5d363cdb7b8bdbfad3ed76750d978c8f44d1960c0e0c7352027f659a456edd"} Feb 17 15:02:47.742631 master-0 kubenswrapper[8018]: I0217 15:02:47.742585 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"e54ca6ceeabac12699eb8a3fc41f19416c7ec8d207ac963a337daa3c35a8bc0b"} Feb 17 15:02:47.742631 master-0 kubenswrapper[8018]: I0217 15:02:47.742595 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"2609f5414599cc846c5bc59d12f88634dafa03f2f1a0b4805e5779131227e7b6"} Feb 17 15:02:47.742631 master-0 kubenswrapper[8018]: I0217 15:02:47.742602 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerDied","Data":"127e7d6cc6eb018b1d6cae8de4b39737caa9da91bed2d8e85c54fc82de9aac1a"} Feb 17 15:02:47.743100 master-0 kubenswrapper[8018]: I0217 15:02:47.742611 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"79ea29fc08e254fc3e14a364622e4facf6b96ac258189e8fa32888318e699341"} Feb 17 15:02:47.763446 master-0 kubenswrapper[8018]: E0217 15:02:47.763392 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 17 15:02:47.763776 master-0 kubenswrapper[8018]: E0217 15:02:47.763744 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:02:47.764102 master-0 kubenswrapper[8018]: E0217 15:02:47.764076 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 17 15:02:47.764174 master-0 kubenswrapper[8018]: E0217 15:02:47.764134 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 17 15:02:47.764488 master-0 kubenswrapper[8018]: W0217 15:02:47.764471 8018 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), 
seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 17 15:02:47.764593 master-0 kubenswrapper[8018]: E0217 15:02:47.764498 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Feb 17 15:02:47.846846 master-0 kubenswrapper[8018]: I0217 15:02:47.846726 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 17 15:02:47.846846 master-0 kubenswrapper[8018]: I0217 15:02:47.846816 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 17 15:02:47.846846 master-0 kubenswrapper[8018]: I0217 15:02:47.846860 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 17 15:02:47.847306 master-0 kubenswrapper[8018]: I0217 15:02:47.846917 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: 
\"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 17 15:02:47.847306 master-0 kubenswrapper[8018]: I0217 15:02:47.846949 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 17 15:02:47.847306 master-0 kubenswrapper[8018]: I0217 15:02:47.846980 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 17 15:02:47.847306 master-0 kubenswrapper[8018]: I0217 15:02:47.847011 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 17 15:02:47.847306 master-0 kubenswrapper[8018]: I0217 15:02:47.847043 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 17 15:02:47.847306 master-0 kubenswrapper[8018]: I0217 15:02:47.847072 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") 
pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:02:47.847306 master-0 kubenswrapper[8018]: I0217 15:02:47.847102 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 17 15:02:47.847306 master-0 kubenswrapper[8018]: I0217 15:02:47.847132 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 17 15:02:47.847306 master-0 kubenswrapper[8018]: I0217 15:02:47.847178 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 17 15:02:47.847306 master-0 kubenswrapper[8018]: I0217 15:02:47.847218 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:02:47.847306 master-0 kubenswrapper[8018]: I0217 15:02:47.847248 8018 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:02:47.847306 master-0 kubenswrapper[8018]: I0217 15:02:47.847273 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:02:47.847306 master-0 kubenswrapper[8018]: I0217 15:02:47.847308 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:02:47.848233 master-0 kubenswrapper[8018]: I0217 15:02:47.847445 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 17 15:02:47.948597 master-0 kubenswrapper[8018]: I0217 15:02:47.948412 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " 
pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 17 15:02:47.948597 master-0 kubenswrapper[8018]: I0217 15:02:47.948492 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 17 15:02:47.948597 master-0 kubenswrapper[8018]: I0217 15:02:47.948523 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:02:47.948959 master-0 kubenswrapper[8018]: I0217 15:02:47.948644 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 17 15:02:47.948959 master-0 kubenswrapper[8018]: I0217 15:02:47.948792 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:02:47.948959 master-0 kubenswrapper[8018]: I0217 15:02:47.948829 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:02:47.948959 master-0 kubenswrapper[8018]: I0217 15:02:47.948916 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:02:47.948959 master-0 kubenswrapper[8018]: I0217 15:02:47.948956 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 17 15:02:47.949299 master-0 kubenswrapper[8018]: I0217 15:02:47.949004 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:02:47.949299 master-0 kubenswrapper[8018]: I0217 15:02:47.949036 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:02:47.949299 master-0 kubenswrapper[8018]: I0217 15:02:47.949065 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:02:47.949299 master-0 kubenswrapper[8018]: I0217 15:02:47.949119 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:02:47.949299 master-0 kubenswrapper[8018]: I0217 15:02:47.949119 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:02:47.949299 master-0 kubenswrapper[8018]: I0217 15:02:47.949122 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949304 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949370 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949412 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949312 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949482 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949520 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949536 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949592 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949618 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949634 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949672 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949708 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949740 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949769 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 17 15:02:47.949777 master-0 kubenswrapper[8018]: I0217 15:02:47.949788 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:02:47.950747 master-0 kubenswrapper[8018]: I0217 15:02:47.949793 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:02:47.950747 master-0 kubenswrapper[8018]: I0217 15:02:47.949831 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 17 15:02:47.950747 master-0 kubenswrapper[8018]: I0217 15:02:47.949874 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:02:47.950747 master-0 kubenswrapper[8018]: I0217 15:02:47.949907 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:02:47.950747 master-0 kubenswrapper[8018]: I0217 15:02:47.949938 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:02:48.372661 master-0 kubenswrapper[8018]: I0217 15:02:48.372610 8018 apiserver.go:52] "Watching apiserver"
Feb 17 15:02:48.386578 master-0 kubenswrapper[8018]: I0217 15:02:48.386516 8018 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 17 15:02:48.387885 master-0 kubenswrapper[8018]: I0217 15:02:48.387802 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vdgrn","openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9","openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh","openshift-cluster-version/cluster-version-operator-76959b6567-v49tq","openshift-network-node-identity/network-node-identity-xwftw","openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245","kube-system/bootstrap-kube-scheduler-master-0","openshift-multus/network-metrics-daemon-bnllz","openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p","openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg","openshift-network-diagnostics/network-check-target-f25s7","openshift-dns-operator/dns-operator-86b8869b79-lmqrr","openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd","openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-multus/multus-additional-cni-plugins-9nv95","kube-system/bootstrap-kube-controller-manager-master-0","openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs","openshift-multus/multus-admission-controller-7c64d55f8-fzfsp","openshift-network-operator/network-operator-6fcf4c966-l24cg","openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89","openshift-multus/multus-9r5rl","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj","openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph","openshift-network-operator/iptables-alerter-v2h9q","openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm","openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8","openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b","openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n","openshift-etcd/etcd-master-0-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9","openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v","assisted-installer/assisted-installer-controller-5fwlz","openshift-authentication-operator/authentication-operator-755d954778-jrdqm"]
Feb 17 15:02:48.388127 master-0 kubenswrapper[8018]: I0217 15:02:48.388096 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-5fwlz"
Feb 17 15:02:48.388688 master-0 kubenswrapper[8018]: I0217 15:02:48.388480 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:48.388688 master-0 kubenswrapper[8018]: I0217 15:02:48.388615 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr"
Feb 17 15:02:48.391315 master-0 kubenswrapper[8018]: I0217 15:02:48.390907 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:02:48.391315 master-0 kubenswrapper[8018]: I0217 15:02:48.390927 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:02:48.391315 master-0 kubenswrapper[8018]: I0217 15:02:48.390977 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 17 15:02:48.391315 master-0 kubenswrapper[8018]: I0217 15:02:48.390994 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:02:48.391315 master-0 kubenswrapper[8018]: I0217 15:02:48.391011 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 17 15:02:48.391315 master-0 kubenswrapper[8018]: I0217 15:02:48.391023 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:02:48.392630 master-0 kubenswrapper[8018]: I0217 15:02:48.392188 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:02:48.393311 master-0 kubenswrapper[8018]: I0217 15:02:48.392960 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.393311 master-0 kubenswrapper[8018]: I0217 15:02:48.393055 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 17 15:02:48.393311 master-0 kubenswrapper[8018]: I0217 15:02:48.393164 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 17 15:02:48.393311 master-0 kubenswrapper[8018]: I0217 15:02:48.393201 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 17 15:02:48.393311 master-0 kubenswrapper[8018]: I0217 15:02:48.393227 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 17 15:02:48.393311 master-0 kubenswrapper[8018]: I0217 15:02:48.393258 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.393311 master-0 kubenswrapper[8018]: I0217 15:02:48.393229 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 17 15:02:48.393311 master-0 kubenswrapper[8018]: I0217 15:02:48.393320 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 17 15:02:48.393811 master-0 kubenswrapper[8018]: I0217 15:02:48.393396 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:48.393811 master-0 kubenswrapper[8018]: I0217 15:02:48.393399 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:02:48.393811 master-0 kubenswrapper[8018]: I0217 15:02:48.393430 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.393811 master-0 kubenswrapper[8018]: I0217 15:02:48.393450 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:02:48.393811 master-0 kubenswrapper[8018]: I0217 15:02:48.393585 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 17 15:02:48.393811 master-0 kubenswrapper[8018]: I0217 15:02:48.393669 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.394603 master-0 kubenswrapper[8018]: I0217 15:02:48.394286 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:02:48.394603 master-0 kubenswrapper[8018]: I0217 15:02:48.394356 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 17 15:02:48.394603 master-0 kubenswrapper[8018]: I0217 15:02:48.394370 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:48.394980 master-0 kubenswrapper[8018]: I0217 15:02:48.394887 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:02:48.396505 master-0 kubenswrapper[8018]: I0217 15:02:48.396426 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 17 15:02:48.396671 master-0 kubenswrapper[8018]: I0217 15:02:48.396628 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 17 15:02:48.396748 master-0 kubenswrapper[8018]: I0217 15:02:48.396634 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.396748 master-0 kubenswrapper[8018]: I0217 15:02:48.396696 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 17 15:02:48.397135 master-0 kubenswrapper[8018]: I0217 15:02:48.397105 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.401063 master-0 kubenswrapper[8018]: I0217 15:02:48.401005 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.402806 master-0 kubenswrapper[8018]: I0217 15:02:48.402757 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 17 15:02:48.403900 master-0 kubenswrapper[8018]: I0217 15:02:48.403844 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.405038 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.405269 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.405656 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.405875 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.405940 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.406168 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.406233 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.406343 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.406555 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.406882 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.406922 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.407344 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.407476 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.407775 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.407806 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.407850 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.407913 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.407933 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.408105 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.408121 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.408165 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.408238 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.408117 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 17 15:02:48.408731 master-0 kubenswrapper[8018]: I0217 15:02:48.408372 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 17 15:02:48.412802 master-0 kubenswrapper[8018]: I0217 15:02:48.412759 8018 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.416492 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.416694 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.416722 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.416974 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.417082 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.417239 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.417846 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.417919 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.417953 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.418091 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.418206 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.418745 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.427712 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.427782 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.427801 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.427713 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.427713 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.428033 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.427980 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.428136 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.428140 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.428205 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.428279 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.428808 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.429101 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.429908 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.430226 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.430937 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.431340 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.431754 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.431810 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.431938 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.432003 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.432022 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.431846 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.432103 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.432142 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.432183 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.432214 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.432289 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.432307 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.432184 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.432490 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 17 15:02:48.432434 master-0 kubenswrapper[8018]: I0217 15:02:48.432519 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 17 15:02:48.435893 master-0 kubenswrapper[8018]: I0217 15:02:48.432596 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 17 15:02:48.435893 master-0 kubenswrapper[8018]: I0217 15:02:48.432734 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 17 15:02:48.435893 master-0 kubenswrapper[8018]: I0217 15:02:48.432755 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 17 15:02:48.435893 master-0 kubenswrapper[8018]: I0217 15:02:48.435356 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 17 15:02:48.435893 master-0 kubenswrapper[8018]: I0217 15:02:48.435368 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 17 15:02:48.436603 master-0 kubenswrapper[8018]: I0217 15:02:48.436560 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 17 15:02:48.437500 master-0 kubenswrapper[8018]: I0217 15:02:48.437356 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 17 15:02:48.442599 master-0 kubenswrapper[8018]: I0217 15:02:48.442557 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 17 15:02:48.445406 master-0 kubenswrapper[8018]: I0217 15:02:48.445373 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 17 15:02:48.446715 master-0 kubenswrapper[8018]: I0217 15:02:48.446680 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 17 15:02:48.453857 master-0 kubenswrapper[8018]: I0217 15:02:48.453799 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID:
\"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" Feb 17 15:02:48.453981 master-0 kubenswrapper[8018]: I0217 15:02:48.453886 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4be2df82-c77a-4d26-9498-fa3beea54b81-service-ca\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:02:48.453981 master-0 kubenswrapper[8018]: I0217 15:02:48.453919 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-node-log\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.453981 master-0 kubenswrapper[8018]: I0217 15:02:48.453947 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxjqf\" (UniqueName: \"kubernetes.io/projected/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-kube-api-access-gxjqf\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" Feb 17 15:02:48.453981 master-0 kubenswrapper[8018]: I0217 15:02:48.453976 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.454146 master-0 kubenswrapper[8018]: I0217 15:02:48.454001 8018 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:02:48.454146 master-0 kubenswrapper[8018]: I0217 15:02:48.454028 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4fd2c79d-1e10-4f09-8a33-c66598abc99a-metrics-tls\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" Feb 17 15:02:48.454146 master-0 kubenswrapper[8018]: I0217 15:02:48.454052 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-multus-certs\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.454146 master-0 kubenswrapper[8018]: I0217 15:02:48.454078 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af61bda0-c7b4-489d-a671-eaa5299942fe-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" Feb 17 15:02:48.454146 master-0 kubenswrapper[8018]: I0217 15:02:48.454107 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b167b7b-2280-4c82-ac78-71c57aebe503-config\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: 
\"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" Feb 17 15:02:48.454146 master-0 kubenswrapper[8018]: I0217 15:02:48.454132 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-serving-cert\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:48.454358 master-0 kubenswrapper[8018]: I0217 15:02:48.454158 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:48.454358 master-0 kubenswrapper[8018]: I0217 15:02:48.454203 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-slash\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.454358 master-0 kubenswrapper[8018]: I0217 15:02:48.454218 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4be2df82-c77a-4d26-9498-fa3beea54b81-service-ca\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:02:48.454358 master-0 kubenswrapper[8018]: I0217 15:02:48.454314 8018 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lw6dc\" (UniqueName: \"kubernetes.io/projected/fc76384d-b288-4d30-bc77-f696b62a5f30-kube-api-access-lw6dc\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:02:48.454358 master-0 kubenswrapper[8018]: I0217 15:02:48.454343 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:48.454566 master-0 kubenswrapper[8018]: I0217 15:02:48.454371 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.454566 master-0 kubenswrapper[8018]: I0217 15:02:48.454382 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" Feb 17 15:02:48.454566 master-0 kubenswrapper[8018]: I0217 15:02:48.454394 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-system-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 
15:02:48.454566 master-0 kubenswrapper[8018]: I0217 15:02:48.454419 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/61d90bf3-02df-48c8-b2ec-09a1653b0800-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:02:48.454566 master-0 kubenswrapper[8018]: I0217 15:02:48.454445 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a905fb6-17d4-413b-9107-859c804ce906-ovn-node-metrics-cert\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.454566 master-0 kubenswrapper[8018]: I0217 15:02:48.454498 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e259b5a1-837b-4cde-85f7-cd5781af08bd-config\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" Feb 17 15:02:48.454566 master-0 kubenswrapper[8018]: I0217 15:02:48.454551 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-config\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.454566 master-0 kubenswrapper[8018]: I0217 15:02:48.454554 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-serving-cert\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: 
\"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:48.454566 master-0 kubenswrapper[8018]: I0217 15:02:48.454554 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4fd2c79d-1e10-4f09-8a33-c66598abc99a-metrics-tls\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454614 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-config\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454616 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/61d90bf3-02df-48c8-b2ec-09a1653b0800-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454650 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-client\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454677 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-etc-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454697 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-ovn\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454722 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czt92\" (UniqueName: \"kubernetes.io/projected/c6d23570-21d6-4b08-83fc-8b0827c25313-kube-api-access-czt92\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454730 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af61bda0-c7b4-489d-a671-eaa5299942fe-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454747 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/553d4535-9985-47e2-83ee-8fcfb6035e7b-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454758 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-config\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454770 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-profile-collector-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454808 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b167b7b-2280-4c82-ac78-71c57aebe503-config\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454812 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/632fa4c3-b717-432c-8c5f-8d809f69c48b-host-slash\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454852 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454869 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgs5v\" (UniqueName: \"kubernetes.io/projected/9a905fb6-17d4-413b-9107-859c804ce906-kube-api-access-mgs5v\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.454876 master-0 kubenswrapper[8018]: I0217 15:02:48.454888 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-k8s-cni-cncf-io\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.454906 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-ca\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.454923 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpgqg\" (UniqueName: \"kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-kube-api-access-jpgqg\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: 
I0217 15:02:48.454938 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cni-binary-copy\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.454961 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.454977 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7brbd\" (UniqueName: \"kubernetes.io/projected/fce9579e-7383-421e-95dd-8f8b786817f9-kube-api-access-7brbd\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455011 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-profile-collector-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455034 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-bound-sa-token\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " 
pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455041 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-client\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455056 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4fd2c79d-1e10-4f09-8a33-c66598abc99a-host-etc-kube\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455062 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-config\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455074 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g7zh\" (UniqueName: \"kubernetes.io/projected/65d9f008-7777-48fe-85fe-9d54a7bbcea9-kube-api-access-9g7zh\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455091 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-562gp\" (UniqueName: 
\"kubernetes.io/projected/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-kube-api-access-562gp\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455107 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455124 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-netns\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455128 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-ca\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455139 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455163 8018 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jt7w4\" (UniqueName: \"kubernetes.io/projected/af61bda0-c7b4-489d-a671-eaa5299942fe-kube-api-access-jt7w4\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455180 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4be2df82-c77a-4d26-9498-fa3beea54b81-kube-api-access\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455228 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg8h7\" (UniqueName: \"kubernetes.io/projected/257db04b-7203-4a1d-b3d4-bd4db258a3cc-kube-api-access-jg8h7\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455252 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455251 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.454869 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455284 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8df\" (UniqueName: \"kubernetes.io/projected/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-kube-api-access-wn8df\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455306 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t2vg\" (UniqueName: \"kubernetes.io/projected/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-kube-api-access-6t2vg\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455321 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e259b5a1-837b-4cde-85f7-cd5781af08bd-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv"
Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455338 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-trusted-ca-bundle\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455355 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgwfb\" (UniqueName: \"kubernetes.io/projected/4fd2c79d-1e10-4f09-8a33-c66598abc99a-kube-api-access-mgwfb\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:02:48.455442 master-0 kubenswrapper[8018]: I0217 15:02:48.455355 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e259b5a1-837b-4cde-85f7-cd5781af08bd-config\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.455633 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c734c89-515e-4ff0-82d1-831ddaf0b99e-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.455649 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cni-binary-copy\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.455665 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/632fa4c3-b717-432c-8c5f-8d809f69c48b-iptables-alerter-script\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.455675 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.455691 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/553d4535-9985-47e2-83ee-8fcfb6035e7b-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.455797 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.455838 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c734c89-515e-4ff0-82d1-831ddaf0b99e-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.455838 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.455887 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-multus\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.455914 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-webhook-cert\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.455940 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrh2k\" (UniqueName: \"kubernetes.io/projected/071566ae-a9ae-4aa9-9dc3-38602363be72-kube-api-access-hrh2k\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.455964 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv46m\" (UniqueName: \"kubernetes.io/projected/6b25a72d-965f-415c-abc9-09612859e9e0-kube-api-access-fv46m\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.455987 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/6c734c89-515e-4ff0-82d1-831ddaf0b99e-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456192 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/6c734c89-515e-4ff0-82d1-831ddaf0b99e-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456238 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-trusted-ca-bundle\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456257 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxqt4\" (UniqueName: \"kubernetes.io/projected/801742a6-3735-4883-9676-e852dc4173d2-kube-api-access-qxqt4\") pod \"csi-snapshot-controller-operator-7b87b97578-9fpgj\" (UID: \"801742a6-3735-4883-9676-e852dc4173d2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456287 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-binary-copy\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456312 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b167b7b-2280-4c82-ac78-71c57aebe503-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456336 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456359 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8wxf\" (UniqueName: \"kubernetes.io/projected/08e27254-e906-484a-b346-036f898be3ae-kube-api-access-d8wxf\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456382 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/553d4535-9985-47e2-83ee-8fcfb6035e7b-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456383 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-ovnkube-identity-cm\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456413 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-var-lib-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456429 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-kubelet\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456447 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/071566ae-a9ae-4aa9-9dc3-38602363be72-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456477 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cnibin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456493 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-bin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456508 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456523 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-socket-dir-parent\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456541 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xbnc\" (UniqueName: \"kubernetes.io/projected/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-kube-api-access-8xbnc\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456558 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31e31afc-79d5-46f4-9835-0fd11da9465f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456608 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456625 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456642 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh874\" (UniqueName: \"kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-kube-api-access-bh874\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456658 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wbvx\" (UniqueName: \"kubernetes.io/projected/61d90bf3-02df-48c8-b2ec-09a1653b0800-kube-api-access-5wbvx\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456692 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-ovnkube-identity-cm\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456731 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-bin\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456744 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-webhook-cert\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456898 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-service-ca-bundle\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456974 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/071566ae-a9ae-4aa9-9dc3-38602363be72-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456982 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.456750 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-service-ca-bundle\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457043 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-binary-copy\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457058 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457093 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bzqs\" (UniqueName: \"kubernetes.io/projected/fb153362-0abb-4aad-8975-532f6e72d032-kube-api-access-7bzqs\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457140 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/553d4535-9985-47e2-83ee-8fcfb6035e7b-config\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457193 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457194 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457270 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-log-socket\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457303 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457345 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-netns\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457365 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/553d4535-9985-47e2-83ee-8fcfb6035e7b-config\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457368 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-config\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457412 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-conf-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457445 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457446 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af61bda0-c7b4-489d-a671-eaa5299942fe-config\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457519 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rddwz\" (UniqueName: \"kubernetes.io/projected/6c734c89-515e-4ff0-82d1-831ddaf0b99e-kube-api-access-rddwz\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457548 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-config\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457566 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-daemon-config\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457623 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457643 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457665 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457674 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af61bda0-c7b4-489d-a671-eaa5299942fe-config\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457681 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457773 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-whereabouts-configmap\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457790 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b167b7b-2280-4c82-ac78-71c57aebe503-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457806 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh2m4\" (UniqueName: \"kubernetes.io/projected/31e31afc-79d5-46f4-9835-0fd11da9465f-kube-api-access-jh2m4\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457825 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-serving-cert\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457806 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-daemon-config\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457860 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpwhf\" (UniqueName: \"kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf\") pod \"network-check-target-f25s7\" (UID: \"727f20b6-19c7-45eb-a803-6898ecaeffd0\") " pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457883 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcb68\" (UniqueName: \"kubernetes.io/projected/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-kube-api-access-jcb68\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457928 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/187af679-a062-4f41-81f2-33545f76febf-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457945 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-env-overrides\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457960 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nzlr\" (UniqueName: \"kubernetes.io/projected/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-kube-api-access-7nzlr\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457977 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-os-release\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.457986 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b167b7b-2280-4c82-ac78-71c57aebe503-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.458067 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-script-lib\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.458096 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.458116 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-os-release\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.458130 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-kubelet\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.458147 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22a30079-d7fc-49cf-882e-1c5022cb5bf6-trusted-ca\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.458221 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/187af679-a062-4f41-81f2-33545f76febf-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.458314 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:48.458233 master-0 kubenswrapper[8018]: I0217 15:02:48.458318 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-env-overrides\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458438 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-whereabouts-configmap\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458566 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-serving-cert\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458573 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22a30079-d7fc-49cf-882e-1c5022cb5bf6-trusted-ca\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458594 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-systemd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") "
pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458629 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65d9f008-7777-48fe-85fe-9d54a7bbcea9-serving-cert\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458647 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bpwm\" (UniqueName: \"kubernetes.io/projected/632fa4c3-b717-432c-8c5f-8d809f69c48b-kube-api-access-8bpwm\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458682 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458702 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-config\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458731 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-env-overrides\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458748 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458755 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65d9f008-7777-48fe-85fe-9d54a7bbcea9-serving-cert\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458764 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-system-cni-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458844 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e259b5a1-837b-4cde-85f7-cd5781af08bd-serving-cert\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458867 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-systemd-units\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458886 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65d9f008-7777-48fe-85fe-9d54a7bbcea9-config\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458902 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-cnibin\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458921 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpq86\" (UniqueName: \"kubernetes.io/projected/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-kube-api-access-cpq86\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458932 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e259b5a1-837b-4cde-85f7-cd5781af08bd-serving-cert\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458958 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458974 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-hostroot\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.458989 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-etc-kubernetes\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.459014 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61d90bf3-02df-48c8-b2ec-09a1653b0800-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.459032 8018 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-config\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.459037 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-env-overrides\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.459050 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-profile-collector-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.459068 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.459085 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.459102 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-netd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.459191 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65d9f008-7777-48fe-85fe-9d54a7bbcea9-config\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.459341 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61d90bf3-02df-48c8-b2ec-09a1653b0800-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.459475 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-config\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.459504 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-config\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" Feb 17 15:02:48.461531 master-0 kubenswrapper[8018]: I0217 15:02:48.459658 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-profile-collector-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:02:48.475286 master-0 kubenswrapper[8018]: I0217 15:02:48.475225 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 15:02:48.477803 master-0 kubenswrapper[8018]: I0217 15:02:48.477765 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31e31afc-79d5-46f4-9835-0fd11da9465f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" Feb 17 15:02:48.495500 master-0 kubenswrapper[8018]: I0217 15:02:48.495451 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 15:02:48.496628 master-0 kubenswrapper[8018]: I0217 15:02:48.496597 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/632fa4c3-b717-432c-8c5f-8d809f69c48b-iptables-alerter-script\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " 
pod="openshift-network-operator/iptables-alerter-v2h9q" Feb 17 15:02:48.515476 master-0 kubenswrapper[8018]: I0217 15:02:48.515391 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 15:02:48.525307 master-0 kubenswrapper[8018]: I0217 15:02:48.525261 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a905fb6-17d4-413b-9107-859c804ce906-ovn-node-metrics-cert\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.537983 master-0 kubenswrapper[8018]: I0217 15:02:48.537667 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 15:02:48.538888 master-0 kubenswrapper[8018]: I0217 15:02:48.538841 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-script-lib\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.561074 master-0 kubenswrapper[8018]: I0217 15:02:48.560580 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.561074 master-0 kubenswrapper[8018]: I0217 15:02:48.560661 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-multus\") pod \"multus-9r5rl\" (UID: 
\"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.561074 master-0 kubenswrapper[8018]: I0217 15:02:48.560761 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-multus\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.561074 master-0 kubenswrapper[8018]: I0217 15:02:48.560772 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.561074 master-0 kubenswrapper[8018]: I0217 15:02:48.560879 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:02:48.561074 master-0 kubenswrapper[8018]: I0217 15:02:48.560941 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-var-lib-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.561074 master-0 kubenswrapper[8018]: I0217 15:02:48.560980 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-kubelet\") 
pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.561074 master-0 kubenswrapper[8018]: I0217 15:02:48.561021 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cnibin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.561074 master-0 kubenswrapper[8018]: E0217 15:02:48.561070 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 15:02:48.561128 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-var-lib-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: E0217 15:02:48.561150 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert podName:08e27254-e906-484a-b346-036f898be3ae nodeName:}" failed. No retries permitted until 2026-02-17 15:02:49.061123491 +0000 UTC m=+1.813466541 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert") pod "catalog-operator-588944557d-kjh2v" (UID: "08e27254-e906-484a-b346-036f898be3ae") : secret "catalog-operator-serving-cert" not found Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 15:02:48.561215 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-bin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 15:02:48.561297 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-kubelet\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 15:02:48.561368 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 15:02:48.561377 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-bin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 15:02:48.561255 8018 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cnibin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 15:02:48.561502 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 15:02:48.561542 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-socket-dir-parent\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: E0217 15:02:48.561567 8018 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 15:02:48.561586 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 15:02:48.561613 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: E0217 15:02:48.561639 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert podName:4be2df82-c77a-4d26-9498-fa3beea54b81 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:49.061614143 +0000 UTC m=+1.813957223 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert") pod "cluster-version-operator-76959b6567-v49tq" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81") : secret "cluster-version-operator-serving-cert" not found Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 15:02:48.561638 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-socket-dir-parent\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 15:02:48.561674 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-bin\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 15:02:48.561761 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-bin\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 
15:02:48.561761 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: E0217 15:02:48.561789 8018 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 17 15:02:48.561790 master-0 kubenswrapper[8018]: I0217 15:02:48.561803 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-log-socket\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.561832 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-netns\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: E0217 15:02:48.561860 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:49.061835378 +0000 UTC m=+1.814178468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "performance-addon-operator-webhook-cert" not found
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.561868 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-netns\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.561889 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-conf-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.561905 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-log-socket\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.561796 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.561949 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.561977 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562004 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.561952 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-conf-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562032 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: E0217 15:02:48.562108 8018 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: E0217 15:02:48.562154 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs podName:fce9579e-7383-421e-95dd-8f8b786817f9 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:49.062140226 +0000 UTC m=+1.814483306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs") pod "network-metrics-daemon-bnllz" (UID: "fce9579e-7383-421e-95dd-8f8b786817f9") : secret "metrics-daemon-secret" not found
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: E0217 15:02:48.562156 8018 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: E0217 15:02:48.562196 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs podName:6b25a72d-965f-415c-abc9-09612859e9e0 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:49.062185107 +0000 UTC m=+1.814528197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs") pod "multus-admission-controller-7c64d55f8-fzfsp" (UID: "6b25a72d-965f-415c-abc9-09612859e9e0") : secret "multus-admission-controller-secret" not found
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: E0217 15:02:48.562209 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562235 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpwhf\" (UniqueName: \"kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf\") pod \"network-check-target-f25s7\" (UID: \"727f20b6-19c7-45eb-a803-6898ecaeffd0\") " pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: E0217 15:02:48.562246 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert podName:257db04b-7203-4a1d-b3d4-bd4db258a3cc nodeName:}" failed. No retries permitted until 2026-02-17 15:02:49.062234608 +0000 UTC m=+1.814577748 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert") pod "olm-operator-6b56bd877c-tk8xm" (UID: "257db04b-7203-4a1d-b3d4-bd4db258a3cc") : secret "olm-operator-serving-cert" not found
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: E0217 15:02:48.562109 8018 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: E0217 15:02:48.562281 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:49.062273389 +0000 UTC m=+1.814616559 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "node-tuning-operator-tls" not found
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562313 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-os-release\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562340 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-kubelet\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562370 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-os-release\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562395 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-systemd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562429 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562475 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562501 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-system-cni-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562515 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-os-release\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562527 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-systemd-units\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562562 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-systemd-units\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562569 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-cnibin\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562612 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562626 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-os-release\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562646 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-hostroot\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562664 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-systemd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562681 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-etc-kubernetes\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562693 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-kubelet\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562727 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562747 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562765 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-netd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: E0217 15:02:48.562807 8018 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562804 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-node-log\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: E0217 15:02:48.562846 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls podName:bf74b8c3-a5a6-4fb9-9d12-3a47c759f699 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:49.062827962 +0000 UTC m=+1.815171122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ddgs9" (UID: "bf74b8c3-a5a6-4fb9-9d12-3a47c759f699") : secret "cluster-monitoring-operator-tls" not found
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562906 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-netd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562945 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-system-cni-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.562980 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-hostroot\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.562939 master-0 kubenswrapper[8018]: I0217 15:02:48.563009 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-cnibin\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563051 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-node-log\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563087 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563120 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563156 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563191 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-slash\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563289 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-multus-certs\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563391 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563426 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-system-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563485 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-etc-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563515 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-ovn\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563561 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/632fa4c3-b717-432c-8c5f-8d809f69c48b-host-slash\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563616 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-k8s-cni-cncf-io\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563666 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563709 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4fd2c79d-1e10-4f09-8a33-c66598abc99a-host-etc-kube\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563799 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563851 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-netns\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563948 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-netns\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.563994 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-etc-kubernetes\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: E0217 15:02:48.564074 8018 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: E0217 15:02:48.564122 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls podName:fc76384d-b288-4d30-bc77-f696b62a5f30 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:49.064106183 +0000 UTC m=+1.816449273 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls") pod "dns-operator-86b8869b79-lmqrr" (UID: "fc76384d-b288-4d30-bc77-f696b62a5f30") : secret "metrics-tls" not found
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.564161 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: E0217 15:02:48.564226 8018 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: E0217 15:02:48.564265 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics podName:c6d23570-21d6-4b08-83fc-8b0827c25313 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:49.064252616 +0000 UTC m=+1.816595696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-wqxmh" (UID: "c6d23570-21d6-4b08-83fc-8b0827c25313") : secret "marketplace-operator-metrics" not found
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: E0217 15:02:48.564324 8018 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: E0217 15:02:48.564359 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls podName:187af679-a062-4f41-81f2-33545f76febf nodeName:}" failed. No retries permitted until 2026-02-17 15:02:49.064346818 +0000 UTC m=+1.816689898 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-dtwmd" (UID: "187af679-a062-4f41-81f2-33545f76febf") : secret "image-registry-operator-tls" not found
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.564396 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-slash\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.564442 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-multus-certs\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.564515 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.564577 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-system-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.564614 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-etc-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.564667 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-ovn\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.564718 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/632fa4c3-b717-432c-8c5f-8d809f69c48b-host-slash\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.564769 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-k8s-cni-cncf-io\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: E0217 15:02:48.564843 8018 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: E0217 15:02:48.564881 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls podName:22a30079-d7fc-49cf-882e-1c5022cb5bf6 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:49.064867971 +0000 UTC m=+1.817211051 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls") pod "ingress-operator-c588d8cb4-nclxg" (UID: "22a30079-d7fc-49cf-882e-1c5022cb5bf6") : secret "metrics-tls" not found
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.564935 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4fd2c79d-1e10-4f09-8a33-c66598abc99a-host-etc-kube\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: E0217 15:02:48.565006 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: E0217 15:02:48.565052 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert podName:33e819b0-5a3f-4c2d-9dc7-8b0231804cdb nodeName:}" failed. No retries permitted until 2026-02-17 15:02:49.065040425 +0000 UTC m=+1.817383505 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-t7n5b" (UID: "33e819b0-5a3f-4c2d-9dc7-8b0231804cdb") : secret "package-server-manager-serving-cert" not found
Feb 17 15:02:48.565709 master-0 kubenswrapper[8018]: I0217 15:02:48.565091 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:02:49.070740 master-0 kubenswrapper[8018]: I0217 15:02:49.070649 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:49.070740 master-0 kubenswrapper[8018]: I0217 15:02:49.070745 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: I0217 15:02:49.070784 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: I0217 15:02:49.070830 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.070922 8018 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.071025 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs podName:fce9579e-7383-421e-95dd-8f8b786817f9 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:50.071004426 +0000 UTC m=+2.823347476 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs") pod "network-metrics-daemon-bnllz" (UID: "fce9579e-7383-421e-95dd-8f8b786817f9") : secret "metrics-daemon-secret" not found
Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.070940 8018 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.071110 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs podName:6b25a72d-965f-415c-abc9-09612859e9e0 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:50.071084857 +0000 UTC m=+2.823427907 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs") pod "multus-admission-controller-7c64d55f8-fzfsp" (UID: "6b25a72d-965f-415c-abc9-09612859e9e0") : secret "multus-admission-controller-secret" not found
Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.071111 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: I0217 15:02:49.071207 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.071236 8018 nestedpendingoperations.go:348] Operation
for "{volumeName:kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert podName:257db04b-7203-4a1d-b3d4-bd4db258a3cc nodeName:}" failed. No retries permitted until 2026-02-17 15:02:50.0712123 +0000 UTC m=+2.823555350 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert") pod "olm-operator-6b56bd877c-tk8xm" (UID: "257db04b-7203-4a1d-b3d4-bd4db258a3cc") : secret "olm-operator-serving-cert" not found Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.071269 8018 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.071282 8018 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.071304 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:50.071290152 +0000 UTC m=+2.823633202 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "node-tuning-operator-tls" not found Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: I0217 15:02:49.071302 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.071318 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls podName:bf74b8c3-a5a6-4fb9-9d12-3a47c759f699 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:50.071310423 +0000 UTC m=+2.823653473 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ddgs9" (UID: "bf74b8c3-a5a6-4fb9-9d12-3a47c759f699") : secret "cluster-monitoring-operator-tls" not found Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: I0217 15:02:49.071375 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: I0217 15:02:49.071428 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.071450 8018 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.071494 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls podName:fc76384d-b288-4d30-bc77-f696b62a5f30 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:50.071480987 +0000 UTC m=+2.823824037 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls") pod "dns-operator-86b8869b79-lmqrr" (UID: "fc76384d-b288-4d30-bc77-f696b62a5f30") : secret "metrics-tls" not found Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.071548 8018 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: I0217 15:02:49.071586 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.071609 8018 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.071632 8018 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: E0217 15:02:49.071705 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics podName:c6d23570-21d6-4b08-83fc-8b0827c25313 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:50.07163484 +0000 UTC m=+2.823977890 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-wqxmh" (UID: "c6d23570-21d6-4b08-83fc-8b0827c25313") : secret "marketplace-operator-metrics" not found Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: I0217 15:02:49.071792 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:02:49.071970 master-0 kubenswrapper[8018]: I0217 15:02:49.071967 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:02:49.072696 master-0 kubenswrapper[8018]: I0217 15:02:49.072025 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:02:49.072696 master-0 kubenswrapper[8018]: I0217 15:02:49.072079 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: 
\"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:02:49.072696 master-0 kubenswrapper[8018]: E0217 15:02:49.072288 8018 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 17 15:02:49.072696 master-0 kubenswrapper[8018]: E0217 15:02:49.072332 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:50.072318997 +0000 UTC m=+2.824662277 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "performance-addon-operator-webhook-cert" not found Feb 17 15:02:49.072696 master-0 kubenswrapper[8018]: E0217 15:02:49.072399 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 17 15:02:49.072696 master-0 kubenswrapper[8018]: E0217 15:02:49.072433 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert podName:08e27254-e906-484a-b346-036f898be3ae nodeName:}" failed. No retries permitted until 2026-02-17 15:02:50.072417659 +0000 UTC m=+2.824760709 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert") pod "catalog-operator-588944557d-kjh2v" (UID: "08e27254-e906-484a-b346-036f898be3ae") : secret "catalog-operator-serving-cert" not found Feb 17 15:02:49.072696 master-0 kubenswrapper[8018]: E0217 15:02:49.072485 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 17 15:02:49.072696 master-0 kubenswrapper[8018]: E0217 15:02:49.072528 8018 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 17 15:02:49.072696 master-0 kubenswrapper[8018]: E0217 15:02:49.072567 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert podName:4be2df82-c77a-4d26-9498-fa3beea54b81 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:50.072556783 +0000 UTC m=+2.824899833 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert") pod "cluster-version-operator-76959b6567-v49tq" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81") : secret "cluster-version-operator-serving-cert" not found Feb 17 15:02:49.072696 master-0 kubenswrapper[8018]: E0217 15:02:49.072603 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert podName:33e819b0-5a3f-4c2d-9dc7-8b0231804cdb nodeName:}" failed. No retries permitted until 2026-02-17 15:02:50.072574543 +0000 UTC m=+2.824917593 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-t7n5b" (UID: "33e819b0-5a3f-4c2d-9dc7-8b0231804cdb") : secret "package-server-manager-serving-cert" not found Feb 17 15:02:49.072696 master-0 kubenswrapper[8018]: E0217 15:02:49.072628 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls podName:22a30079-d7fc-49cf-882e-1c5022cb5bf6 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:50.072616544 +0000 UTC m=+2.824959604 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls") pod "ingress-operator-c588d8cb4-nclxg" (UID: "22a30079-d7fc-49cf-882e-1c5022cb5bf6") : secret "metrics-tls" not found Feb 17 15:02:49.072696 master-0 kubenswrapper[8018]: E0217 15:02:49.072656 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls podName:187af679-a062-4f41-81f2-33545f76febf nodeName:}" failed. No retries permitted until 2026-02-17 15:02:50.072637045 +0000 UTC m=+2.824980105 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-dtwmd" (UID: "187af679-a062-4f41-81f2-33545f76febf") : secret "image-registry-operator-tls" not found Feb 17 15:02:50.088008 master-0 kubenswrapper[8018]: I0217 15:02:50.087841 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:02:50.088008 master-0 kubenswrapper[8018]: I0217 15:02:50.087961 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: E0217 15:02:50.088112 8018 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: E0217 15:02:50.088110 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: E0217 15:02:50.088182 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:02:52.088162281 +0000 UTC m=+4.840505431 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "node-tuning-operator-tls" not found Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: E0217 15:02:50.088405 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert podName:257db04b-7203-4a1d-b3d4-bd4db258a3cc nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.088389097 +0000 UTC m=+4.840732307 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert") pod "olm-operator-6b56bd877c-tk8xm" (UID: "257db04b-7203-4a1d-b3d4-bd4db258a3cc") : secret "olm-operator-serving-cert" not found Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: I0217 15:02:50.088446 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: I0217 15:02:50.088570 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: 
E0217 15:02:50.088623 8018 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: I0217 15:02:50.088636 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: E0217 15:02:50.088687 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls podName:bf74b8c3-a5a6-4fb9-9d12-3a47c759f699 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.088657253 +0000 UTC m=+4.841000353 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ddgs9" (UID: "bf74b8c3-a5a6-4fb9-9d12-3a47c759f699") : secret "cluster-monitoring-operator-tls" not found Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: E0217 15:02:50.088725 8018 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: I0217 15:02:50.088728 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: E0217 15:02:50.088769 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics podName:c6d23570-21d6-4b08-83fc-8b0827c25313 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.088753185 +0000 UTC m=+4.841096345 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-wqxmh" (UID: "c6d23570-21d6-4b08-83fc-8b0827c25313") : secret "marketplace-operator-metrics" not found Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: E0217 15:02:50.088874 8018 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: E0217 15:02:50.088961 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls podName:fc76384d-b288-4d30-bc77-f696b62a5f30 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.08893235 +0000 UTC m=+4.841275450 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls") pod "dns-operator-86b8869b79-lmqrr" (UID: "fc76384d-b288-4d30-bc77-f696b62a5f30") : secret "metrics-tls" not found Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: E0217 15:02:50.088991 8018 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: I0217 15:02:50.089048 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: E0217 15:02:50.089082 8018 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 17 15:02:50.089234 master-0 
kubenswrapper[8018]: E0217 15:02:50.089121 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls podName:22a30079-d7fc-49cf-882e-1c5022cb5bf6 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.089104454 +0000 UTC m=+4.841447624 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls") pod "ingress-operator-c588d8cb4-nclxg" (UID: "22a30079-d7fc-49cf-882e-1c5022cb5bf6") : secret "metrics-tls" not found Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: I0217 15:02:50.089196 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:02:50.089234 master-0 kubenswrapper[8018]: E0217 15:02:50.089244 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls podName:187af679-a062-4f41-81f2-33545f76febf nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.089224526 +0000 UTC m=+4.841567596 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-dtwmd" (UID: "187af679-a062-4f41-81f2-33545f76febf") : secret "image-registry-operator-tls" not found Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: E0217 15:02:50.089305 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: E0217 15:02:50.089392 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert podName:33e819b0-5a3f-4c2d-9dc7-8b0231804cdb nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.08936901 +0000 UTC m=+4.841712100 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-t7n5b" (UID: "33e819b0-5a3f-4c2d-9dc7-8b0231804cdb") : secret "package-server-manager-serving-cert" not found Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: I0217 15:02:50.089547 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: I0217 15:02:50.089660 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") 
pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: E0217 15:02:50.089710 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: I0217 15:02:50.089725 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: E0217 15:02:50.089788 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert podName:08e27254-e906-484a-b346-036f898be3ae nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.08976407 +0000 UTC m=+4.842107170 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert") pod "catalog-operator-588944557d-kjh2v" (UID: "08e27254-e906-484a-b346-036f898be3ae") : secret "catalog-operator-serving-cert" not found Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: E0217 15:02:50.089864 8018 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: E0217 15:02:50.089915 8018 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: E0217 15:02:50.089934 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.089911463 +0000 UTC m=+4.842254573 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "performance-addon-operator-webhook-cert" not found
Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: I0217 15:02:50.090077 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: E0217 15:02:50.090135 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert podName:4be2df82-c77a-4d26-9498-fa3beea54b81 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.090101398 +0000 UTC m=+4.842444488 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert") pod "cluster-version-operator-76959b6567-v49tq" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81") : secret "cluster-version-operator-serving-cert" not found
Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: I0217 15:02:50.090190 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: E0217 15:02:50.090207 8018 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: E0217 15:02:50.090269 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs podName:fce9579e-7383-421e-95dd-8f8b786817f9 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.090249481 +0000 UTC m=+4.842592581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs") pod "network-metrics-daemon-bnllz" (UID: "fce9579e-7383-421e-95dd-8f8b786817f9") : secret "metrics-daemon-secret" not found
Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: E0217 15:02:50.090368 8018 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 17 15:02:50.090777 master-0 kubenswrapper[8018]: E0217 15:02:50.090424 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs podName:6b25a72d-965f-415c-abc9-09612859e9e0 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:52.090408475 +0000 UTC m=+4.842751565 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs") pod "multus-admission-controller-7c64d55f8-fzfsp" (UID: "6b25a72d-965f-415c-abc9-09612859e9e0") : secret "multus-admission-controller-secret" not found
Feb 17 15:02:52.120897 master-0 kubenswrapper[8018]: I0217 15:02:52.120744 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:52.121588 master-0 kubenswrapper[8018]: I0217 15:02:52.121041 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr"
Feb 17 15:02:52.121588 master-0 kubenswrapper[8018]: E0217 15:02:52.121079 8018 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 17 15:02:52.121588 master-0 kubenswrapper[8018]: E0217 15:02:52.121264 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls podName:bf74b8c3-a5a6-4fb9-9d12-3a47c759f699 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:56.121223215 +0000 UTC m=+8.873566305 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ddgs9" (UID: "bf74b8c3-a5a6-4fb9-9d12-3a47c759f699") : secret "cluster-monitoring-operator-tls" not found
Feb 17 15:02:52.121588 master-0 kubenswrapper[8018]: I0217 15:02:52.121127 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:02:52.121588 master-0 kubenswrapper[8018]: I0217 15:02:52.121345 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:02:52.121588 master-0 kubenswrapper[8018]: E0217 15:02:52.121262 8018 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 17 15:02:52.121588 master-0 kubenswrapper[8018]: I0217 15:02:52.121435 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:02:52.121588 master-0 kubenswrapper[8018]: E0217 15:02:52.121484 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls podName:fc76384d-b288-4d30-bc77-f696b62a5f30 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:56.12143718 +0000 UTC m=+8.873780270 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls") pod "dns-operator-86b8869b79-lmqrr" (UID: "fc76384d-b288-4d30-bc77-f696b62a5f30") : secret "metrics-tls" not found
Feb 17 15:02:52.121588 master-0 kubenswrapper[8018]: E0217 15:02:52.121322 8018 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 17 15:02:52.121588 master-0 kubenswrapper[8018]: E0217 15:02:52.121570 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics podName:c6d23570-21d6-4b08-83fc-8b0827c25313 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:56.121550822 +0000 UTC m=+8.873893882 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-wqxmh" (UID: "c6d23570-21d6-4b08-83fc-8b0827c25313") : secret "marketplace-operator-metrics" not found
Feb 17 15:02:52.121588 master-0 kubenswrapper[8018]: E0217 15:02:52.121571 8018 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 17 15:02:52.122027 master-0 kubenswrapper[8018]: E0217 15:02:52.121630 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls podName:187af679-a062-4f41-81f2-33545f76febf nodeName:}" failed. No retries permitted until 2026-02-17 15:02:56.121615234 +0000 UTC m=+8.873958294 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-dtwmd" (UID: "187af679-a062-4f41-81f2-33545f76febf") : secret "image-registry-operator-tls" not found
Feb 17 15:02:52.122027 master-0 kubenswrapper[8018]: E0217 15:02:52.121659 8018 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 17 15:02:52.122027 master-0 kubenswrapper[8018]: E0217 15:02:52.121744 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls podName:22a30079-d7fc-49cf-882e-1c5022cb5bf6 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:56.121725746 +0000 UTC m=+8.874069086 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls") pod "ingress-operator-c588d8cb4-nclxg" (UID: "22a30079-d7fc-49cf-882e-1c5022cb5bf6") : secret "metrics-tls" not found
Feb 17 15:02:52.122027 master-0 kubenswrapper[8018]: I0217 15:02:52.121824 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:02:52.122027 master-0 kubenswrapper[8018]: E0217 15:02:52.122005 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 17 15:02:52.122208 master-0 kubenswrapper[8018]: I0217 15:02:52.122032 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:02:52.122208 master-0 kubenswrapper[8018]: E0217 15:02:52.122072 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert podName:33e819b0-5a3f-4c2d-9dc7-8b0231804cdb nodeName:}" failed. No retries permitted until 2026-02-17 15:02:56.122053444 +0000 UTC m=+8.874396544 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-t7n5b" (UID: "33e819b0-5a3f-4c2d-9dc7-8b0231804cdb") : secret "package-server-manager-serving-cert" not found
Feb 17 15:02:52.122208 master-0 kubenswrapper[8018]: I0217 15:02:52.122141 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:02:52.122208 master-0 kubenswrapper[8018]: E0217 15:02:52.122164 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 17 15:02:52.122208 master-0 kubenswrapper[8018]: I0217 15:02:52.122185 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:52.122208 master-0 kubenswrapper[8018]: E0217 15:02:52.122211 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert podName:08e27254-e906-484a-b346-036f898be3ae nodeName:}" failed. No retries permitted until 2026-02-17 15:02:56.122194437 +0000 UTC m=+8.874537497 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert") pod "catalog-operator-588944557d-kjh2v" (UID: "08e27254-e906-484a-b346-036f898be3ae") : secret "catalog-operator-serving-cert" not found
Feb 17 15:02:52.122578 master-0 kubenswrapper[8018]: E0217 15:02:52.122267 8018 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 17 15:02:52.122578 master-0 kubenswrapper[8018]: E0217 15:02:52.122304 8018 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 17 15:02:52.122578 master-0 kubenswrapper[8018]: E0217 15:02:52.122335 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:56.12232022 +0000 UTC m=+8.874663310 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "performance-addon-operator-webhook-cert" not found
Feb 17 15:02:52.122578 master-0 kubenswrapper[8018]: E0217 15:02:52.122349 8018 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 17 15:02:52.122578 master-0 kubenswrapper[8018]: E0217 15:02:52.122396 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert podName:4be2df82-c77a-4d26-9498-fa3beea54b81 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:56.122371331 +0000 UTC m=+8.874714561 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert") pod "cluster-version-operator-76959b6567-v49tq" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81") : secret "cluster-version-operator-serving-cert" not found
Feb 17 15:02:52.122578 master-0 kubenswrapper[8018]: I0217 15:02:52.122310 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:52.122578 master-0 kubenswrapper[8018]: E0217 15:02:52.122439 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs podName:fce9579e-7383-421e-95dd-8f8b786817f9 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:56.122417952 +0000 UTC m=+8.874761282 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs") pod "network-metrics-daemon-bnllz" (UID: "fce9579e-7383-421e-95dd-8f8b786817f9") : secret "metrics-daemon-secret" not found
Feb 17 15:02:52.122578 master-0 kubenswrapper[8018]: I0217 15:02:52.122537 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:02:52.122578 master-0 kubenswrapper[8018]: I0217 15:02:52.122578 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:02:52.122910 master-0 kubenswrapper[8018]: I0217 15:02:52.122620 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:52.122910 master-0 kubenswrapper[8018]: E0217 15:02:52.122710 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 17 15:02:52.122910 master-0 kubenswrapper[8018]: E0217 15:02:52.122773 8018 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 17 15:02:52.122910 master-0 kubenswrapper[8018]: E0217 15:02:52.122777 8018 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 17 15:02:52.122910 master-0 kubenswrapper[8018]: E0217 15:02:52.122806 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert podName:257db04b-7203-4a1d-b3d4-bd4db258a3cc nodeName:}" failed. No retries permitted until 2026-02-17 15:02:56.122783272 +0000 UTC m=+8.875126552 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert") pod "olm-operator-6b56bd877c-tk8xm" (UID: "257db04b-7203-4a1d-b3d4-bd4db258a3cc") : secret "olm-operator-serving-cert" not found
Feb 17 15:02:52.122910 master-0 kubenswrapper[8018]: E0217 15:02:52.122843 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:56.122827713 +0000 UTC m=+8.875170803 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "node-tuning-operator-tls" not found
Feb 17 15:02:52.122910 master-0 kubenswrapper[8018]: E0217 15:02:52.122870 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs podName:6b25a72d-965f-415c-abc9-09612859e9e0 nodeName:}" failed. No retries permitted until 2026-02-17 15:02:56.122856854 +0000 UTC m=+8.875199944 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs") pod "multus-admission-controller-7c64d55f8-fzfsp" (UID: "6b25a72d-965f-415c-abc9-09612859e9e0") : secret "multus-admission-controller-secret" not found
Feb 17 15:02:55.560344 master-0 kubenswrapper[8018]: E0217 15:02:55.560146 8018 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.121s"
Feb 17 15:02:55.566365 master-0 kubenswrapper[8018]: I0217 15:02:55.566315 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:02:55.566365 master-0 kubenswrapper[8018]: I0217 15:02:55.566369 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:02:55.566572 master-0 kubenswrapper[8018]: I0217 15:02:55.566384 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 17 15:02:55.566572 master-0 kubenswrapper[8018]: I0217 15:02:55.566396 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:02:55.566572 master-0 kubenswrapper[8018]: I0217 15:02:55.566488 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:02:56.179372 master-0 kubenswrapper[8018]: I0217 15:02:56.179291 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:02:56.179616 master-0 kubenswrapper[8018]: E0217 15:02:56.179494 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 17 15:02:56.179616 master-0 kubenswrapper[8018]: E0217 15:02:56.179576 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert podName:257db04b-7203-4a1d-b3d4-bd4db258a3cc nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.179555722 +0000 UTC m=+16.931898772 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert") pod "olm-operator-6b56bd877c-tk8xm" (UID: "257db04b-7203-4a1d-b3d4-bd4db258a3cc") : secret "olm-operator-serving-cert" not found
Feb 17 15:02:56.179616 master-0 kubenswrapper[8018]: I0217 15:02:56.179515 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:56.179716 master-0 kubenswrapper[8018]: E0217 15:02:56.179619 8018 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 17 15:02:56.179716 master-0 kubenswrapper[8018]: I0217 15:02:56.179668 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:02:56.179716 master-0 kubenswrapper[8018]: E0217 15:02:56.179678 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.179663285 +0000 UTC m=+16.932006335 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "node-tuning-operator-tls" not found
Feb 17 15:02:56.179716 master-0 kubenswrapper[8018]: I0217 15:02:56.179709 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr"
Feb 17 15:02:56.179816 master-0 kubenswrapper[8018]: E0217 15:02:56.179724 8018 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 17 15:02:56.179816 master-0 kubenswrapper[8018]: I0217 15:02:56.179741 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:02:56.179816 master-0 kubenswrapper[8018]: E0217 15:02:56.179745 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls podName:bf74b8c3-a5a6-4fb9-9d12-3a47c759f699 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.179739106 +0000 UTC m=+16.932082156 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ddgs9" (UID: "bf74b8c3-a5a6-4fb9-9d12-3a47c759f699") : secret "cluster-monitoring-operator-tls" not found
Feb 17 15:02:56.179816 master-0 kubenswrapper[8018]: E0217 15:02:56.179792 8018 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 17 15:02:56.179816 master-0 kubenswrapper[8018]: E0217 15:02:56.179813 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics podName:c6d23570-21d6-4b08-83fc-8b0827c25313 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.179806728 +0000 UTC m=+16.932149778 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-wqxmh" (UID: "c6d23570-21d6-4b08-83fc-8b0827c25313") : secret "marketplace-operator-metrics" not found
Feb 17 15:02:56.180015 master-0 kubenswrapper[8018]: E0217 15:02:56.179841 8018 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 17 15:02:56.180015 master-0 kubenswrapper[8018]: E0217 15:02:56.179859 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls podName:fc76384d-b288-4d30-bc77-f696b62a5f30 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.179854009 +0000 UTC m=+16.932197059 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls") pod "dns-operator-86b8869b79-lmqrr" (UID: "fc76384d-b288-4d30-bc77-f696b62a5f30") : secret "metrics-tls" not found
Feb 17 15:02:56.180015 master-0 kubenswrapper[8018]: I0217 15:02:56.179904 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:02:56.180103 master-0 kubenswrapper[8018]: I0217 15:02:56.180019 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:02:56.180103 master-0 kubenswrapper[8018]: E0217 15:02:56.180081 8018 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 17 15:02:56.180152 master-0 kubenswrapper[8018]: I0217 15:02:56.180097 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:02:56.180152 master-0 kubenswrapper[8018]: E0217 15:02:56.180127 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls podName:187af679-a062-4f41-81f2-33545f76febf nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.180118735 +0000 UTC m=+16.932461775 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-dtwmd" (UID: "187af679-a062-4f41-81f2-33545f76febf") : secret "image-registry-operator-tls" not found
Feb 17 15:02:56.180207 master-0 kubenswrapper[8018]: E0217 15:02:56.180190 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 17 15:02:56.180207 master-0 kubenswrapper[8018]: I0217 15:02:56.180196 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:02:56.180263 master-0 kubenswrapper[8018]: E0217 15:02:56.180225 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert podName:33e819b0-5a3f-4c2d-9dc7-8b0231804cdb nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.180213568 +0000 UTC m=+16.932556628 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-t7n5b" (UID: "33e819b0-5a3f-4c2d-9dc7-8b0231804cdb") : secret "package-server-manager-serving-cert" not found
Feb 17 15:02:56.180263 master-0 kubenswrapper[8018]: E0217 15:02:56.180236 8018 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 17 15:02:56.180263 master-0 kubenswrapper[8018]: E0217 15:02:56.180247 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 17 15:02:56.180263 master-0 kubenswrapper[8018]: I0217 15:02:56.180254 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"
Feb 17 15:02:56.180397 master-0 kubenswrapper[8018]: E0217 15:02:56.180261 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls podName:22a30079-d7fc-49cf-882e-1c5022cb5bf6 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.180254869 +0000 UTC m=+16.932597919 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls") pod "ingress-operator-c588d8cb4-nclxg" (UID: "22a30079-d7fc-49cf-882e-1c5022cb5bf6") : secret "metrics-tls" not found
Feb 17 15:02:56.180397 master-0 kubenswrapper[8018]: E0217 15:02:56.180289 8018 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 17 15:02:56.180397 master-0 kubenswrapper[8018]: E0217 15:02:56.180303 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert podName:08e27254-e906-484a-b346-036f898be3ae nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.18029261 +0000 UTC m=+16.932635670 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert") pod "catalog-operator-588944557d-kjh2v" (UID: "08e27254-e906-484a-b346-036f898be3ae") : secret "catalog-operator-serving-cert" not found
Feb 17 15:02:56.180397 master-0 kubenswrapper[8018]: I0217 15:02:56.180322 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:02:56.180397 master-0 kubenswrapper[8018]: E0217 15:02:56.180368 8018 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 17 15:02:56.180397 master-0 kubenswrapper[8018]: I0217 15:02:56.180383 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:02:56.180397 master-0 kubenswrapper[8018]: E0217 15:02:56.180389 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.180382732 +0000 UTC m=+16.932725782 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "performance-addon-operator-webhook-cert" not found
Feb 17 15:02:56.180657 master-0 kubenswrapper[8018]: E0217 15:02:56.180416 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert podName:4be2df82-c77a-4d26-9498-fa3beea54b81 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.180406543 +0000 UTC m=+16.932749613 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert") pod "cluster-version-operator-76959b6567-v49tq" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81") : secret "cluster-version-operator-serving-cert" not found
Feb 17 15:02:56.180657 master-0 kubenswrapper[8018]: E0217 15:02:56.180424 8018 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 17 15:02:56.180657 master-0 kubenswrapper[8018]: I0217 15:02:56.180433 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:02:56.180657 master-0 kubenswrapper[8018]: E0217 15:02:56.180497 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs podName:fce9579e-7383-421e-95dd-8f8b786817f9 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.180434743 +0000 UTC m=+16.932777793 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs") pod "network-metrics-daemon-bnllz" (UID: "fce9579e-7383-421e-95dd-8f8b786817f9") : secret "metrics-daemon-secret" not found Feb 17 15:02:56.180657 master-0 kubenswrapper[8018]: E0217 15:02:56.180502 8018 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 17 15:02:56.180657 master-0 kubenswrapper[8018]: E0217 15:02:56.180526 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs podName:6b25a72d-965f-415c-abc9-09612859e9e0 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.180519965 +0000 UTC m=+16.932863015 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs") pod "multus-admission-controller-7c64d55f8-fzfsp" (UID: "6b25a72d-965f-415c-abc9-09612859e9e0") : secret "multus-admission-controller-secret" not found Feb 17 15:02:56.296224 master-0 kubenswrapper[8018]: I0217 15:02:56.296142 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:02:56.380659 master-0 kubenswrapper[8018]: I0217 15:02:56.380569 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:02:56.384490 master-0 kubenswrapper[8018]: I0217 15:02:56.384426 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:02:56.525188 master-0 kubenswrapper[8018]: I0217 15:02:56.525069 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 17 15:02:56.525760 master-0 kubenswrapper[8018]: E0217 15:02:56.525723 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 17 15:02:56.526379 master-0 kubenswrapper[8018]: W0217 15:02:56.526349 8018 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 17 15:02:56.526498 master-0 kubenswrapper[8018]: E0217 15:02:56.526405 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Feb 17 15:02:56.528009 master-0 kubenswrapper[8018]: E0217 15:02:56.527973 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:02:56.561102 master-0 kubenswrapper[8018]: I0217 15:02:56.561053 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-bound-sa-token\") pod 
\"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:56.561669 master-0 kubenswrapper[8018]: I0217 15:02:56.561623 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpq86\" (UniqueName: \"kubernetes.io/projected/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-kube-api-access-cpq86\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw" Feb 17 15:02:56.561777 master-0 kubenswrapper[8018]: E0217 15:02:56.561752 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 17 15:02:56.561848 master-0 kubenswrapper[8018]: I0217 15:02:56.561757 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcb68\" (UniqueName: \"kubernetes.io/projected/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-kube-api-access-jcb68\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:02:56.562256 master-0 kubenswrapper[8018]: I0217 15:02:56.562215 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh2m4\" (UniqueName: \"kubernetes.io/projected/31e31afc-79d5-46f4-9835-0fd11da9465f-kube-api-access-jh2m4\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" Feb 17 15:02:56.562623 master-0 kubenswrapper[8018]: I0217 15:02:56.562596 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8wxf\" (UniqueName: 
\"kubernetes.io/projected/08e27254-e906-484a-b346-036f898be3ae-kube-api-access-d8wxf\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:02:56.563220 master-0 kubenswrapper[8018]: I0217 15:02:56.563191 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7brbd\" (UniqueName: \"kubernetes.io/projected/fce9579e-7383-421e-95dd-8f8b786817f9-kube-api-access-7brbd\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:02:56.563388 master-0 kubenswrapper[8018]: I0217 15:02:56.563348 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxqt4\" (UniqueName: \"kubernetes.io/projected/801742a6-3735-4883-9676-e852dc4173d2-kube-api-access-qxqt4\") pod \"csi-snapshot-controller-operator-7b87b97578-9fpgj\" (UID: \"801742a6-3735-4883-9676-e852dc4173d2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj" Feb 17 15:02:56.564120 master-0 kubenswrapper[8018]: I0217 15:02:56.564088 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czt92\" (UniqueName: \"kubernetes.io/projected/c6d23570-21d6-4b08-83fc-8b0827c25313-kube-api-access-czt92\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:02:56.564186 master-0 kubenswrapper[8018]: I0217 15:02:56.564149 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g7zh\" (UniqueName: \"kubernetes.io/projected/65d9f008-7777-48fe-85fe-9d54a7bbcea9-kube-api-access-9g7zh\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " 
pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:02:56.567474 master-0 kubenswrapper[8018]: I0217 15:02:56.564713 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/553d4535-9985-47e2-83ee-8fcfb6035e7b-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" Feb 17 15:02:56.567474 master-0 kubenswrapper[8018]: I0217 15:02:56.564831 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e259b5a1-837b-4cde-85f7-cd5781af08bd-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" Feb 17 15:02:56.567474 master-0 kubenswrapper[8018]: I0217 15:02:56.566156 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgwfb\" (UniqueName: \"kubernetes.io/projected/4fd2c79d-1e10-4f09-8a33-c66598abc99a-kube-api-access-mgwfb\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" Feb 17 15:02:56.567474 master-0 kubenswrapper[8018]: I0217 15:02:56.566758 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wbvx\" (UniqueName: \"kubernetes.io/projected/61d90bf3-02df-48c8-b2ec-09a1653b0800-kube-api-access-5wbvx\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:02:56.575478 master-0 kubenswrapper[8018]: I0217 15:02:56.571369 8018 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rddwz\" (UniqueName: \"kubernetes.io/projected/6c734c89-515e-4ff0-82d1-831ddaf0b99e-kube-api-access-rddwz\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" Feb 17 15:02:56.575478 master-0 kubenswrapper[8018]: I0217 15:02:56.571963 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xbnc\" (UniqueName: \"kubernetes.io/projected/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-kube-api-access-8xbnc\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" Feb 17 15:02:56.575478 master-0 kubenswrapper[8018]: I0217 15:02:56.572633 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv46m\" (UniqueName: \"kubernetes.io/projected/6b25a72d-965f-415c-abc9-09612859e9e0-kube-api-access-fv46m\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" Feb 17 15:02:56.579472 master-0 kubenswrapper[8018]: I0217 15:02:56.577701 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrh2k\" (UniqueName: \"kubernetes.io/projected/071566ae-a9ae-4aa9-9dc3-38602363be72-kube-api-access-hrh2k\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:02:56.585474 master-0 kubenswrapper[8018]: I0217 15:02:56.582498 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpgqg\" (UniqueName: \"kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-kube-api-access-jpgqg\") pod 
\"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:02:56.585474 master-0 kubenswrapper[8018]: I0217 15:02:56.583108 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-562gp\" (UniqueName: \"kubernetes.io/projected/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-kube-api-access-562gp\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:02:56.585474 master-0 kubenswrapper[8018]: I0217 15:02:56.583629 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b167b7b-2280-4c82-ac78-71c57aebe503-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" Feb 17 15:02:56.585474 master-0 kubenswrapper[8018]: I0217 15:02:56.583976 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg8h7\" (UniqueName: \"kubernetes.io/projected/257db04b-7203-4a1d-b3d4-bd4db258a3cc-kube-api-access-jg8h7\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:02:56.585474 master-0 kubenswrapper[8018]: I0217 15:02:56.584385 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn8df\" (UniqueName: \"kubernetes.io/projected/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-kube-api-access-wn8df\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:02:56.585474 master-0 kubenswrapper[8018]: I0217 15:02:56.584859 8018 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw6dc\" (UniqueName: \"kubernetes.io/projected/fc76384d-b288-4d30-bc77-f696b62a5f30-kube-api-access-lw6dc\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:02:56.585474 master-0 kubenswrapper[8018]: I0217 15:02:56.584970 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgs5v\" (UniqueName: \"kubernetes.io/projected/9a905fb6-17d4-413b-9107-859c804ce906-kube-api-access-mgs5v\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:02:56.585474 master-0 kubenswrapper[8018]: I0217 15:02:56.585046 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-bound-sa-token\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:56.585474 master-0 kubenswrapper[8018]: I0217 15:02:56.585473 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bpwm\" (UniqueName: \"kubernetes.io/projected/632fa4c3-b717-432c-8c5f-8d809f69c48b-kube-api-access-8bpwm\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q" Feb 17 15:02:56.589719 master-0 kubenswrapper[8018]: I0217 15:02:56.588820 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bzqs\" (UniqueName: \"kubernetes.io/projected/fb153362-0abb-4aad-8975-532f6e72d032-kube-api-access-7bzqs\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " 
pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:02:56.589719 master-0 kubenswrapper[8018]: I0217 15:02:56.589099 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t2vg\" (UniqueName: \"kubernetes.io/projected/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-kube-api-access-6t2vg\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:02:56.589719 master-0 kubenswrapper[8018]: I0217 15:02:56.589209 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4be2df82-c77a-4d26-9498-fa3beea54b81-kube-api-access\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:02:56.589719 master-0 kubenswrapper[8018]: I0217 15:02:56.589431 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxjqf\" (UniqueName: \"kubernetes.io/projected/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-kube-api-access-gxjqf\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" Feb 17 15:02:56.590640 master-0 kubenswrapper[8018]: I0217 15:02:56.590214 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7w4\" (UniqueName: \"kubernetes.io/projected/af61bda0-c7b4-489d-a671-eaa5299942fe-kube-api-access-jt7w4\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" Feb 17 15:02:56.590640 master-0 kubenswrapper[8018]: I0217 15:02:56.590593 8018 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nzlr\" (UniqueName: \"kubernetes.io/projected/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-kube-api-access-7nzlr\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:02:56.592726 master-0 kubenswrapper[8018]: I0217 15:02:56.592660 8018 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 17 15:02:56.592726 master-0 kubenswrapper[8018]: I0217 15:02:56.592667 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh874\" (UniqueName: \"kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-kube-api-access-bh874\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:02:56.601393 master-0 kubenswrapper[8018]: I0217 15:02:56.601341 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpwhf\" (UniqueName: \"kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf\") pod \"network-check-target-f25s7\" (UID: \"727f20b6-19c7-45eb-a803-6898ecaeffd0\") " pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:02:56.794069 master-0 kubenswrapper[8018]: I0217 15:02:56.794029 8018 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 15:02:56.803498 master-0 kubenswrapper[8018]: I0217 15:02:56.802040 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:02:57.146149 master-0 kubenswrapper[8018]: I0217 15:02:57.145744 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-f25s7"] Feb 17 15:02:57.498219 master-0 kubenswrapper[8018]: I0217 15:02:57.497896 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 17 15:02:57.521963 master-0 kubenswrapper[8018]: I0217 15:02:57.521872 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" event={"ID":"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9","Type":"ContainerStarted","Data":"290f694e7d12ca9521306200e6fad40d6869689c4b381a230ebfe0d9ab67ca09"} Feb 17 15:02:57.531798 master-0 kubenswrapper[8018]: I0217 15:02:57.531568 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" event={"ID":"65d9f008-7777-48fe-85fe-9d54a7bbcea9","Type":"ContainerStarted","Data":"0ca9078aff730fc3a330cc56d95ecaf3845aab699d6709c0f7903274534d22bb"} Feb 17 15:02:57.534228 master-0 kubenswrapper[8018]: I0217 15:02:57.534100 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj" event={"ID":"801742a6-3735-4883-9676-e852dc4173d2","Type":"ContainerStarted","Data":"acb11f90f31b36431471e58a5606b8c3af358cc8197512729e33f3481e310e60"} Feb 17 15:02:57.535200 master-0 kubenswrapper[8018]: I0217 15:02:57.535173 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" event={"ID":"553d4535-9985-47e2-83ee-8fcfb6035e7b","Type":"ContainerStarted","Data":"e25ef4d4de66b3ffd3f590bda032ee8cda9109eed6a05975ad8ed0f50306f95e"} 
Feb 17 15:02:57.542590 master-0 kubenswrapper[8018]: I0217 15:02:57.536497 8018 generic.go:334] "Generic (PLEG): container finished" podID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerID="dbd9a864617d9861c878175db961027136a5f024e25d1d1a8f2532ea54b002da" exitCode=0 Feb 17 15:02:57.542590 master-0 kubenswrapper[8018]: I0217 15:02:57.536583 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerDied","Data":"dbd9a864617d9861c878175db961027136a5f024e25d1d1a8f2532ea54b002da"} Feb 17 15:02:57.545202 master-0 kubenswrapper[8018]: I0217 15:02:57.545167 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" event={"ID":"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda","Type":"ContainerStarted","Data":"66dd210cb26e47fd54a1792f8f197ef08337df2f55d0c4058d8d526e9bd894c8"} Feb 17 15:02:57.546532 master-0 kubenswrapper[8018]: I0217 15:02:57.546505 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" event={"ID":"2b167b7b-2280-4c82-ac78-71c57aebe503","Type":"ContainerStarted","Data":"4c453c258107dc05c66b4fe7dfb751fa16a6ada9afb337ed9bd51bf0bf1e157f"} Feb 17 15:02:57.547488 master-0 kubenswrapper[8018]: I0217 15:02:57.547436 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" event={"ID":"af61bda0-c7b4-489d-a671-eaa5299942fe","Type":"ContainerStarted","Data":"bf1c4446a3533f26fa5487fb18cd78bb806fca2fbee2a1ee4a787dfdef4578a7"} Feb 17 15:02:57.549832 master-0 kubenswrapper[8018]: I0217 15:02:57.549783 8018 generic.go:334] "Generic (PLEG): container finished" podID="6c734c89-515e-4ff0-82d1-831ddaf0b99e" containerID="71bdfb60886bbb8d8fa44c7be910c5770371e11fcb5309d4a7d66f5e45dddf82" 
exitCode=0 Feb 17 15:02:57.549915 master-0 kubenswrapper[8018]: I0217 15:02:57.549889 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" event={"ID":"6c734c89-515e-4ff0-82d1-831ddaf0b99e","Type":"ContainerDied","Data":"71bdfb60886bbb8d8fa44c7be910c5770371e11fcb5309d4a7d66f5e45dddf82"} Feb 17 15:02:57.558730 master-0 kubenswrapper[8018]: I0217 15:02:57.558677 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-f25s7" event={"ID":"727f20b6-19c7-45eb-a803-6898ecaeffd0","Type":"ContainerStarted","Data":"08274ff4e69ed27a276b74fc224c475770680390425c3a58def665b09b0cb69d"} Feb 17 15:02:57.558730 master-0 kubenswrapper[8018]: I0217 15:02:57.558725 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-f25s7" event={"ID":"727f20b6-19c7-45eb-a803-6898ecaeffd0","Type":"ContainerStarted","Data":"ac3405a44e64442f5f84de1f2fe4affb9bf6727f46c3097b260717adce5a4719"} Feb 17 15:02:57.584016 master-0 kubenswrapper[8018]: I0217 15:02:57.578070 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" event={"ID":"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40","Type":"ContainerStarted","Data":"c5052ce7c74d35fd56d2b65c411cf09269d730c14bf385a0a356573ac6d4ae86"} Feb 17 15:02:57.586580 master-0 kubenswrapper[8018]: I0217 15:02:57.586048 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:02:57.594866 master-0 kubenswrapper[8018]: I0217 15:02:57.594822 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:02:58.217251 master-0 kubenswrapper[8018]: I0217 15:02:58.214592 8018 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d"] Feb 17 15:02:58.217251 master-0 kubenswrapper[8018]: E0217 15:02:58.215039 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb8b6b9-b814-4451-98bb-dec174fbf936" containerName="prober" Feb 17 15:02:58.217251 master-0 kubenswrapper[8018]: I0217 15:02:58.215078 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb8b6b9-b814-4451-98bb-dec174fbf936" containerName="prober" Feb 17 15:02:58.217251 master-0 kubenswrapper[8018]: E0217 15:02:58.215088 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" containerName="assisted-installer-controller" Feb 17 15:02:58.217251 master-0 kubenswrapper[8018]: I0217 15:02:58.215096 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" containerName="assisted-installer-controller" Feb 17 15:02:58.217251 master-0 kubenswrapper[8018]: I0217 15:02:58.215156 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="edb8b6b9-b814-4451-98bb-dec174fbf936" containerName="prober" Feb 17 15:02:58.217251 master-0 kubenswrapper[8018]: I0217 15:02:58.215172 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" containerName="assisted-installer-controller" Feb 17 15:02:58.217251 master-0 kubenswrapper[8018]: I0217 15:02:58.215596 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d" Feb 17 15:02:58.220960 master-0 kubenswrapper[8018]: I0217 15:02:58.219128 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 15:02:58.220960 master-0 kubenswrapper[8018]: I0217 15:02:58.219232 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 15:02:58.240648 master-0 kubenswrapper[8018]: I0217 15:02:58.240572 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d"] Feb 17 15:02:58.269308 master-0 kubenswrapper[8018]: I0217 15:02:58.268267 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klfm5\" (UniqueName: \"kubernetes.io/projected/52b28595-f0fc-49e2-9c95-43e5f1eb003f-kube-api-access-klfm5\") pod \"migrator-5bd989df77-hrl5d\" (UID: \"52b28595-f0fc-49e2-9c95-43e5f1eb003f\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d" Feb 17 15:02:58.369756 master-0 kubenswrapper[8018]: I0217 15:02:58.369693 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klfm5\" (UniqueName: \"kubernetes.io/projected/52b28595-f0fc-49e2-9c95-43e5f1eb003f-kube-api-access-klfm5\") pod \"migrator-5bd989df77-hrl5d\" (UID: \"52b28595-f0fc-49e2-9c95-43e5f1eb003f\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d" Feb 17 15:02:58.410712 master-0 kubenswrapper[8018]: I0217 15:02:58.410655 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766"] Feb 17 15:02:58.413590 master-0 kubenswrapper[8018]: I0217 15:02:58.411201 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" Feb 17 15:02:58.438551 master-0 kubenswrapper[8018]: I0217 15:02:58.434421 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766"] Feb 17 15:02:58.447559 master-0 kubenswrapper[8018]: I0217 15:02:58.447191 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klfm5\" (UniqueName: \"kubernetes.io/projected/52b28595-f0fc-49e2-9c95-43e5f1eb003f-kube-api-access-klfm5\") pod \"migrator-5bd989df77-hrl5d\" (UID: \"52b28595-f0fc-49e2-9c95-43e5f1eb003f\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d" Feb 17 15:02:58.475570 master-0 kubenswrapper[8018]: I0217 15:02:58.470110 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbmb9\" (UniqueName: \"kubernetes.io/projected/129dba1e-73df-4ea4-96c0-3eba78d568ba-kube-api-access-rbmb9\") pod \"csi-snapshot-controller-74b6595c6d-q4766\" (UID: \"129dba1e-73df-4ea4-96c0-3eba78d568ba\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" Feb 17 15:02:58.540789 master-0 kubenswrapper[8018]: I0217 15:02:58.540479 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d" Feb 17 15:02:58.570998 master-0 kubenswrapper[8018]: I0217 15:02:58.570898 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbmb9\" (UniqueName: \"kubernetes.io/projected/129dba1e-73df-4ea4-96c0-3eba78d568ba-kube-api-access-rbmb9\") pod \"csi-snapshot-controller-74b6595c6d-q4766\" (UID: \"129dba1e-73df-4ea4-96c0-3eba78d568ba\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" Feb 17 15:02:58.603883 master-0 kubenswrapper[8018]: I0217 15:02:58.603812 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerStarted","Data":"b59bbfb9428af65d3b27dc7307524d7c342a46e0e7de78406b423b4b600990a9"} Feb 17 15:02:58.603883 master-0 kubenswrapper[8018]: I0217 15:02:58.603882 8018 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:02:58.617100 master-0 kubenswrapper[8018]: I0217 15:02:58.617049 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbmb9\" (UniqueName: \"kubernetes.io/projected/129dba1e-73df-4ea4-96c0-3eba78d568ba-kube-api-access-rbmb9\") pod \"csi-snapshot-controller-74b6595c6d-q4766\" (UID: \"129dba1e-73df-4ea4-96c0-3eba78d568ba\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" Feb 17 15:02:58.735992 master-0 kubenswrapper[8018]: I0217 15:02:58.735840 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" Feb 17 15:02:58.767639 master-0 kubenswrapper[8018]: I0217 15:02:58.767328 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d"] Feb 17 15:02:58.784905 master-0 kubenswrapper[8018]: W0217 15:02:58.784853 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52b28595_f0fc_49e2_9c95_43e5f1eb003f.slice/crio-b616967df2f9b9831e325809cacecbe30b62dd3ec32bcf016d1563ff3ad31860 WatchSource:0}: Error finding container b616967df2f9b9831e325809cacecbe30b62dd3ec32bcf016d1563ff3ad31860: Status 404 returned error can't find the container with id b616967df2f9b9831e325809cacecbe30b62dd3ec32bcf016d1563ff3ad31860 Feb 17 15:02:58.918872 master-0 kubenswrapper[8018]: I0217 15:02:58.918039 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766"] Feb 17 15:02:59.090808 master-0 kubenswrapper[8018]: W0217 15:02:59.090682 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod129dba1e_73df_4ea4_96c0_3eba78d568ba.slice/crio-bc1acede92d3904b085d891408e47b6331ba105ca16c08deba24871e1ded582f WatchSource:0}: Error finding container bc1acede92d3904b085d891408e47b6331ba105ca16c08deba24871e1ded582f: Status 404 returned error can't find the container with id bc1acede92d3904b085d891408e47b6331ba105ca16c08deba24871e1ded582f Feb 17 15:02:59.613258 master-0 kubenswrapper[8018]: I0217 15:02:59.613188 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerStarted","Data":"bc1acede92d3904b085d891408e47b6331ba105ca16c08deba24871e1ded582f"} Feb 17 
15:02:59.615148 master-0 kubenswrapper[8018]: I0217 15:02:59.615116 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d" event={"ID":"52b28595-f0fc-49e2-9c95-43e5f1eb003f","Type":"ContainerStarted","Data":"b616967df2f9b9831e325809cacecbe30b62dd3ec32bcf016d1563ff3ad31860"} Feb 17 15:02:59.625646 master-0 kubenswrapper[8018]: I0217 15:02:59.624873 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-qjjb5"] Feb 17 15:02:59.625646 master-0 kubenswrapper[8018]: I0217 15:02:59.625293 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:02:59.627871 master-0 kubenswrapper[8018]: I0217 15:02:59.627750 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 15:02:59.629041 master-0 kubenswrapper[8018]: I0217 15:02:59.628644 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 15:02:59.629041 master-0 kubenswrapper[8018]: I0217 15:02:59.628683 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 15:02:59.629041 master-0 kubenswrapper[8018]: I0217 15:02:59.628846 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 15:02:59.629041 master-0 kubenswrapper[8018]: I0217 15:02:59.628931 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 15:02:59.629041 master-0 kubenswrapper[8018]: I0217 15:02:59.629011 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 15:02:59.684992 master-0 kubenswrapper[8018]: I0217 15:02:59.684917 
8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-qjjb5"] Feb 17 15:02:59.692490 master-0 kubenswrapper[8018]: I0217 15:02:59.692357 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/932cc504-fe4e-4a76-b201-fcb4fd6df73b-serving-cert\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:02:59.692689 master-0 kubenswrapper[8018]: I0217 15:02:59.692519 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:02:59.692689 master-0 kubenswrapper[8018]: I0217 15:02:59.692573 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-client-ca\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:02:59.692689 master-0 kubenswrapper[8018]: I0217 15:02:59.692625 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g92wc\" (UniqueName: \"kubernetes.io/projected/932cc504-fe4e-4a76-b201-fcb4fd6df73b-kube-api-access-g92wc\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:02:59.692689 master-0 kubenswrapper[8018]: I0217 
15:02:59.692660 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-config\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:02:59.793875 master-0 kubenswrapper[8018]: I0217 15:02:59.793824 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g92wc\" (UniqueName: \"kubernetes.io/projected/932cc504-fe4e-4a76-b201-fcb4fd6df73b-kube-api-access-g92wc\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:02:59.793875 master-0 kubenswrapper[8018]: I0217 15:02:59.793872 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-config\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:02:59.794099 master-0 kubenswrapper[8018]: I0217 15:02:59.793923 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/932cc504-fe4e-4a76-b201-fcb4fd6df73b-serving-cert\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:02:59.794099 master-0 kubenswrapper[8018]: I0217 15:02:59.794093 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: 
\"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:02:59.794182 master-0 kubenswrapper[8018]: I0217 15:02:59.794123 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-client-ca\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:02:59.794182 master-0 kubenswrapper[8018]: E0217 15:02:59.794120 8018 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 17 15:02:59.794298 master-0 kubenswrapper[8018]: E0217 15:02:59.794262 8018 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 17 15:02:59.794354 master-0 kubenswrapper[8018]: E0217 15:02:59.794340 8018 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 17 15:02:59.794387 master-0 kubenswrapper[8018]: E0217 15:02:59.794273 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-config podName:932cc504-fe4e-4a76-b201-fcb4fd6df73b nodeName:}" failed. No retries permitted until 2026-02-17 15:03:00.294228132 +0000 UTC m=+13.046571182 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-config") pod "controller-manager-dc99ff586-qjjb5" (UID: "932cc504-fe4e-4a76-b201-fcb4fd6df73b") : configmap "config" not found Feb 17 15:02:59.794387 master-0 kubenswrapper[8018]: E0217 15:02:59.794384 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/932cc504-fe4e-4a76-b201-fcb4fd6df73b-serving-cert podName:932cc504-fe4e-4a76-b201-fcb4fd6df73b nodeName:}" failed. No retries permitted until 2026-02-17 15:03:00.294367245 +0000 UTC m=+13.046710295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/932cc504-fe4e-4a76-b201-fcb4fd6df73b-serving-cert") pod "controller-manager-dc99ff586-qjjb5" (UID: "932cc504-fe4e-4a76-b201-fcb4fd6df73b") : secret "serving-cert" not found Feb 17 15:02:59.794447 master-0 kubenswrapper[8018]: E0217 15:02:59.794395 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-client-ca podName:932cc504-fe4e-4a76-b201-fcb4fd6df73b nodeName:}" failed. No retries permitted until 2026-02-17 15:03:00.294389996 +0000 UTC m=+13.046733046 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-client-ca") pod "controller-manager-dc99ff586-qjjb5" (UID: "932cc504-fe4e-4a76-b201-fcb4fd6df73b") : configmap "client-ca" not found Feb 17 15:02:59.794447 master-0 kubenswrapper[8018]: E0217 15:02:59.794421 8018 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Feb 17 15:02:59.794447 master-0 kubenswrapper[8018]: E0217 15:02:59.794439 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-proxy-ca-bundles podName:932cc504-fe4e-4a76-b201-fcb4fd6df73b nodeName:}" failed. No retries permitted until 2026-02-17 15:03:00.294433077 +0000 UTC m=+13.046776127 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-proxy-ca-bundles") pod "controller-manager-dc99ff586-qjjb5" (UID: "932cc504-fe4e-4a76-b201-fcb4fd6df73b") : configmap "openshift-global-ca" not found Feb 17 15:02:59.817659 master-0 kubenswrapper[8018]: I0217 15:02:59.817617 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g92wc\" (UniqueName: \"kubernetes.io/projected/932cc504-fe4e-4a76-b201-fcb4fd6df73b-kube-api-access-g92wc\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:03:00.301061 master-0 kubenswrapper[8018]: I0217 15:03:00.300994 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/932cc504-fe4e-4a76-b201-fcb4fd6df73b-serving-cert\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " 
pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:03:00.301283 master-0 kubenswrapper[8018]: I0217 15:03:00.301081 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:03:00.301283 master-0 kubenswrapper[8018]: E0217 15:03:00.301177 8018 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Feb 17 15:03:00.301283 master-0 kubenswrapper[8018]: E0217 15:03:00.301186 8018 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 17 15:03:00.301283 master-0 kubenswrapper[8018]: E0217 15:03:00.301225 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-proxy-ca-bundles podName:932cc504-fe4e-4a76-b201-fcb4fd6df73b nodeName:}" failed. No retries permitted until 2026-02-17 15:03:01.301209527 +0000 UTC m=+14.053552577 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-proxy-ca-bundles") pod "controller-manager-dc99ff586-qjjb5" (UID: "932cc504-fe4e-4a76-b201-fcb4fd6df73b") : configmap "openshift-global-ca" not found Feb 17 15:03:00.301283 master-0 kubenswrapper[8018]: E0217 15:03:00.301259 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/932cc504-fe4e-4a76-b201-fcb4fd6df73b-serving-cert podName:932cc504-fe4e-4a76-b201-fcb4fd6df73b nodeName:}" failed. No retries permitted until 2026-02-17 15:03:01.301240587 +0000 UTC m=+14.053583637 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/932cc504-fe4e-4a76-b201-fcb4fd6df73b-serving-cert") pod "controller-manager-dc99ff586-qjjb5" (UID: "932cc504-fe4e-4a76-b201-fcb4fd6df73b") : secret "serving-cert" not found Feb 17 15:03:00.301551 master-0 kubenswrapper[8018]: I0217 15:03:00.301315 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-client-ca\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:03:00.301551 master-0 kubenswrapper[8018]: I0217 15:03:00.301411 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-config\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:03:00.301551 master-0 kubenswrapper[8018]: E0217 15:03:00.301523 8018 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 17 15:03:00.301665 master-0 kubenswrapper[8018]: E0217 15:03:00.301583 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-config podName:932cc504-fe4e-4a76-b201-fcb4fd6df73b nodeName:}" failed. No retries permitted until 2026-02-17 15:03:01.301572875 +0000 UTC m=+14.053915925 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-config") pod "controller-manager-dc99ff586-qjjb5" (UID: "932cc504-fe4e-4a76-b201-fcb4fd6df73b") : configmap "config" not found Feb 17 15:03:00.301665 master-0 kubenswrapper[8018]: E0217 15:03:00.301611 8018 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 17 15:03:00.301665 master-0 kubenswrapper[8018]: E0217 15:03:00.301628 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-client-ca podName:932cc504-fe4e-4a76-b201-fcb4fd6df73b nodeName:}" failed. No retries permitted until 2026-02-17 15:03:01.301622996 +0000 UTC m=+14.053966046 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-client-ca") pod "controller-manager-dc99ff586-qjjb5" (UID: "932cc504-fe4e-4a76-b201-fcb4fd6df73b") : configmap "client-ca" not found Feb 17 15:03:00.620936 master-0 kubenswrapper[8018]: I0217 15:03:00.620860 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-v2h9q" event={"ID":"632fa4c3-b717-432c-8c5f-8d809f69c48b","Type":"ContainerStarted","Data":"8cc23e797d3236d24762e36e827851e06cb26897932f790155e4441afa84ccf0"} Feb 17 15:03:00.811976 master-0 kubenswrapper[8018]: I0217 15:03:00.811910 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-qjjb5"] Feb 17 15:03:00.812238 master-0 kubenswrapper[8018]: E0217 15:03:00.812204 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" 
podUID="932cc504-fe4e-4a76-b201-fcb4fd6df73b" Feb 17 15:03:00.839492 master-0 kubenswrapper[8018]: I0217 15:03:00.838601 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6"] Feb 17 15:03:00.839492 master-0 kubenswrapper[8018]: I0217 15:03:00.839043 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:00.841083 master-0 kubenswrapper[8018]: I0217 15:03:00.841061 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 15:03:00.841279 master-0 kubenswrapper[8018]: I0217 15:03:00.841264 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 15:03:00.841419 master-0 kubenswrapper[8018]: I0217 15:03:00.841377 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 15:03:00.841543 master-0 kubenswrapper[8018]: I0217 15:03:00.841527 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 15:03:00.841641 master-0 kubenswrapper[8018]: I0217 15:03:00.841625 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 15:03:00.887607 master-0 kubenswrapper[8018]: I0217 15:03:00.887436 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6"] Feb 17 15:03:00.890102 master-0 kubenswrapper[8018]: I0217 15:03:00.888560 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:03:00.908378 master-0 kubenswrapper[8018]: I0217 15:03:00.908306 8018 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:03:00.918485 master-0 kubenswrapper[8018]: I0217 15:03:00.918409 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:00.918993 master-0 kubenswrapper[8018]: I0217 15:03:00.918931 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-config\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:00.918993 master-0 kubenswrapper[8018]: I0217 15:03:00.918982 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:00.919165 master-0 kubenswrapper[8018]: I0217 15:03:00.919005 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7zfq\" (UniqueName: \"kubernetes.io/projected/68ee4487-ad81-4dfa-92c7-e9160d756acf-kube-api-access-x7zfq\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:01.020627 master-0 
kubenswrapper[8018]: I0217 15:03:01.020558 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:01.020627 master-0 kubenswrapper[8018]: I0217 15:03:01.020615 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-config\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:01.020857 master-0 kubenswrapper[8018]: I0217 15:03:01.020638 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7zfq\" (UniqueName: \"kubernetes.io/projected/68ee4487-ad81-4dfa-92c7-e9160d756acf-kube-api-access-x7zfq\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:01.020857 master-0 kubenswrapper[8018]: I0217 15:03:01.020708 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:01.020918 master-0 kubenswrapper[8018]: E0217 15:03:01.020882 8018 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 17 15:03:01.020945 master-0 
kubenswrapper[8018]: E0217 15:03:01.020937 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert podName:68ee4487-ad81-4dfa-92c7-e9160d756acf nodeName:}" failed. No retries permitted until 2026-02-17 15:03:01.520920751 +0000 UTC m=+14.273263811 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert") pod "route-controller-manager-69bd477586-66ml6" (UID: "68ee4487-ad81-4dfa-92c7-e9160d756acf") : secret "serving-cert" not found Feb 17 15:03:01.021237 master-0 kubenswrapper[8018]: E0217 15:03:01.021173 8018 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 17 15:03:01.021237 master-0 kubenswrapper[8018]: E0217 15:03:01.021211 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca podName:68ee4487-ad81-4dfa-92c7-e9160d756acf nodeName:}" failed. No retries permitted until 2026-02-17 15:03:01.521200118 +0000 UTC m=+14.273543188 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca") pod "route-controller-manager-69bd477586-66ml6" (UID: "68ee4487-ad81-4dfa-92c7-e9160d756acf") : configmap "client-ca" not found Feb 17 15:03:01.022199 master-0 kubenswrapper[8018]: I0217 15:03:01.022167 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-config\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:01.085055 master-0 kubenswrapper[8018]: I0217 15:03:01.084986 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7zfq\" (UniqueName: \"kubernetes.io/projected/68ee4487-ad81-4dfa-92c7-e9160d756acf-kube-api-access-x7zfq\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:01.118288 master-0 kubenswrapper[8018]: I0217 15:03:01.118221 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-676cd8b9b5-bfm5s"] Feb 17 15:03:01.122019 master-0 kubenswrapper[8018]: I0217 15:03:01.118778 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:03:01.122939 master-0 kubenswrapper[8018]: I0217 15:03:01.122895 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 15:03:01.123027 master-0 kubenswrapper[8018]: I0217 15:03:01.123015 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 15:03:01.123157 master-0 kubenswrapper[8018]: I0217 15:03:01.123128 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 15:03:01.123338 master-0 kubenswrapper[8018]: I0217 15:03:01.123296 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 15:03:01.131974 master-0 kubenswrapper[8018]: I0217 15:03:01.131937 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-676cd8b9b5-bfm5s"] Feb 17 15:03:01.223362 master-0 kubenswrapper[8018]: I0217 15:03:01.223231 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-signing-key\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:03:01.223362 master-0 kubenswrapper[8018]: I0217 15:03:01.223304 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-signing-cabundle\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:03:01.223580 master-0 kubenswrapper[8018]: I0217 15:03:01.223445 8018 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gswxb\" (UniqueName: \"kubernetes.io/projected/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-kube-api-access-gswxb\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:03:01.325350 master-0 kubenswrapper[8018]: I0217 15:03:01.324933 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-client-ca\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:03:01.325350 master-0 kubenswrapper[8018]: I0217 15:03:01.325172 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-signing-key\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:03:01.325350 master-0 kubenswrapper[8018]: E0217 15:03:01.325288 8018 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 17 15:03:01.325350 master-0 kubenswrapper[8018]: E0217 15:03:01.325364 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-client-ca podName:932cc504-fe4e-4a76-b201-fcb4fd6df73b nodeName:}" failed. No retries permitted until 2026-02-17 15:03:03.325345199 +0000 UTC m=+16.077688249 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-client-ca") pod "controller-manager-dc99ff586-qjjb5" (UID: "932cc504-fe4e-4a76-b201-fcb4fd6df73b") : configmap "client-ca" not found Feb 17 15:03:01.325806 master-0 kubenswrapper[8018]: I0217 15:03:01.325765 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-signing-cabundle\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:03:01.325869 master-0 kubenswrapper[8018]: I0217 15:03:01.325836 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-config\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:03:01.325943 master-0 kubenswrapper[8018]: I0217 15:03:01.325925 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gswxb\" (UniqueName: \"kubernetes.io/projected/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-kube-api-access-gswxb\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:03:01.326326 master-0 kubenswrapper[8018]: E0217 15:03:01.326285 8018 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 17 15:03:01.326478 master-0 kubenswrapper[8018]: E0217 15:03:01.326425 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/932cc504-fe4e-4a76-b201-fcb4fd6df73b-serving-cert podName:932cc504-fe4e-4a76-b201-fcb4fd6df73b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:03:03.326395655 +0000 UTC m=+16.078738705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/932cc504-fe4e-4a76-b201-fcb4fd6df73b-serving-cert") pod "controller-manager-dc99ff586-qjjb5" (UID: "932cc504-fe4e-4a76-b201-fcb4fd6df73b") : secret "serving-cert" not found Feb 17 15:03:01.326725 master-0 kubenswrapper[8018]: I0217 15:03:01.326678 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/932cc504-fe4e-4a76-b201-fcb4fd6df73b-serving-cert\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:03:01.327708 master-0 kubenswrapper[8018]: I0217 15:03:01.327049 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:03:01.328032 master-0 kubenswrapper[8018]: I0217 15:03:01.327997 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-signing-cabundle\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:03:01.329097 master-0 kubenswrapper[8018]: I0217 15:03:01.328616 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-config\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " 
pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:03:01.330067 master-0 kubenswrapper[8018]: I0217 15:03:01.330016 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-qjjb5\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:03:01.332874 master-0 kubenswrapper[8018]: I0217 15:03:01.332136 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-signing-key\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:03:01.346204 master-0 kubenswrapper[8018]: I0217 15:03:01.346152 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gswxb\" (UniqueName: \"kubernetes.io/projected/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-kube-api-access-gswxb\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:03:01.444647 master-0 kubenswrapper[8018]: I0217 15:03:01.444268 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:03:01.510231 master-0 kubenswrapper[8018]: I0217 15:03:01.509896 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:03:01.529015 master-0 kubenswrapper[8018]: I0217 15:03:01.528590 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:01.529015 master-0 kubenswrapper[8018]: E0217 15:03:01.528759 8018 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 17 15:03:01.529015 master-0 kubenswrapper[8018]: E0217 15:03:01.528875 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca podName:68ee4487-ad81-4dfa-92c7-e9160d756acf nodeName:}" failed. No retries permitted until 2026-02-17 15:03:02.528846639 +0000 UTC m=+15.281189769 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca") pod "route-controller-manager-69bd477586-66ml6" (UID: "68ee4487-ad81-4dfa-92c7-e9160d756acf") : configmap "client-ca" not found Feb 17 15:03:01.529015 master-0 kubenswrapper[8018]: I0217 15:03:01.528993 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:01.529389 master-0 kubenswrapper[8018]: E0217 15:03:01.529198 8018 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 17 15:03:01.529389 master-0 kubenswrapper[8018]: E0217 15:03:01.529278 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert podName:68ee4487-ad81-4dfa-92c7-e9160d756acf nodeName:}" failed. No retries permitted until 2026-02-17 15:03:02.529256238 +0000 UTC m=+15.281599298 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert") pod "route-controller-manager-69bd477586-66ml6" (UID: "68ee4487-ad81-4dfa-92c7-e9160d756acf") : secret "serving-cert" not found Feb 17 15:03:01.560313 master-0 kubenswrapper[8018]: I0217 15:03:01.560249 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:03:01.591367 master-0 kubenswrapper[8018]: I0217 15:03:01.591315 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:03:01.624816 master-0 kubenswrapper[8018]: I0217 15:03:01.624720 8018 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:03:01.625345 master-0 kubenswrapper[8018]: I0217 15:03:01.624851 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:03:01.633799 master-0 kubenswrapper[8018]: I0217 15:03:01.633672 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:03:01.732558 master-0 kubenswrapper[8018]: I0217 15:03:01.731397 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-config\") pod \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " Feb 17 15:03:01.732558 master-0 kubenswrapper[8018]: I0217 15:03:01.731531 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-proxy-ca-bundles\") pod \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " Feb 17 15:03:01.732558 master-0 kubenswrapper[8018]: I0217 15:03:01.731693 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g92wc\" (UniqueName: \"kubernetes.io/projected/932cc504-fe4e-4a76-b201-fcb4fd6df73b-kube-api-access-g92wc\") pod \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\" (UID: \"932cc504-fe4e-4a76-b201-fcb4fd6df73b\") " Feb 17 15:03:01.740163 master-0 kubenswrapper[8018]: I0217 15:03:01.740076 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-config" (OuterVolumeSpecName: "config") pod "932cc504-fe4e-4a76-b201-fcb4fd6df73b" (UID: "932cc504-fe4e-4a76-b201-fcb4fd6df73b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:03:01.741362 master-0 kubenswrapper[8018]: I0217 15:03:01.741306 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "932cc504-fe4e-4a76-b201-fcb4fd6df73b" (UID: "932cc504-fe4e-4a76-b201-fcb4fd6df73b"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:03:01.748292 master-0 kubenswrapper[8018]: I0217 15:03:01.748223 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/932cc504-fe4e-4a76-b201-fcb4fd6df73b-kube-api-access-g92wc" (OuterVolumeSpecName: "kube-api-access-g92wc") pod "932cc504-fe4e-4a76-b201-fcb4fd6df73b" (UID: "932cc504-fe4e-4a76-b201-fcb4fd6df73b"). InnerVolumeSpecName "kube-api-access-g92wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:03:01.835664 master-0 kubenswrapper[8018]: I0217 15:03:01.835519 8018 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:01.835664 master-0 kubenswrapper[8018]: I0217 15:03:01.835569 8018 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:01.835664 master-0 kubenswrapper[8018]: I0217 15:03:01.835590 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g92wc\" (UniqueName: \"kubernetes.io/projected/932cc504-fe4e-4a76-b201-fcb4fd6df73b-kube-api-access-g92wc\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:02.563777 master-0 kubenswrapper[8018]: I0217 15:03:02.561588 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:02.563777 master-0 kubenswrapper[8018]: I0217 15:03:02.562522 8018 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:02.563777 master-0 kubenswrapper[8018]: E0217 15:03:02.561808 8018 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 17 15:03:02.563777 master-0 kubenswrapper[8018]: E0217 15:03:02.562752 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca podName:68ee4487-ad81-4dfa-92c7-e9160d756acf nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.562734856 +0000 UTC m=+17.315077906 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca") pod "route-controller-manager-69bd477586-66ml6" (UID: "68ee4487-ad81-4dfa-92c7-e9160d756acf") : configmap "client-ca" not found Feb 17 15:03:02.563777 master-0 kubenswrapper[8018]: E0217 15:03:02.562699 8018 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 17 15:03:02.563777 master-0 kubenswrapper[8018]: E0217 15:03:02.562783 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert podName:68ee4487-ad81-4dfa-92c7-e9160d756acf nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.562775117 +0000 UTC m=+17.315118167 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert") pod "route-controller-manager-69bd477586-66ml6" (UID: "68ee4487-ad81-4dfa-92c7-e9160d756acf") : secret "serving-cert" not found Feb 17 15:03:02.630398 master-0 kubenswrapper[8018]: I0217 15:03:02.630347 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-qjjb5" Feb 17 15:03:02.642181 master-0 kubenswrapper[8018]: I0217 15:03:02.631618 8018 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:03:02.682444 master-0 kubenswrapper[8018]: I0217 15:03:02.682144 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-qjjb5"] Feb 17 15:03:02.690379 master-0 kubenswrapper[8018]: I0217 15:03:02.690339 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-qjjb5"] Feb 17 15:03:02.765429 master-0 kubenswrapper[8018]: I0217 15:03:02.765193 8018 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/932cc504-fe4e-4a76-b201-fcb4fd6df73b-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:02.765429 master-0 kubenswrapper[8018]: I0217 15:03:02.765226 8018 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/932cc504-fe4e-4a76-b201-fcb4fd6df73b-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:03.163832 master-0 kubenswrapper[8018]: I0217 15:03:03.163756 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-676cd8b9b5-bfm5s"] Feb 17 15:03:03.444057 master-0 kubenswrapper[8018]: I0217 15:03:03.444012 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="932cc504-fe4e-4a76-b201-fcb4fd6df73b" 
path="/var/lib/kubelet/pods/932cc504-fe4e-4a76-b201-fcb4fd6df73b/volumes" Feb 17 15:03:03.634940 master-0 kubenswrapper[8018]: I0217 15:03:03.634885 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" event={"ID":"b0f95c87-6a4a-44f2-b6d4-18f167ea430f","Type":"ContainerStarted","Data":"0782c7f0d5ddfa48d6cd6d3f38b88b85eb9375711ddb12c97f5638b11c8924d5"} Feb 17 15:03:03.634940 master-0 kubenswrapper[8018]: I0217 15:03:03.634940 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" event={"ID":"b0f95c87-6a4a-44f2-b6d4-18f167ea430f","Type":"ContainerStarted","Data":"3de92b39f5eed6fb2072489b003ac88b141cc4450863a8a84bd84754c9097e8a"} Feb 17 15:03:03.636851 master-0 kubenswrapper[8018]: I0217 15:03:03.636815 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d" event={"ID":"52b28595-f0fc-49e2-9c95-43e5f1eb003f","Type":"ContainerStarted","Data":"428ca76c5e18447c3ad367ca78f0b6952a4c523c970a0f18285dd35ed4b0aca1"} Feb 17 15:03:03.637228 master-0 kubenswrapper[8018]: I0217 15:03:03.636853 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d" event={"ID":"52b28595-f0fc-49e2-9c95-43e5f1eb003f","Type":"ContainerStarted","Data":"4d35dcda9830b92e2a68715e053be7ae7ad7e689cc774beac7574df65e2da582"} Feb 17 15:03:03.638616 master-0 kubenswrapper[8018]: I0217 15:03:03.638589 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerStarted","Data":"49fb045b32e2f71ec7c2565d556ca4beff6373bd7b27c95db6da3102666e0048"} Feb 17 15:03:03.638761 master-0 kubenswrapper[8018]: I0217 15:03:03.638734 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:03:03.640101 master-0 kubenswrapper[8018]: I0217 15:03:03.640066 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerStarted","Data":"99addda3858d20caa2954c52d0e4203716a8b098e6c6d5e147015e80f102e5a9"} Feb 17 15:03:03.641803 master-0 kubenswrapper[8018]: I0217 15:03:03.641746 8018 generic.go:334] "Generic (PLEG): container finished" podID="6c734c89-515e-4ff0-82d1-831ddaf0b99e" containerID="e00b7f9ba119fe3dfcee010018caac115fb3546638de62f638b07484db483416" exitCode=0 Feb 17 15:03:03.641885 master-0 kubenswrapper[8018]: I0217 15:03:03.641804 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" event={"ID":"6c734c89-515e-4ff0-82d1-831ddaf0b99e","Type":"ContainerDied","Data":"e00b7f9ba119fe3dfcee010018caac115fb3546638de62f638b07484db483416"} Feb 17 15:03:03.658362 master-0 kubenswrapper[8018]: I0217 15:03:03.658286 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" podStartSLOduration=2.6582684 podStartE2EDuration="2.6582684s" podCreationTimestamp="2026-02-17 15:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:03:03.656606061 +0000 UTC m=+16.408949121" watchObservedRunningTime="2026-02-17 15:03:03.6582684 +0000 UTC m=+16.410611450" Feb 17 15:03:03.686538 master-0 kubenswrapper[8018]: I0217 15:03:03.686476 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" podStartSLOduration=2.0974703630000002 podStartE2EDuration="5.686442857s" podCreationTimestamp="2026-02-17 15:02:58 +0000 UTC" 
firstStartedPulling="2026-02-17 15:02:59.092589541 +0000 UTC m=+11.844932591" lastFinishedPulling="2026-02-17 15:03:02.681562015 +0000 UTC m=+15.433905085" observedRunningTime="2026-02-17 15:03:03.684772536 +0000 UTC m=+16.437115596" watchObservedRunningTime="2026-02-17 15:03:03.686442857 +0000 UTC m=+16.438785907" Feb 17 15:03:03.708242 master-0 kubenswrapper[8018]: I0217 15:03:03.708110 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d" podStartSLOduration=1.931687978 podStartE2EDuration="5.708082605s" podCreationTimestamp="2026-02-17 15:02:58 +0000 UTC" firstStartedPulling="2026-02-17 15:02:58.786442631 +0000 UTC m=+11.538785681" lastFinishedPulling="2026-02-17 15:03:02.562837258 +0000 UTC m=+15.315180308" observedRunningTime="2026-02-17 15:03:03.705293428 +0000 UTC m=+16.457636488" watchObservedRunningTime="2026-02-17 15:03:03.708082605 +0000 UTC m=+16.460425685" Feb 17 15:03:03.962500 master-0 kubenswrapper[8018]: I0217 15:03:03.962361 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b"] Feb 17 15:03:03.963153 master-0 kubenswrapper[8018]: I0217 15:03:03.963119 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:03.965550 master-0 kubenswrapper[8018]: I0217 15:03:03.965520 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 15:03:03.965689 master-0 kubenswrapper[8018]: I0217 15:03:03.965671 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 15:03:03.965731 master-0 kubenswrapper[8018]: I0217 15:03:03.965686 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 15:03:03.965876 master-0 kubenswrapper[8018]: I0217 15:03:03.965833 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 15:03:03.965909 master-0 kubenswrapper[8018]: I0217 15:03:03.965856 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 15:03:03.971939 master-0 kubenswrapper[8018]: I0217 15:03:03.971909 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b"] Feb 17 15:03:03.985880 master-0 kubenswrapper[8018]: I0217 15:03:03.985589 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 15:03:04.083332 master-0 kubenswrapper[8018]: I0217 15:03:04.083254 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-proxy-ca-bundles\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.083613 master-0 kubenswrapper[8018]: I0217 15:03:04.083553 8018 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-config\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.083725 master-0 kubenswrapper[8018]: I0217 15:03:04.083703 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.083826 master-0 kubenswrapper[8018]: I0217 15:03:04.083790 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zkjf\" (UniqueName: \"kubernetes.io/projected/c95187e2-33d4-4e80-b11c-b8a120808487-kube-api-access-4zkjf\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.083965 master-0 kubenswrapper[8018]: I0217 15:03:04.083926 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.184904 master-0 kubenswrapper[8018]: I0217 15:03:04.184802 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:03:04.184904 master-0 kubenswrapper[8018]: I0217 15:03:04.184876 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-proxy-ca-bundles\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.185171 master-0 kubenswrapper[8018]: E0217 15:03:04.185027 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 17 15:03:04.185171 master-0 kubenswrapper[8018]: E0217 15:03:04.185108 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert podName:33e819b0-5a3f-4c2d-9dc7-8b0231804cdb nodeName:}" failed. No retries permitted until 2026-02-17 15:03:20.185086881 +0000 UTC m=+32.937429941 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-t7n5b" (UID: "33e819b0-5a3f-4c2d-9dc7-8b0231804cdb") : secret "package-server-manager-serving-cert" not found Feb 17 15:03:04.185389 master-0 kubenswrapper[8018]: E0217 15:03:04.185330 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 17 15:03:04.185512 master-0 kubenswrapper[8018]: E0217 15:03:04.185484 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert podName:08e27254-e906-484a-b346-036f898be3ae nodeName:}" failed. No retries permitted until 2026-02-17 15:03:20.185425219 +0000 UTC m=+32.937768309 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert") pod "catalog-operator-588944557d-kjh2v" (UID: "08e27254-e906-484a-b346-036f898be3ae") : secret "catalog-operator-serving-cert" not found Feb 17 15:03:04.186365 master-0 kubenswrapper[8018]: I0217 15:03:04.186312 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-proxy-ca-bundles\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.186441 master-0 kubenswrapper[8018]: I0217 15:03:04.186393 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: 
\"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:03:04.186605 master-0 kubenswrapper[8018]: I0217 15:03:04.186446 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:03:04.186605 master-0 kubenswrapper[8018]: I0217 15:03:04.186506 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:03:04.186605 master-0 kubenswrapper[8018]: I0217 15:03:04.186546 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-config\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.186605 master-0 kubenswrapper[8018]: I0217 15:03:04.186572 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:03:04.186605 master-0 kubenswrapper[8018]: I0217 15:03:04.186594 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" Feb 17 15:03:04.186802 master-0 kubenswrapper[8018]: I0217 15:03:04.186616 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:03:04.186802 master-0 kubenswrapper[8018]: I0217 15:03:04.186639 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:03:04.186802 master-0 kubenswrapper[8018]: I0217 15:03:04.186663 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.186802 master-0 kubenswrapper[8018]: I0217 15:03:04.186694 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " 
pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:03:04.187013 master-0 kubenswrapper[8018]: E0217 15:03:04.186853 8018 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 17 15:03:04.187013 master-0 kubenswrapper[8018]: E0217 15:03:04.186919 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert podName:257db04b-7203-4a1d-b3d4-bd4db258a3cc nodeName:}" failed. No retries permitted until 2026-02-17 15:03:20.186899664 +0000 UTC m=+32.939242754 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert") pod "olm-operator-6b56bd877c-tk8xm" (UID: "257db04b-7203-4a1d-b3d4-bd4db258a3cc") : secret "olm-operator-serving-cert" not found Feb 17 15:03:04.187013 master-0 kubenswrapper[8018]: E0217 15:03:04.186996 8018 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 17 15:03:04.187159 master-0 kubenswrapper[8018]: E0217 15:03:04.187039 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs podName:6b25a72d-965f-415c-abc9-09612859e9e0 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:20.187022957 +0000 UTC m=+32.939366047 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs") pod "multus-admission-controller-7c64d55f8-fzfsp" (UID: "6b25a72d-965f-415c-abc9-09612859e9e0") : secret "multus-admission-controller-secret" not found Feb 17 15:03:04.187159 master-0 kubenswrapper[8018]: E0217 15:03:04.187097 8018 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 17 15:03:04.187159 master-0 kubenswrapper[8018]: E0217 15:03:04.187138 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:20.18712375 +0000 UTC m=+32.939466840 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "performance-addon-operator-webhook-cert" not found Feb 17 15:03:04.187313 master-0 kubenswrapper[8018]: E0217 15:03:04.187255 8018 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 17 15:03:04.187359 master-0 kubenswrapper[8018]: E0217 15:03:04.187334 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls podName:bf74b8c3-a5a6-4fb9-9d12-3a47c759f699 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:20.187310054 +0000 UTC m=+32.939653144 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ddgs9" (UID: "bf74b8c3-a5a6-4fb9-9d12-3a47c759f699") : secret "cluster-monitoring-operator-tls" not found Feb 17 15:03:04.187487 master-0 kubenswrapper[8018]: E0217 15:03:04.187428 8018 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 17 15:03:04.187572 master-0 kubenswrapper[8018]: E0217 15:03:04.187542 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs podName:fce9579e-7383-421e-95dd-8f8b786817f9 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:20.187519639 +0000 UTC m=+32.939862779 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs") pod "network-metrics-daemon-bnllz" (UID: "fce9579e-7383-421e-95dd-8f8b786817f9") : secret "metrics-daemon-secret" not found Feb 17 15:03:04.187655 master-0 kubenswrapper[8018]: I0217 15:03:04.187617 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zkjf\" (UniqueName: \"kubernetes.io/projected/c95187e2-33d4-4e80-b11c-b8a120808487-kube-api-access-4zkjf\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.187729 master-0 kubenswrapper[8018]: I0217 15:03:04.187694 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " 
pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:03:04.187825 master-0 kubenswrapper[8018]: I0217 15:03:04.187792 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-config\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.187890 master-0 kubenswrapper[8018]: I0217 15:03:04.187806 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:03:04.187942 master-0 kubenswrapper[8018]: I0217 15:03:04.187886 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:03:04.188005 master-0 kubenswrapper[8018]: I0217 15:03:04.187950 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.188084 master-0 kubenswrapper[8018]: I0217 15:03:04.188051 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:03:04.188242 master-0 kubenswrapper[8018]: E0217 15:03:04.188200 8018 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 17 15:03:04.188311 master-0 kubenswrapper[8018]: E0217 15:03:04.188258 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert podName:4be2df82-c77a-4d26-9498-fa3beea54b81 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:20.188244077 +0000 UTC m=+32.940587147 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert") pod "cluster-version-operator-76959b6567-v49tq" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81") : secret "cluster-version-operator-serving-cert" not found Feb 17 15:03:04.188311 master-0 kubenswrapper[8018]: E0217 15:03:04.188268 8018 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 17 15:03:04.188311 master-0 kubenswrapper[8018]: E0217 15:03:04.188304 8018 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 17 15:03:04.188473 master-0 kubenswrapper[8018]: E0217 15:03:04.188316 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls podName:fc76384d-b288-4d30-bc77-f696b62a5f30 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:20.188300748 +0000 UTC m=+32.940643838 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls") pod "dns-operator-86b8869b79-lmqrr" (UID: "fc76384d-b288-4d30-bc77-f696b62a5f30") : secret "metrics-tls" not found Feb 17 15:03:04.188473 master-0 kubenswrapper[8018]: E0217 15:03:04.188342 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls podName:071566ae-a9ae-4aa9-9dc3-38602363be72 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:20.188330039 +0000 UTC m=+32.940673129 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-k8xp8" (UID: "071566ae-a9ae-4aa9-9dc3-38602363be72") : secret "node-tuning-operator-tls" not found Feb 17 15:03:04.188473 master-0 kubenswrapper[8018]: E0217 15:03:04.188361 8018 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 17 15:03:04.188473 master-0 kubenswrapper[8018]: E0217 15:03:04.188386 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca podName:c95187e2-33d4-4e80-b11c-b8a120808487 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.68837778 +0000 UTC m=+17.440720840 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca") pod "controller-manager-6fcbb7f9bd-gdt9b" (UID: "c95187e2-33d4-4e80-b11c-b8a120808487") : configmap "client-ca" not found Feb 17 15:03:04.188473 master-0 kubenswrapper[8018]: E0217 15:03:04.188212 8018 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 17 15:03:04.188473 master-0 kubenswrapper[8018]: E0217 15:03:04.188407 8018 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 17 15:03:04.188473 master-0 kubenswrapper[8018]: E0217 15:03:04.188422 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls podName:22a30079-d7fc-49cf-882e-1c5022cb5bf6 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:20.188411691 +0000 UTC m=+32.940754851 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls") pod "ingress-operator-c588d8cb4-nclxg" (UID: "22a30079-d7fc-49cf-882e-1c5022cb5bf6") : secret "metrics-tls" not found Feb 17 15:03:04.188473 master-0 kubenswrapper[8018]: E0217 15:03:04.188445 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics podName:c6d23570-21d6-4b08-83fc-8b0827c25313 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:20.188433591 +0000 UTC m=+32.940776681 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-wqxmh" (UID: "c6d23570-21d6-4b08-83fc-8b0827c25313") : secret "marketplace-operator-metrics" not found Feb 17 15:03:04.188770 master-0 kubenswrapper[8018]: E0217 15:03:04.188499 8018 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 17 15:03:04.188770 master-0 kubenswrapper[8018]: E0217 15:03:04.188536 8018 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 17 15:03:04.188770 master-0 kubenswrapper[8018]: E0217 15:03:04.188545 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert podName:c95187e2-33d4-4e80-b11c-b8a120808487 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:04.688533103 +0000 UTC m=+17.440876193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert") pod "controller-manager-6fcbb7f9bd-gdt9b" (UID: "c95187e2-33d4-4e80-b11c-b8a120808487") : secret "serving-cert" not found Feb 17 15:03:04.188770 master-0 kubenswrapper[8018]: E0217 15:03:04.188566 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls podName:187af679-a062-4f41-81f2-33545f76febf nodeName:}" failed. No retries permitted until 2026-02-17 15:03:20.188556804 +0000 UTC m=+32.940899864 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-dtwmd" (UID: "187af679-a062-4f41-81f2-33545f76febf") : secret "image-registry-operator-tls" not found Feb 17 15:03:04.225250 master-0 kubenswrapper[8018]: I0217 15:03:04.225104 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zkjf\" (UniqueName: \"kubernetes.io/projected/c95187e2-33d4-4e80-b11c-b8a120808487-kube-api-access-4zkjf\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.593290 master-0 kubenswrapper[8018]: I0217 15:03:04.593153 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:04.593516 master-0 kubenswrapper[8018]: E0217 15:03:04.593348 8018 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 17 15:03:04.593516 master-0 kubenswrapper[8018]: I0217 15:03:04.593439 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:04.593596 master-0 kubenswrapper[8018]: E0217 15:03:04.593540 8018 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca podName:68ee4487-ad81-4dfa-92c7-e9160d756acf nodeName:}" failed. No retries permitted until 2026-02-17 15:03:08.593510412 +0000 UTC m=+21.345853502 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca") pod "route-controller-manager-69bd477586-66ml6" (UID: "68ee4487-ad81-4dfa-92c7-e9160d756acf") : configmap "client-ca" not found Feb 17 15:03:04.593646 master-0 kubenswrapper[8018]: E0217 15:03:04.593614 8018 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 17 15:03:04.593688 master-0 kubenswrapper[8018]: E0217 15:03:04.593668 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert podName:68ee4487-ad81-4dfa-92c7-e9160d756acf nodeName:}" failed. No retries permitted until 2026-02-17 15:03:08.593652816 +0000 UTC m=+21.345995866 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert") pod "route-controller-manager-69bd477586-66ml6" (UID: "68ee4487-ad81-4dfa-92c7-e9160d756acf") : secret "serving-cert" not found Feb 17 15:03:04.694084 master-0 kubenswrapper[8018]: I0217 15:03:04.694020 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.694891 master-0 kubenswrapper[8018]: I0217 15:03:04.694148 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:04.694891 master-0 kubenswrapper[8018]: E0217 15:03:04.694273 8018 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 17 15:03:04.694891 master-0 kubenswrapper[8018]: E0217 15:03:04.694326 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca podName:c95187e2-33d4-4e80-b11c-b8a120808487 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:05.694308969 +0000 UTC m=+18.446652019 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca") pod "controller-manager-6fcbb7f9bd-gdt9b" (UID: "c95187e2-33d4-4e80-b11c-b8a120808487") : configmap "client-ca" not found Feb 17 15:03:04.694891 master-0 kubenswrapper[8018]: E0217 15:03:04.694620 8018 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 17 15:03:04.694891 master-0 kubenswrapper[8018]: E0217 15:03:04.694703 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert podName:c95187e2-33d4-4e80-b11c-b8a120808487 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:05.694676978 +0000 UTC m=+18.447020068 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert") pod "controller-manager-6fcbb7f9bd-gdt9b" (UID: "c95187e2-33d4-4e80-b11c-b8a120808487") : secret "serving-cert" not found Feb 17 15:03:05.709803 master-0 kubenswrapper[8018]: I0217 15:03:05.709433 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:05.710432 master-0 kubenswrapper[8018]: I0217 15:03:05.709885 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:05.710432 master-0 
kubenswrapper[8018]: E0217 15:03:05.709601 8018 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 17 15:03:05.710432 master-0 kubenswrapper[8018]: E0217 15:03:05.710317 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca podName:c95187e2-33d4-4e80-b11c-b8a120808487 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:07.710285907 +0000 UTC m=+20.462628957 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca") pod "controller-manager-6fcbb7f9bd-gdt9b" (UID: "c95187e2-33d4-4e80-b11c-b8a120808487") : configmap "client-ca" not found Feb 17 15:03:05.710599 master-0 kubenswrapper[8018]: E0217 15:03:05.710240 8018 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 17 15:03:05.710599 master-0 kubenswrapper[8018]: E0217 15:03:05.710588 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert podName:c95187e2-33d4-4e80-b11c-b8a120808487 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:07.710555653 +0000 UTC m=+20.462898703 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert") pod "controller-manager-6fcbb7f9bd-gdt9b" (UID: "c95187e2-33d4-4e80-b11c-b8a120808487") : secret "serving-cert" not found Feb 17 15:03:06.390640 master-0 kubenswrapper[8018]: I0217 15:03:06.387851 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:03:06.390640 master-0 kubenswrapper[8018]: I0217 15:03:06.390094 8018 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:03:06.413846 master-0 kubenswrapper[8018]: I0217 15:03:06.413794 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:03:06.678747 master-0 kubenswrapper[8018]: I0217 15:03:06.678608 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" event={"ID":"6c734c89-515e-4ff0-82d1-831ddaf0b99e","Type":"ContainerStarted","Data":"ab1f920a647980800ae08efae1274805a32af351c37c8743a9d7313eb1fca48b"} Feb 17 15:03:07.681767 master-0 kubenswrapper[8018]: I0217 15:03:07.681493 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-mzk89_6c734c89-515e-4ff0-82d1-831ddaf0b99e/cluster-olm-operator/0.log" Feb 17 15:03:07.682413 master-0 kubenswrapper[8018]: I0217 15:03:07.682368 8018 generic.go:334] "Generic (PLEG): container finished" podID="6c734c89-515e-4ff0-82d1-831ddaf0b99e" containerID="ab1f920a647980800ae08efae1274805a32af351c37c8743a9d7313eb1fca48b" exitCode=255 Feb 17 15:03:07.682413 master-0 kubenswrapper[8018]: I0217 15:03:07.682402 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" 
event={"ID":"6c734c89-515e-4ff0-82d1-831ddaf0b99e","Type":"ContainerDied","Data":"ab1f920a647980800ae08efae1274805a32af351c37c8743a9d7313eb1fca48b"} Feb 17 15:03:07.682762 master-0 kubenswrapper[8018]: I0217 15:03:07.682721 8018 scope.go:117] "RemoveContainer" containerID="ab1f920a647980800ae08efae1274805a32af351c37c8743a9d7313eb1fca48b" Feb 17 15:03:07.733117 master-0 kubenswrapper[8018]: I0217 15:03:07.733054 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:07.733315 master-0 kubenswrapper[8018]: E0217 15:03:07.733252 8018 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 17 15:03:07.733350 master-0 kubenswrapper[8018]: I0217 15:03:07.733320 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:07.733412 master-0 kubenswrapper[8018]: E0217 15:03:07.733353 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca podName:c95187e2-33d4-4e80-b11c-b8a120808487 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:11.733323508 +0000 UTC m=+24.485666558 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca") pod "controller-manager-6fcbb7f9bd-gdt9b" (UID: "c95187e2-33d4-4e80-b11c-b8a120808487") : configmap "client-ca" not found Feb 17 15:03:07.739050 master-0 kubenswrapper[8018]: I0217 15:03:07.739006 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:08.647730 master-0 kubenswrapper[8018]: I0217 15:03:08.647225 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:08.648008 master-0 kubenswrapper[8018]: E0217 15:03:08.647557 8018 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 17 15:03:08.648061 master-0 kubenswrapper[8018]: E0217 15:03:08.648036 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert podName:68ee4487-ad81-4dfa-92c7-e9160d756acf nodeName:}" failed. No retries permitted until 2026-02-17 15:03:16.647996077 +0000 UTC m=+29.400339217 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert") pod "route-controller-manager-69bd477586-66ml6" (UID: "68ee4487-ad81-4dfa-92c7-e9160d756acf") : secret "serving-cert" not found Feb 17 15:03:08.648194 master-0 kubenswrapper[8018]: I0217 15:03:08.648139 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:08.648406 master-0 kubenswrapper[8018]: E0217 15:03:08.648350 8018 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 17 15:03:08.648534 master-0 kubenswrapper[8018]: E0217 15:03:08.648520 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca podName:68ee4487-ad81-4dfa-92c7-e9160d756acf nodeName:}" failed. No retries permitted until 2026-02-17 15:03:16.648451897 +0000 UTC m=+29.400795027 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca") pod "route-controller-manager-69bd477586-66ml6" (UID: "68ee4487-ad81-4dfa-92c7-e9160d756acf") : configmap "client-ca" not found
Feb 17 15:03:08.690277 master-0 kubenswrapper[8018]: I0217 15:03:08.690218 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-mzk89_6c734c89-515e-4ff0-82d1-831ddaf0b99e/cluster-olm-operator/0.log"
Feb 17 15:03:08.691737 master-0 kubenswrapper[8018]: I0217 15:03:08.691690 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" event={"ID":"6c734c89-515e-4ff0-82d1-831ddaf0b99e","Type":"ContainerStarted","Data":"db0dcecfe2a042268864f0d7f4d56cbdc089e71bde33d4f68886ce775e3eeb52"}
Feb 17 15:03:08.807679 master-0 kubenswrapper[8018]: I0217 15:03:08.807614 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:03:11.798491 master-0 kubenswrapper[8018]: I0217 15:03:11.798374 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b"
Feb 17 15:03:11.799352 master-0 kubenswrapper[8018]: E0217 15:03:11.798795 8018 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 17 15:03:11.799352 master-0 kubenswrapper[8018]: E0217 15:03:11.798912 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca podName:c95187e2-33d4-4e80-b11c-b8a120808487 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:19.798880209 +0000 UTC m=+32.551223299 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca") pod "controller-manager-6fcbb7f9bd-gdt9b" (UID: "c95187e2-33d4-4e80-b11c-b8a120808487") : configmap "client-ca" not found
Feb 17 15:03:14.718552 master-0 kubenswrapper[8018]: I0217 15:03:14.718446 8018 generic.go:334] "Generic (PLEG): container finished" podID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerID="49fb045b32e2f71ec7c2565d556ca4beff6373bd7b27c95db6da3102666e0048" exitCode=0
Feb 17 15:03:14.718552 master-0 kubenswrapper[8018]: I0217 15:03:14.718516 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerDied","Data":"49fb045b32e2f71ec7c2565d556ca4beff6373bd7b27c95db6da3102666e0048"}
Feb 17 15:03:14.719349 master-0 kubenswrapper[8018]: I0217 15:03:14.719173 8018 scope.go:117] "RemoveContainer" containerID="49fb045b32e2f71ec7c2565d556ca4beff6373bd7b27c95db6da3102666e0048"
Feb 17 15:03:14.800796 master-0 kubenswrapper[8018]: I0217 15:03:14.800404 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:03:14.800796 master-0 kubenswrapper[8018]: I0217 15:03:14.800803 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:03:15.209029 master-0 kubenswrapper[8018]: I0217 15:03:15.208971 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-6578c4d554-6jl9n"]
Feb 17 15:03:15.212206 master-0 kubenswrapper[8018]: I0217 15:03:15.209622 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.214740 master-0 kubenswrapper[8018]: I0217 15:03:15.214718 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 17 15:03:15.219326 master-0 kubenswrapper[8018]: I0217 15:03:15.219179 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Feb 17 15:03:15.219516 master-0 kubenswrapper[8018]: I0217 15:03:15.219426 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 17 15:03:15.224152 master-0 kubenswrapper[8018]: I0217 15:03:15.222253 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 17 15:03:15.224152 master-0 kubenswrapper[8018]: I0217 15:03:15.222489 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 17 15:03:15.232231 master-0 kubenswrapper[8018]: I0217 15:03:15.232189 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 17 15:03:15.232502 master-0 kubenswrapper[8018]: I0217 15:03:15.232417 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 17 15:03:15.232634 master-0 kubenswrapper[8018]: I0217 15:03:15.232583 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 17 15:03:15.232806 master-0 kubenswrapper[8018]: I0217 15:03:15.232789 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 17 15:03:15.232998 master-0 kubenswrapper[8018]: I0217 15:03:15.232981 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Feb 17 15:03:15.235050 master-0 kubenswrapper[8018]: I0217 15:03:15.235011 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6578c4d554-6jl9n"]
Feb 17 15:03:15.350981 master-0 kubenswrapper[8018]: I0217 15:03:15.350820 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-image-import-ca\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.350981 master-0 kubenswrapper[8018]: I0217 15:03:15.350861 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-etcd-serving-ca\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.350981 master-0 kubenswrapper[8018]: I0217 15:03:15.350884 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-config\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.350981 master-0 kubenswrapper[8018]: I0217 15:03:15.350936 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-etcd-client\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.350981 master-0 kubenswrapper[8018]: I0217 15:03:15.350953 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-node-pullsecrets\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.351382 master-0 kubenswrapper[8018]: I0217 15:03:15.350999 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.351382 master-0 kubenswrapper[8018]: I0217 15:03:15.351028 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit-dir\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.351382 master-0 kubenswrapper[8018]: I0217 15:03:15.351057 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-trusted-ca-bundle\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.351382 master-0 kubenswrapper[8018]: I0217 15:03:15.351073 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-encryption-config\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.351382 master-0 kubenswrapper[8018]: I0217 15:03:15.351261 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn68r\" (UniqueName: \"kubernetes.io/projected/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-kube-api-access-vn68r\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.351382 master-0 kubenswrapper[8018]: I0217 15:03:15.351375 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.452705 master-0 kubenswrapper[8018]: I0217 15:03:15.452659 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-image-import-ca\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.452915 master-0 kubenswrapper[8018]: I0217 15:03:15.452831 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-etcd-serving-ca\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.452915 master-0 kubenswrapper[8018]: I0217 15:03:15.452855 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-config\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.452915 master-0 kubenswrapper[8018]: I0217 15:03:15.452904 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-etcd-client\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.453145 master-0 kubenswrapper[8018]: I0217 15:03:15.453095 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-node-pullsecrets\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.453230 master-0 kubenswrapper[8018]: I0217 15:03:15.453163 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.453230 master-0 kubenswrapper[8018]: I0217 15:03:15.453215 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit-dir\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.453315 master-0 kubenswrapper[8018]: I0217 15:03:15.453217 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-node-pullsecrets\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.453315 master-0 kubenswrapper[8018]: I0217 15:03:15.453298 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit-dir\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.453397 master-0 kubenswrapper[8018]: E0217 15:03:15.453385 8018 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 17 15:03:15.453445 master-0 kubenswrapper[8018]: E0217 15:03:15.453430 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit podName:8c0b71fc-bdfb-4266-8f6c-210e15f0ead0 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:15.953415584 +0000 UTC m=+28.705758634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit") pod "apiserver-6578c4d554-6jl9n" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0") : configmap "audit-0" not found
Feb 17 15:03:15.453527 master-0 kubenswrapper[8018]: I0217 15:03:15.453444 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-trusted-ca-bundle\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.453527 master-0 kubenswrapper[8018]: I0217 15:03:15.453448 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-image-import-ca\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.453527 master-0 kubenswrapper[8018]: I0217 15:03:15.453480 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-encryption-config\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.453643 master-0 kubenswrapper[8018]: I0217 15:03:15.453631 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-config\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.453788 master-0 kubenswrapper[8018]: I0217 15:03:15.453759 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn68r\" (UniqueName: \"kubernetes.io/projected/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-kube-api-access-vn68r\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.453855 master-0 kubenswrapper[8018]: I0217 15:03:15.453820 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.453995 master-0 kubenswrapper[8018]: E0217 15:03:15.453957 8018 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 17 15:03:15.454057 master-0 kubenswrapper[8018]: E0217 15:03:15.454043 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert podName:8c0b71fc-bdfb-4266-8f6c-210e15f0ead0 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:15.954023338 +0000 UTC m=+28.706366388 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert") pod "apiserver-6578c4d554-6jl9n" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0") : secret "serving-cert" not found
Feb 17 15:03:15.454405 master-0 kubenswrapper[8018]: I0217 15:03:15.454376 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-etcd-serving-ca\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.455044 master-0 kubenswrapper[8018]: I0217 15:03:15.454989 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-trusted-ca-bundle\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.461876 master-0 kubenswrapper[8018]: I0217 15:03:15.461842 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-encryption-config\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.462030 master-0 kubenswrapper[8018]: I0217 15:03:15.461960 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-etcd-client\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.478014 master-0 kubenswrapper[8018]: I0217 15:03:15.477975 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn68r\" (UniqueName: \"kubernetes.io/projected/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-kube-api-access-vn68r\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.689708 master-0 kubenswrapper[8018]: I0217 15:03:15.689612 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 17 15:03:15.690250 master-0 kubenswrapper[8018]: I0217 15:03:15.690202 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 17 15:03:15.692519 master-0 kubenswrapper[8018]: I0217 15:03:15.692472 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Feb 17 15:03:15.703615 master-0 kubenswrapper[8018]: I0217 15:03:15.703553 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 17 15:03:15.724319 master-0 kubenswrapper[8018]: I0217 15:03:15.724236 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerStarted","Data":"b67b9db47d025278eedfe7f04574ddab8f98126aef0c22b6f402dd2396b510a8"}
Feb 17 15:03:15.725569 master-0 kubenswrapper[8018]: I0217 15:03:15.725539 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:03:15.857739 master-0 kubenswrapper[8018]: I0217 15:03:15.857698 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-var-lock\") pod \"installer-1-master-0\" (UID: \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 17 15:03:15.857739 master-0 kubenswrapper[8018]: I0217 15:03:15.857762 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 17 15:03:15.857977 master-0 kubenswrapper[8018]: I0217 15:03:15.857779 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 17 15:03:15.958975 master-0 kubenswrapper[8018]: I0217 15:03:15.958788 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-var-lock\") pod \"installer-1-master-0\" (UID: \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 17 15:03:15.958975 master-0 kubenswrapper[8018]: I0217 15:03:15.958853 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-var-lock\") pod \"installer-1-master-0\" (UID: \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 17 15:03:15.959328 master-0 kubenswrapper[8018]: I0217 15:03:15.959028 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.959328 master-0 kubenswrapper[8018]: I0217 15:03:15.959096 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 17 15:03:15.959328 master-0 kubenswrapper[8018]: E0217 15:03:15.959121 8018 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 17 15:03:15.959328 master-0 kubenswrapper[8018]: I0217 15:03:15.959129 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 17 15:03:15.959328 master-0 kubenswrapper[8018]: I0217 15:03:15.959206 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 17 15:03:15.959328 master-0 kubenswrapper[8018]: E0217 15:03:15.959219 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit podName:8c0b71fc-bdfb-4266-8f6c-210e15f0ead0 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:16.95920166 +0000 UTC m=+29.711544720 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit") pod "apiserver-6578c4d554-6jl9n" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0") : configmap "audit-0" not found
Feb 17 15:03:15.959786 master-0 kubenswrapper[8018]: I0217 15:03:15.959488 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:15.959786 master-0 kubenswrapper[8018]: E0217 15:03:15.959613 8018 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 17 15:03:15.959786 master-0 kubenswrapper[8018]: E0217 15:03:15.959651 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert podName:8c0b71fc-bdfb-4266-8f6c-210e15f0ead0 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:16.959640881 +0000 UTC m=+29.711983941 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert") pod "apiserver-6578c4d554-6jl9n" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0") : secret "serving-cert" not found
Feb 17 15:03:16.228507 master-0 kubenswrapper[8018]: I0217 15:03:16.228275 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 17 15:03:16.306689 master-0 kubenswrapper[8018]: I0217 15:03:16.306649 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 17 15:03:16.531701 master-0 kubenswrapper[8018]: I0217 15:03:16.531233 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 17 15:03:16.540280 master-0 kubenswrapper[8018]: W0217 15:03:16.540231 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2227cd78_2ca2_4a57_90cf_9bccb1a7fb96.slice/crio-52af3dfbfc5cbf5ff7b537f9dbc28ea77baac6fc88f6f51de7838f59c0f56ab1 WatchSource:0}: Error finding container 52af3dfbfc5cbf5ff7b537f9dbc28ea77baac6fc88f6f51de7838f59c0f56ab1: Status 404 returned error can't find the container with id 52af3dfbfc5cbf5ff7b537f9dbc28ea77baac6fc88f6f51de7838f59c0f56ab1
Feb 17 15:03:16.668817 master-0 kubenswrapper[8018]: I0217 15:03:16.668737 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6"
Feb 17 15:03:16.669097 master-0 kubenswrapper[8018]: E0217 15:03:16.668898 8018 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 17 15:03:16.669097 master-0 kubenswrapper[8018]: E0217 15:03:16.668970 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca podName:68ee4487-ad81-4dfa-92c7-e9160d756acf nodeName:}" failed. No retries permitted until 2026-02-17 15:03:32.668950866 +0000 UTC m=+45.421293916 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca") pod "route-controller-manager-69bd477586-66ml6" (UID: "68ee4487-ad81-4dfa-92c7-e9160d756acf") : configmap "client-ca" not found
Feb 17 15:03:16.669235 master-0 kubenswrapper[8018]: I0217 15:03:16.669125 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert\") pod \"route-controller-manager-69bd477586-66ml6\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") " pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6"
Feb 17 15:03:16.669364 master-0 kubenswrapper[8018]: E0217 15:03:16.669319 8018 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 17 15:03:16.669510 master-0 kubenswrapper[8018]: E0217 15:03:16.669443 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert podName:68ee4487-ad81-4dfa-92c7-e9160d756acf nodeName:}" failed. No retries permitted until 2026-02-17 15:03:32.669421097 +0000 UTC m=+45.421764157 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert") pod "route-controller-manager-69bd477586-66ml6" (UID: "68ee4487-ad81-4dfa-92c7-e9160d756acf") : secret "serving-cert" not found
Feb 17 15:03:16.729875 master-0 kubenswrapper[8018]: I0217 15:03:16.729821 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96","Type":"ContainerStarted","Data":"52af3dfbfc5cbf5ff7b537f9dbc28ea77baac6fc88f6f51de7838f59c0f56ab1"}
Feb 17 15:03:16.972694 master-0 kubenswrapper[8018]: I0217 15:03:16.972622 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:16.973008 master-0 kubenswrapper[8018]: E0217 15:03:16.972792 8018 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 17 15:03:16.973008 master-0 kubenswrapper[8018]: E0217 15:03:16.972942 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit podName:8c0b71fc-bdfb-4266-8f6c-210e15f0ead0 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:18.972882423 +0000 UTC m=+31.725225513 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit") pod "apiserver-6578c4d554-6jl9n" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0") : configmap "audit-0" not found
Feb 17 15:03:16.973312 master-0 kubenswrapper[8018]: I0217 15:03:16.973231 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:16.973628 master-0 kubenswrapper[8018]: E0217 15:03:16.973565 8018 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 17 15:03:16.973956 master-0 kubenswrapper[8018]: E0217 15:03:16.973698 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert podName:8c0b71fc-bdfb-4266-8f6c-210e15f0ead0 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:18.973659421 +0000 UTC m=+31.726002521 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert") pod "apiserver-6578c4d554-6jl9n" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0") : secret "serving-cert" not found
Feb 17 15:03:17.738805 master-0 kubenswrapper[8018]: I0217 15:03:17.738420 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96","Type":"ContainerStarted","Data":"ebd1e02590d930a55bd73b8292b9b1ea795c71f1b5084718d3a86a771e618ddd"}
Feb 17 15:03:17.894518 master-0 kubenswrapper[8018]: I0217 15:03:17.894375 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=2.894347425 podStartE2EDuration="2.894347425s" podCreationTimestamp="2026-02-17 15:03:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:03:17.892351978 +0000 UTC m=+30.644695058" watchObservedRunningTime="2026-02-17 15:03:17.894347425 +0000 UTC m=+30.646690515"
Feb 17 15:03:18.689825 master-0 kubenswrapper[8018]: I0217 15:03:18.689768 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-6578c4d554-6jl9n"]
Feb 17 15:03:18.690447 master-0 kubenswrapper[8018]: E0217 15:03:18.690408 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-6578c4d554-6jl9n" podUID="8c0b71fc-bdfb-4266-8f6c-210e15f0ead0"
Feb 17 15:03:18.743751 master-0 kubenswrapper[8018]: I0217 15:03:18.743700 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:18.752344 master-0 kubenswrapper[8018]: I0217 15:03:18.752294 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6578c4d554-6jl9n"
Feb 17 15:03:18.902193 master-0 kubenswrapper[8018]: I0217 15:03:18.902140 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-image-import-ca\") pod \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") "
Feb 17 15:03:18.902193 master-0 kubenswrapper[8018]: I0217 15:03:18.902200 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-etcd-client\") pod \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") "
Feb 17 15:03:18.902416 master-0 kubenswrapper[8018]: I0217 15:03:18.902252 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-etcd-serving-ca\") pod \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") "
Feb 17 15:03:18.902416 master-0 kubenswrapper[8018]: I0217 15:03:18.902275 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-node-pullsecrets\") pod \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") "
Feb 17 15:03:18.902416 master-0 kubenswrapper[8018]: I0217 15:03:18.902304 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit-dir\") pod \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") "
Feb 17 15:03:18.902416 master-0 kubenswrapper[8018]: I0217 15:03:18.902328 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-encryption-config\") pod \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") "
Feb 17 15:03:18.902416 master-0 kubenswrapper[8018]: I0217 15:03:18.902353 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn68r\" (UniqueName: \"kubernetes.io/projected/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-kube-api-access-vn68r\") pod \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") "
Feb 17 15:03:18.902416 master-0 kubenswrapper[8018]: I0217 15:03:18.902378 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-config\") pod \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") "
Feb 17 15:03:18.902416 master-0 kubenswrapper[8018]: I0217 15:03:18.902415 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-trusted-ca-bundle\") pod \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") "
Feb 17 15:03:18.902709 master-0 kubenswrapper[8018]: I0217 15:03:18.902445 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:03:18.902709 master-0 kubenswrapper[8018]: I0217 15:03:18.902550 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:03:18.902709 master-0 kubenswrapper[8018]: I0217 15:03:18.902604 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:03:18.902709 master-0 kubenswrapper[8018]: I0217 15:03:18.902695 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0"). InnerVolumeSpecName "etcd-serving-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:03:18.902922 master-0 kubenswrapper[8018]: I0217 15:03:18.902899 8018 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-image-import-ca\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:18.902972 master-0 kubenswrapper[8018]: I0217 15:03:18.902924 8018 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:18.902972 master-0 kubenswrapper[8018]: I0217 15:03:18.902936 8018 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:18.902972 master-0 kubenswrapper[8018]: I0217 15:03:18.902948 8018 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:18.903104 master-0 kubenswrapper[8018]: I0217 15:03:18.902973 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:03:18.903404 master-0 kubenswrapper[8018]: I0217 15:03:18.903367 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-config" (OuterVolumeSpecName: "config") pod "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:03:18.907085 master-0 kubenswrapper[8018]: I0217 15:03:18.906985 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:03:18.907172 master-0 kubenswrapper[8018]: I0217 15:03:18.907091 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:03:18.907394 master-0 kubenswrapper[8018]: I0217 15:03:18.907361 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-kube-api-access-vn68r" (OuterVolumeSpecName: "kube-api-access-vn68r") pod "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0"). InnerVolumeSpecName "kube-api-access-vn68r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:03:19.004603 master-0 kubenswrapper[8018]: I0217 15:03:19.004021 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n" Feb 17 15:03:19.004861 master-0 kubenswrapper[8018]: E0217 15:03:19.004285 8018 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 17 15:03:19.004861 master-0 kubenswrapper[8018]: I0217 15:03:19.004799 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n" Feb 17 15:03:19.005050 master-0 kubenswrapper[8018]: I0217 15:03:19.005002 8018 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:19.005050 master-0 kubenswrapper[8018]: I0217 15:03:19.005034 8018 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-encryption-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:19.005248 master-0 kubenswrapper[8018]: I0217 15:03:19.005099 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn68r\" (UniqueName: \"kubernetes.io/projected/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-kube-api-access-vn68r\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:19.005248 master-0 kubenswrapper[8018]: I0217 15:03:19.005129 8018 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:19.005248 master-0 kubenswrapper[8018]: I0217 15:03:19.005157 8018 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-etcd-client\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:19.005550 master-0 kubenswrapper[8018]: E0217 15:03:19.005527 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit podName:8c0b71fc-bdfb-4266-8f6c-210e15f0ead0 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:23.005488255 +0000 UTC m=+35.757831345 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit") pod "apiserver-6578c4d554-6jl9n" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0") : configmap "audit-0" not found Feb 17 15:03:19.010400 master-0 kubenswrapper[8018]: I0217 15:03:19.010352 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert\") pod \"apiserver-6578c4d554-6jl9n\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " pod="openshift-apiserver/apiserver-6578c4d554-6jl9n" Feb 17 15:03:19.105857 master-0 kubenswrapper[8018]: I0217 15:03:19.105777 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert\") pod \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\" (UID: \"8c0b71fc-bdfb-4266-8f6c-210e15f0ead0\") " Feb 17 15:03:19.110229 master-0 kubenswrapper[8018]: I0217 15:03:19.110161 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0" (UID: "8c0b71fc-bdfb-4266-8f6c-210e15f0ead0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:03:19.207717 master-0 kubenswrapper[8018]: I0217 15:03:19.207625 8018 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:19.504305 master-0 kubenswrapper[8018]: I0217 15:03:19.504174 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b"] Feb 17 15:03:19.504782 master-0 kubenswrapper[8018]: E0217 15:03:19.504662 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" podUID="c95187e2-33d4-4e80-b11c-b8a120808487" Feb 17 15:03:19.748497 master-0 kubenswrapper[8018]: I0217 15:03:19.748284 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:19.748497 master-0 kubenswrapper[8018]: I0217 15:03:19.748301 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6578c4d554-6jl9n" Feb 17 15:03:19.764547 master-0 kubenswrapper[8018]: I0217 15:03:19.764469 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:19.781185 master-0 kubenswrapper[8018]: I0217 15:03:19.777145 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6"] Feb 17 15:03:19.781185 master-0 kubenswrapper[8018]: E0217 15:03:19.777532 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" podUID="68ee4487-ad81-4dfa-92c7-e9160d756acf" Feb 17 15:03:19.820081 master-0 kubenswrapper[8018]: I0217 15:03:19.820006 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:19.821084 master-0 kubenswrapper[8018]: I0217 15:03:19.821032 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca\") pod \"controller-manager-6fcbb7f9bd-gdt9b\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b" Feb 17 15:03:19.903546 master-0 kubenswrapper[8018]: I0217 15:03:19.902617 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-6578c4d554-6jl9n"] Feb 17 15:03:19.920889 master-0 kubenswrapper[8018]: I0217 15:03:19.920821 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zkjf\" (UniqueName: \"kubernetes.io/projected/c95187e2-33d4-4e80-b11c-b8a120808487-kube-api-access-4zkjf\") 
pod \"c95187e2-33d4-4e80-b11c-b8a120808487\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " Feb 17 15:03:19.921100 master-0 kubenswrapper[8018]: I0217 15:03:19.920952 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-proxy-ca-bundles\") pod \"c95187e2-33d4-4e80-b11c-b8a120808487\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " Feb 17 15:03:19.921100 master-0 kubenswrapper[8018]: I0217 15:03:19.921017 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-config\") pod \"c95187e2-33d4-4e80-b11c-b8a120808487\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " Feb 17 15:03:19.921229 master-0 kubenswrapper[8018]: I0217 15:03:19.921103 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert\") pod \"c95187e2-33d4-4e80-b11c-b8a120808487\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " Feb 17 15:03:19.921229 master-0 kubenswrapper[8018]: I0217 15:03:19.921185 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca\") pod \"c95187e2-33d4-4e80-b11c-b8a120808487\" (UID: \"c95187e2-33d4-4e80-b11c-b8a120808487\") " Feb 17 15:03:19.922184 master-0 kubenswrapper[8018]: I0217 15:03:19.922130 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c95187e2-33d4-4e80-b11c-b8a120808487" (UID: "c95187e2-33d4-4e80-b11c-b8a120808487"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:03:19.922362 master-0 kubenswrapper[8018]: I0217 15:03:19.922299 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca" (OuterVolumeSpecName: "client-ca") pod "c95187e2-33d4-4e80-b11c-b8a120808487" (UID: "c95187e2-33d4-4e80-b11c-b8a120808487"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:03:19.923371 master-0 kubenswrapper[8018]: I0217 15:03:19.923335 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-config" (OuterVolumeSpecName: "config") pod "c95187e2-33d4-4e80-b11c-b8a120808487" (UID: "c95187e2-33d4-4e80-b11c-b8a120808487"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:03:19.924525 master-0 kubenswrapper[8018]: I0217 15:03:19.924182 8018 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:19.924525 master-0 kubenswrapper[8018]: I0217 15:03:19.924231 8018 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:19.924525 master-0 kubenswrapper[8018]: I0217 15:03:19.924248 8018 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c95187e2-33d4-4e80-b11c-b8a120808487-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:19.925367 master-0 kubenswrapper[8018]: I0217 15:03:19.925164 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-6bd884947c-tdlbn"] Feb 17 15:03:19.926674 master-0 kubenswrapper[8018]: I0217 
15:03:19.926087 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:19.927664 master-0 kubenswrapper[8018]: I0217 15:03:19.927635 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c95187e2-33d4-4e80-b11c-b8a120808487" (UID: "c95187e2-33d4-4e80-b11c-b8a120808487"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:03:19.929547 master-0 kubenswrapper[8018]: I0217 15:03:19.928695 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c95187e2-33d4-4e80-b11c-b8a120808487-kube-api-access-4zkjf" (OuterVolumeSpecName: "kube-api-access-4zkjf") pod "c95187e2-33d4-4e80-b11c-b8a120808487" (UID: "c95187e2-33d4-4e80-b11c-b8a120808487"). InnerVolumeSpecName "kube-api-access-4zkjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:03:19.933375 master-0 kubenswrapper[8018]: I0217 15:03:19.933301 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 15:03:19.933844 master-0 kubenswrapper[8018]: I0217 15:03:19.933833 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 15:03:19.934153 master-0 kubenswrapper[8018]: I0217 15:03:19.934102 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 15:03:19.934438 master-0 kubenswrapper[8018]: I0217 15:03:19.934390 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 15:03:19.934749 master-0 kubenswrapper[8018]: I0217 15:03:19.934696 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 15:03:19.935226 master-0 kubenswrapper[8018]: I0217 15:03:19.935165 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 15:03:19.935436 master-0 kubenswrapper[8018]: I0217 15:03:19.935397 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 15:03:19.935782 master-0 kubenswrapper[8018]: I0217 15:03:19.935728 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 15:03:19.937039 master-0 kubenswrapper[8018]: I0217 15:03:19.936987 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 15:03:19.938137 master-0 kubenswrapper[8018]: I0217 15:03:19.938082 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-6578c4d554-6jl9n"] Feb 17 15:03:19.943570 master-0 kubenswrapper[8018]: I0217 15:03:19.943488 8018 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 15:03:19.975115 master-0 kubenswrapper[8018]: I0217 15:03:19.975054 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6bd884947c-tdlbn"] Feb 17 15:03:20.025768 master-0 kubenswrapper[8018]: I0217 15:03:20.025548 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-encryption-config\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.025768 master-0 kubenswrapper[8018]: I0217 15:03:20.025747 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-image-import-ca\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.026009 master-0 kubenswrapper[8018]: I0217 15:03:20.025793 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-trusted-ca-bundle\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.026009 master-0 kubenswrapper[8018]: I0217 15:03:20.025893 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-etcd-client\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.026009 
master-0 kubenswrapper[8018]: I0217 15:03:20.025930 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-serving-cert\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.026009 master-0 kubenswrapper[8018]: I0217 15:03:20.025961 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d481a79-f565-4c7f-84cc-207fc3117c23-node-pullsecrets\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.026684 master-0 kubenswrapper[8018]: I0217 15:03:20.026037 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-audit\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.026684 master-0 kubenswrapper[8018]: I0217 15:03:20.026059 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d481a79-f565-4c7f-84cc-207fc3117c23-audit-dir\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.026684 master-0 kubenswrapper[8018]: I0217 15:03:20.026100 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-etcd-serving-ca\") pod \"apiserver-6bd884947c-tdlbn\" (UID: 
\"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.026684 master-0 kubenswrapper[8018]: I0217 15:03:20.026140 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2tcz\" (UniqueName: \"kubernetes.io/projected/1d481a79-f565-4c7f-84cc-207fc3117c23-kube-api-access-d2tcz\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.026684 master-0 kubenswrapper[8018]: I0217 15:03:20.026174 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-config\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.026684 master-0 kubenswrapper[8018]: I0217 15:03:20.026244 8018 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95187e2-33d4-4e80-b11c-b8a120808487-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:20.026684 master-0 kubenswrapper[8018]: I0217 15:03:20.026261 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zkjf\" (UniqueName: \"kubernetes.io/projected/c95187e2-33d4-4e80-b11c-b8a120808487-kube-api-access-4zkjf\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:20.127204 master-0 kubenswrapper[8018]: I0217 15:03:20.127145 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-serving-cert\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.127204 master-0 kubenswrapper[8018]: I0217 15:03:20.127195 
8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d481a79-f565-4c7f-84cc-207fc3117c23-node-pullsecrets\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.127499 master-0 kubenswrapper[8018]: I0217 15:03:20.127280 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-audit\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.127704 master-0 kubenswrapper[8018]: I0217 15:03:20.127663 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d481a79-f565-4c7f-84cc-207fc3117c23-audit-dir\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.127776 master-0 kubenswrapper[8018]: I0217 15:03:20.127744 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d481a79-f565-4c7f-84cc-207fc3117c23-audit-dir\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.128620 master-0 kubenswrapper[8018]: I0217 15:03:20.127938 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-etcd-serving-ca\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.128620 master-0 kubenswrapper[8018]: I0217 15:03:20.128001 8018 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d481a79-f565-4c7f-84cc-207fc3117c23-node-pullsecrets\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.128620 master-0 kubenswrapper[8018]: I0217 15:03:20.128016 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2tcz\" (UniqueName: \"kubernetes.io/projected/1d481a79-f565-4c7f-84cc-207fc3117c23-kube-api-access-d2tcz\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.128620 master-0 kubenswrapper[8018]: I0217 15:03:20.128059 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-config\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.128620 master-0 kubenswrapper[8018]: I0217 15:03:20.128188 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-encryption-config\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.128620 master-0 kubenswrapper[8018]: I0217 15:03:20.128375 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-audit\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.128620 master-0 kubenswrapper[8018]: I0217 
15:03:20.128476 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-image-import-ca\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.128620 master-0 kubenswrapper[8018]: I0217 15:03:20.128505 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-trusted-ca-bundle\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.128620 master-0 kubenswrapper[8018]: I0217 15:03:20.128580 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-etcd-serving-ca\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.128620 master-0 kubenswrapper[8018]: I0217 15:03:20.128592 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-etcd-client\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.129049 master-0 kubenswrapper[8018]: I0217 15:03:20.128638 8018 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0-audit\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:20.129094 master-0 kubenswrapper[8018]: I0217 15:03:20.129034 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-config\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.129422 master-0 kubenswrapper[8018]: I0217 15:03:20.129388 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-trusted-ca-bundle\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.130311 master-0 kubenswrapper[8018]: I0217 15:03:20.130266 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-image-import-ca\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.130855 master-0 kubenswrapper[8018]: I0217 15:03:20.130617 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-serving-cert\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.131526 master-0 kubenswrapper[8018]: I0217 15:03:20.131490 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-etcd-client\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.132386 master-0 kubenswrapper[8018]: I0217 15:03:20.132340 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-encryption-config\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.229510 master-0 kubenswrapper[8018]: I0217 15:03:20.229398 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:03:20.229798 master-0 kubenswrapper[8018]: I0217 15:03:20.229576 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:03:20.229798 master-0 kubenswrapper[8018]: I0217 15:03:20.229642 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:03:20.229922 master-0 kubenswrapper[8018]: I0217 15:03:20.229826 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:03:20.229922 master-0 kubenswrapper[8018]: I0217 15:03:20.229897 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:03:20.230269 master-0 kubenswrapper[8018]: I0217 15:03:20.230188 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" Feb 17 15:03:20.230400 master-0 kubenswrapper[8018]: I0217 15:03:20.230306 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:03:20.230400 master-0 kubenswrapper[8018]: I0217 15:03:20.230368 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:03:20.230643 master-0 kubenswrapper[8018]: I0217 15:03:20.230499 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:03:20.230643 master-0 kubenswrapper[8018]: I0217 15:03:20.230577 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:03:20.230833 master-0 kubenswrapper[8018]: I0217 15:03:20.230678 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:03:20.230833 master-0 kubenswrapper[8018]: I0217 15:03:20.230736 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:03:20.231201 master-0 kubenswrapper[8018]: I0217 15:03:20.230990 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: 
\"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:03:20.238426 master-0 kubenswrapper[8018]: I0217 15:03:20.238199 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:03:20.239958 master-0 kubenswrapper[8018]: I0217 15:03:20.239735 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:03:20.240602 master-0 kubenswrapper[8018]: I0217 15:03:20.240303 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:03:20.241411 master-0 kubenswrapper[8018]: I0217 15:03:20.241215 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:03:20.246193 master-0 kubenswrapper[8018]: I0217 15:03:20.246126 8018 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"cluster-version-operator-76959b6567-v49tq\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:03:20.246381 master-0 kubenswrapper[8018]: I0217 15:03:20.246210 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-fzfsp\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" Feb 17 15:03:20.246550 master-0 kubenswrapper[8018]: I0217 15:03:20.246520 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:03:20.246550 master-0 kubenswrapper[8018]: I0217 15:03:20.246535 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:03:20.246758 master-0 kubenswrapper[8018]: I0217 15:03:20.246616 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: 
\"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:03:20.247078 master-0 kubenswrapper[8018]: I0217 15:03:20.246977 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:03:20.247078 master-0 kubenswrapper[8018]: I0217 15:03:20.247006 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:03:20.247238 master-0 kubenswrapper[8018]: I0217 15:03:20.247126 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:03:20.247531 master-0 kubenswrapper[8018]: I0217 15:03:20.247487 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:03:20.367502 master-0 kubenswrapper[8018]: I0217 15:03:20.367419 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2tcz\" 
(UniqueName: \"kubernetes.io/projected/1d481a79-f565-4c7f-84cc-207fc3117c23-kube-api-access-d2tcz\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.479377 master-0 kubenswrapper[8018]: I0217 15:03:20.479291 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"] Feb 17 15:03:20.480331 master-0 kubenswrapper[8018]: I0217 15:03:20.480284 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.483086 master-0 kubenswrapper[8018]: I0217 15:03:20.483036 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 17 15:03:20.483341 master-0 kubenswrapper[8018]: I0217 15:03:20.483273 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 17 15:03:20.483662 master-0 kubenswrapper[8018]: I0217 15:03:20.483581 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 17 15:03:20.483870 master-0 kubenswrapper[8018]: I0217 15:03:20.483832 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 17 15:03:20.490301 master-0 kubenswrapper[8018]: I0217 15:03:20.490230 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:03:20.491330 master-0 kubenswrapper[8018]: I0217 15:03:20.491282 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:03:20.491668 master-0 kubenswrapper[8018]: I0217 15:03:20.491432 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:03:20.491668 master-0 kubenswrapper[8018]: I0217 15:03:20.491588 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:03:20.494154 master-0 kubenswrapper[8018]: I0217 15:03:20.492055 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:03:20.494154 master-0 kubenswrapper[8018]: I0217 15:03:20.493468 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:03:20.494154 master-0 kubenswrapper[8018]: I0217 15:03:20.493592 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:03:20.494870 master-0 kubenswrapper[8018]: I0217 15:03:20.494312 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:03:20.494870 master-0 kubenswrapper[8018]: I0217 15:03:20.494816 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:03:20.495240 master-0 kubenswrapper[8018]: I0217 15:03:20.494875 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" Feb 17 15:03:20.495240 master-0 kubenswrapper[8018]: I0217 15:03:20.495013 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:03:20.495240 master-0 kubenswrapper[8018]: I0217 15:03:20.495125 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:03:20.558724 master-0 kubenswrapper[8018]: I0217 15:03:20.558618 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"] Feb 17 15:03:20.591295 master-0 kubenswrapper[8018]: I0217 15:03:20.590013 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:20.598960 master-0 kubenswrapper[8018]: I0217 15:03:20.597941 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 17 15:03:20.605837 master-0 kubenswrapper[8018]: I0217 15:03:20.605803 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 17 15:03:20.615364 master-0 kubenswrapper[8018]: I0217 15:03:20.614903 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 17 15:03:20.650048 master-0 kubenswrapper[8018]: I0217 15:03:20.649913 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lwz4\" (UniqueName: \"kubernetes.io/projected/68954d1e-2147-4465-9817-a3c04cbc19b0-kube-api-access-4lwz4\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.650048 master-0 kubenswrapper[8018]: I0217 15:03:20.649985 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/68954d1e-2147-4465-9817-a3c04cbc19b0-cache\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.650048 master-0 kubenswrapper[8018]: I0217 15:03:20.650012 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/68954d1e-2147-4465-9817-a3c04cbc19b0-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.650440 master-0 kubenswrapper[8018]: I0217 15:03:20.650058 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/68954d1e-2147-4465-9817-a3c04cbc19b0-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: 
\"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.650440 master-0 kubenswrapper[8018]: I0217 15:03:20.650081 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/68954d1e-2147-4465-9817-a3c04cbc19b0-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.650440 master-0 kubenswrapper[8018]: I0217 15:03:20.650111 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/68954d1e-2147-4465-9817-a3c04cbc19b0-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.675801 master-0 kubenswrapper[8018]: I0217 15:03:20.674553 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 17 15:03:20.675801 master-0 kubenswrapper[8018]: W0217 15:03:20.675774 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4be2df82_c77a_4d26_9498_fa3beea54b81.slice/crio-7353f5bcae82d0fc43f2cb4200ebc6c45650c202a8783735da86e6a55c164a80 WatchSource:0}: Error finding container 7353f5bcae82d0fc43f2cb4200ebc6c45650c202a8783735da86e6a55c164a80: Status 404 returned error can't find the container with id 7353f5bcae82d0fc43f2cb4200ebc6c45650c202a8783735da86e6a55c164a80 Feb 17 15:03:20.751168 master-0 kubenswrapper[8018]: I0217 15:03:20.750937 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lwz4\" (UniqueName: 
\"kubernetes.io/projected/68954d1e-2147-4465-9817-a3c04cbc19b0-kube-api-access-4lwz4\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.751168 master-0 kubenswrapper[8018]: I0217 15:03:20.750983 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5de71cc1-08c3-4295-ac86-745c9d4fbb46-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\") " pod="openshift-etcd/installer-1-master-0" Feb 17 15:03:20.751168 master-0 kubenswrapper[8018]: I0217 15:03:20.751035 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/68954d1e-2147-4465-9817-a3c04cbc19b0-cache\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.751168 master-0 kubenswrapper[8018]: I0217 15:03:20.751053 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/68954d1e-2147-4465-9817-a3c04cbc19b0-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.751168 master-0 kubenswrapper[8018]: I0217 15:03:20.751088 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5de71cc1-08c3-4295-ac86-745c9d4fbb46-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\") " pod="openshift-etcd/installer-1-master-0" Feb 17 15:03:20.751168 master-0 
kubenswrapper[8018]: I0217 15:03:20.751109 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/68954d1e-2147-4465-9817-a3c04cbc19b0-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.751168 master-0 kubenswrapper[8018]: I0217 15:03:20.751128 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/68954d1e-2147-4465-9817-a3c04cbc19b0-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.751918 master-0 kubenswrapper[8018]: E0217 15:03:20.751317 8018 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Feb 17 15:03:20.751918 master-0 kubenswrapper[8018]: E0217 15:03:20.751366 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68954d1e-2147-4465-9817-a3c04cbc19b0-catalogserver-certs podName:68954d1e-2147-4465-9817-a3c04cbc19b0 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:21.251350591 +0000 UTC m=+34.003693641 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/68954d1e-2147-4465-9817-a3c04cbc19b0-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-jdfsm" (UID: "68954d1e-2147-4465-9817-a3c04cbc19b0") : secret "catalogserver-cert" not found Feb 17 15:03:20.752878 master-0 kubenswrapper[8018]: I0217 15:03:20.752016 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/68954d1e-2147-4465-9817-a3c04cbc19b0-cache\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.752878 master-0 kubenswrapper[8018]: I0217 15:03:20.752099 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/68954d1e-2147-4465-9817-a3c04cbc19b0-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.752878 master-0 kubenswrapper[8018]: I0217 15:03:20.752113 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/68954d1e-2147-4465-9817-a3c04cbc19b0-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.752878 master-0 kubenswrapper[8018]: I0217 15:03:20.752129 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5de71cc1-08c3-4295-ac86-745c9d4fbb46-var-lock\") pod \"installer-1-master-0\" (UID: \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\") " pod="openshift-etcd/installer-1-master-0" Feb 17 15:03:20.752878 
master-0 kubenswrapper[8018]: I0217 15:03:20.752251 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/68954d1e-2147-4465-9817-a3c04cbc19b0-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:03:20.754666 master-0 kubenswrapper[8018]: I0217 15:03:20.754071 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" event={"ID":"4be2df82-c77a-4d26-9498-fa3beea54b81","Type":"ContainerStarted","Data":"7353f5bcae82d0fc43f2cb4200ebc6c45650c202a8783735da86e6a55c164a80"} Feb 17 15:03:20.754666 master-0 kubenswrapper[8018]: I0217 15:03:20.754103 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6" Feb 17 15:03:20.754666 master-0 kubenswrapper[8018]: I0217 15:03:20.754499 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b"
Feb 17 15:03:20.768090 master-0 kubenswrapper[8018]: I0217 15:03:20.768033 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/68954d1e-2147-4465-9817-a3c04cbc19b0-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:03:20.768186 master-0 kubenswrapper[8018]: I0217 15:03:20.768090 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lwz4\" (UniqueName: \"kubernetes.io/projected/68954d1e-2147-4465-9817-a3c04cbc19b0-kube-api-access-4lwz4\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:03:20.787029 master-0 kubenswrapper[8018]: I0217 15:03:20.786987 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6"
Feb 17 15:03:20.810501 master-0 kubenswrapper[8018]: I0217 15:03:20.809897 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:03:20.853828 master-0 kubenswrapper[8018]: I0217 15:03:20.853778 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b"]
Feb 17 15:03:20.854319 master-0 kubenswrapper[8018]: I0217 15:03:20.854280 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5de71cc1-08c3-4295-ac86-745c9d4fbb46-var-lock\") pod \"installer-1-master-0\" (UID: \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\") " pod="openshift-etcd/installer-1-master-0"
Feb 17 15:03:20.854405 master-0 kubenswrapper[8018]: I0217 15:03:20.854382 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5de71cc1-08c3-4295-ac86-745c9d4fbb46-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\") " pod="openshift-etcd/installer-1-master-0"
Feb 17 15:03:20.854542 master-0 kubenswrapper[8018]: I0217 15:03:20.854498 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5de71cc1-08c3-4295-ac86-745c9d4fbb46-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\") " pod="openshift-etcd/installer-1-master-0"
Feb 17 15:03:20.854679 master-0 kubenswrapper[8018]: I0217 15:03:20.854645 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5de71cc1-08c3-4295-ac86-745c9d4fbb46-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\") " pod="openshift-etcd/installer-1-master-0"
Feb 17 15:03:20.854833 master-0 kubenswrapper[8018]: I0217 15:03:20.854805 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5de71cc1-08c3-4295-ac86-745c9d4fbb46-var-lock\") pod \"installer-1-master-0\" (UID: \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\") " pod="openshift-etcd/installer-1-master-0"
Feb 17 15:03:20.873720 master-0 kubenswrapper[8018]: I0217 15:03:20.873629 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b"]
Feb 17 15:03:20.897193 master-0 kubenswrapper[8018]: I0217 15:03:20.897103 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5de71cc1-08c3-4295-ac86-745c9d4fbb46-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\") " pod="openshift-etcd/installer-1-master-0"
Feb 17 15:03:20.957137 master-0 kubenswrapper[8018]: I0217 15:03:20.957088 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zfq\" (UniqueName: \"kubernetes.io/projected/68ee4487-ad81-4dfa-92c7-e9160d756acf-kube-api-access-x7zfq\") pod \"68ee4487-ad81-4dfa-92c7-e9160d756acf\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") "
Feb 17 15:03:20.957415 master-0 kubenswrapper[8018]: I0217 15:03:20.957220 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-config\") pod \"68ee4487-ad81-4dfa-92c7-e9160d756acf\" (UID: \"68ee4487-ad81-4dfa-92c7-e9160d756acf\") "
Feb 17 15:03:20.959487 master-0 kubenswrapper[8018]: I0217 15:03:20.959364 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-config" (OuterVolumeSpecName: "config") pod "68ee4487-ad81-4dfa-92c7-e9160d756acf" (UID: "68ee4487-ad81-4dfa-92c7-e9160d756acf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:03:20.963732 master-0 kubenswrapper[8018]: I0217 15:03:20.963691 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ee4487-ad81-4dfa-92c7-e9160d756acf-kube-api-access-x7zfq" (OuterVolumeSpecName: "kube-api-access-x7zfq") pod "68ee4487-ad81-4dfa-92c7-e9160d756acf" (UID: "68ee4487-ad81-4dfa-92c7-e9160d756acf"). InnerVolumeSpecName "kube-api-access-x7zfq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:03:20.967113 master-0 kubenswrapper[8018]: I0217 15:03:20.966906 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 17 15:03:21.046552 master-0 kubenswrapper[8018]: I0217 15:03:21.041858 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67d67c799d-b9bj6"]
Feb 17 15:03:21.046552 master-0 kubenswrapper[8018]: I0217 15:03:21.042397 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.046552 master-0 kubenswrapper[8018]: I0217 15:03:21.044323 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 17 15:03:21.046552 master-0 kubenswrapper[8018]: I0217 15:03:21.044575 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:03:21.046552 master-0 kubenswrapper[8018]: I0217 15:03:21.044984 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 17 15:03:21.046552 master-0 kubenswrapper[8018]: I0217 15:03:21.045317 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 15:03:21.046552 master-0 kubenswrapper[8018]: I0217 15:03:21.045475 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 15:03:21.058773 master-0 kubenswrapper[8018]: I0217 15:03:21.058740 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zfq\" (UniqueName: \"kubernetes.io/projected/68ee4487-ad81-4dfa-92c7-e9160d756acf-kube-api-access-x7zfq\") on node \"master-0\" DevicePath \"\""
Feb 17 15:03:21.058773 master-0 kubenswrapper[8018]: I0217 15:03:21.058770 8018 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:03:21.060619 master-0 kubenswrapper[8018]: I0217 15:03:21.060569 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 15:03:21.129735 master-0 kubenswrapper[8018]: I0217 15:03:21.129025 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67d67c799d-b9bj6"]
Feb 17 15:03:21.160227 master-0 kubenswrapper[8018]: I0217 15:03:21.160072 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-client-ca\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.160227 master-0 kubenswrapper[8018]: I0217 15:03:21.160165 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec3c02d7-1607-4305-9380-ba8fc6018b60-serving-cert\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.160227 master-0 kubenswrapper[8018]: I0217 15:03:21.160226 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-config\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.160500 master-0 kubenswrapper[8018]: I0217 15:03:21.160246 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxh4f\" (UniqueName: \"kubernetes.io/projected/ec3c02d7-1607-4305-9380-ba8fc6018b60-kube-api-access-fxh4f\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.160500 master-0 kubenswrapper[8018]: I0217 15:03:21.160282 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-proxy-ca-bundles\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.257540 master-0 kubenswrapper[8018]: I0217 15:03:21.257340 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bnllz"]
Feb 17 15:03:21.257540 master-0 kubenswrapper[8018]: I0217 15:03:21.257381 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"]
Feb 17 15:03:21.257540 master-0 kubenswrapper[8018]: I0217 15:03:21.257393 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"]
Feb 17 15:03:21.260922 master-0 kubenswrapper[8018]: I0217 15:03:21.260874 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6bd884947c-tdlbn"]
Feb 17 15:03:21.263702 master-0 kubenswrapper[8018]: I0217 15:03:21.263667 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-client-ca\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.263702 master-0 kubenswrapper[8018]: I0217 15:03:21.263703 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec3c02d7-1607-4305-9380-ba8fc6018b60-serving-cert\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.263804 master-0 kubenswrapper[8018]: I0217 15:03:21.263735 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-config\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.263804 master-0 kubenswrapper[8018]: I0217 15:03:21.263753 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxh4f\" (UniqueName: \"kubernetes.io/projected/ec3c02d7-1607-4305-9380-ba8fc6018b60-kube-api-access-fxh4f\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.263804 master-0 kubenswrapper[8018]: I0217 15:03:21.263778 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-proxy-ca-bundles\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.263804 master-0 kubenswrapper[8018]: I0217 15:03:21.263801 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/68954d1e-2147-4465-9817-a3c04cbc19b0-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:03:21.264086 master-0 kubenswrapper[8018]: I0217 15:03:21.264057 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-86b8869b79-lmqrr"]
Feb 17 15:03:21.264869 master-0 kubenswrapper[8018]: I0217 15:03:21.264837 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-client-ca\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.265073 master-0 kubenswrapper[8018]: I0217 15:03:21.265043 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-config\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.265789 master-0 kubenswrapper[8018]: I0217 15:03:21.265758 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-proxy-ca-bundles\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.267377 master-0 kubenswrapper[8018]: I0217 15:03:21.267342 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/68954d1e-2147-4465-9817-a3c04cbc19b0-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:03:21.269683 master-0 kubenswrapper[8018]: I0217 15:03:21.269638 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec3c02d7-1607-4305-9380-ba8fc6018b60-serving-cert\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.416811 master-0 kubenswrapper[8018]: I0217 15:03:21.416635 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:03:21.447903 master-0 kubenswrapper[8018]: I0217 15:03:21.447786 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c0b71fc-bdfb-4266-8f6c-210e15f0ead0" path="/var/lib/kubelet/pods/8c0b71fc-bdfb-4266-8f6c-210e15f0ead0/volumes"
Feb 17 15:03:21.448336 master-0 kubenswrapper[8018]: I0217 15:03:21.448282 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c95187e2-33d4-4e80-b11c-b8a120808487" path="/var/lib/kubelet/pods/c95187e2-33d4-4e80-b11c-b8a120808487/volumes"
Feb 17 15:03:21.556616 master-0 kubenswrapper[8018]: I0217 15:03:21.556550 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"]
Feb 17 15:03:21.560123 master-0 kubenswrapper[8018]: I0217 15:03:21.559373 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"]
Feb 17 15:03:21.570755 master-0 kubenswrapper[8018]: I0217 15:03:21.563300 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 17 15:03:21.570755 master-0 kubenswrapper[8018]: I0217 15:03:21.563363 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"]
Feb 17 15:03:21.576072 master-0 kubenswrapper[8018]: I0217 15:03:21.575677 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"]
Feb 17 15:03:21.579771 master-0 kubenswrapper[8018]: I0217 15:03:21.578905 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"]
Feb 17 15:03:21.580899 master-0 kubenswrapper[8018]: I0217 15:03:21.580814 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxh4f\" (UniqueName: \"kubernetes.io/projected/ec3c02d7-1607-4305-9380-ba8fc6018b60-kube-api-access-fxh4f\") pod \"controller-manager-67d67c799d-b9bj6\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.582124 master-0 kubenswrapper[8018]: I0217 15:03:21.582036 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"]
Feb 17 15:03:21.584813 master-0 kubenswrapper[8018]: I0217 15:03:21.584367 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"]
Feb 17 15:03:21.613287 master-0 kubenswrapper[8018]: W0217 15:03:21.613237 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22a30079_d7fc_49cf_882e_1c5022cb5bf6.slice/crio-e92e0041b6c4bdb12ce4e7a526a8155669347c6f7534daf537c2b7896eac3825 WatchSource:0}: Error finding container e92e0041b6c4bdb12ce4e7a526a8155669347c6f7534daf537c2b7896eac3825: Status 404 returned error can't find the container with id e92e0041b6c4bdb12ce4e7a526a8155669347c6f7534daf537c2b7896eac3825
Feb 17 15:03:21.614601 master-0 kubenswrapper[8018]: W0217 15:03:21.614576 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b25a72d_965f_415c_abc9_09612859e9e0.slice/crio-cf71d0b2feed9834ef8b72ab6dd9daecd0f98a4f5152569a06e215023a03601e WatchSource:0}: Error finding container cf71d0b2feed9834ef8b72ab6dd9daecd0f98a4f5152569a06e215023a03601e: Status 404 returned error can't find the container with id cf71d0b2feed9834ef8b72ab6dd9daecd0f98a4f5152569a06e215023a03601e
Feb 17 15:03:21.615037 master-0 kubenswrapper[8018]: W0217 15:03:21.615012 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08e27254_e906_484a_b346_036f898be3ae.slice/crio-5864628e0f7acbb3a1150a63134adcb1c6b05e8c9b623b722fd4249df83d522e WatchSource:0}: Error finding container 5864628e0f7acbb3a1150a63134adcb1c6b05e8c9b623b722fd4249df83d522e: Status 404 returned error can't find the container with id 5864628e0f7acbb3a1150a63134adcb1c6b05e8c9b623b722fd4249df83d522e
Feb 17 15:03:21.623900 master-0 kubenswrapper[8018]: W0217 15:03:21.621937 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5de71cc1_08c3_4295_ac86_745c9d4fbb46.slice/crio-0b31871b8085707dfa74452a2934f0c0323ff06325d382d8b3f5e4dc6e4076e7 WatchSource:0}: Error finding container 0b31871b8085707dfa74452a2934f0c0323ff06325d382d8b3f5e4dc6e4076e7: Status 404 returned error can't find the container with id 0b31871b8085707dfa74452a2934f0c0323ff06325d382d8b3f5e4dc6e4076e7
Feb 17 15:03:21.673732 master-0 kubenswrapper[8018]: I0217 15:03:21.673645 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6"
Feb 17 15:03:21.759009 master-0 kubenswrapper[8018]: I0217 15:03:21.758958 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" event={"ID":"22a30079-d7fc-49cf-882e-1c5022cb5bf6","Type":"ContainerStarted","Data":"e92e0041b6c4bdb12ce4e7a526a8155669347c6f7534daf537c2b7896eac3825"}
Feb 17 15:03:21.761054 master-0 kubenswrapper[8018]: I0217 15:03:21.759808 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bnllz" event={"ID":"fce9579e-7383-421e-95dd-8f8b786817f9","Type":"ContainerStarted","Data":"298673e77b46ac4f7d905ff32814664148ad0db661cddcaaee10cf189d3684c5"}
Feb 17 15:03:21.762792 master-0 kubenswrapper[8018]: I0217 15:03:21.761593 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" event={"ID":"fc76384d-b288-4d30-bc77-f696b62a5f30","Type":"ContainerStarted","Data":"88069f4ccbdf201c4be62b11d0e703527a7a79f09f40906dc3a787d78261c8ef"}
Feb 17 15:03:21.762792 master-0 kubenswrapper[8018]: I0217 15:03:21.762598 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" event={"ID":"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699","Type":"ContainerStarted","Data":"906174604cb39234c29ce4879ec0f4d93014bdd017a01d3e85d6c19518222596"}
Feb 17 15:03:21.763923 master-0 kubenswrapper[8018]: I0217 15:03:21.763849 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" event={"ID":"1d481a79-f565-4c7f-84cc-207fc3117c23","Type":"ContainerStarted","Data":"722d47350d1c81810576142df11eff4e518dcde59f93678f428ad5eb7002bb4a"}
Feb 17 15:03:21.766076 master-0 kubenswrapper[8018]: I0217 15:03:21.766024 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" event={"ID":"257db04b-7203-4a1d-b3d4-bd4db258a3cc","Type":"ContainerStarted","Data":"a3a77a00a966d03623fbb6190f7a54610fa74ee604fa29802c44b60a21f260b9"}
Feb 17 15:03:21.768008 master-0 kubenswrapper[8018]: I0217 15:03:21.767984 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" event={"ID":"08e27254-e906-484a-b346-036f898be3ae","Type":"ContainerStarted","Data":"5864628e0f7acbb3a1150a63134adcb1c6b05e8c9b623b722fd4249df83d522e"}
Feb 17 15:03:21.770604 master-0 kubenswrapper[8018]: I0217 15:03:21.770547 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" event={"ID":"187af679-a062-4f41-81f2-33545f76febf","Type":"ContainerStarted","Data":"a681cbc579a95de476c193412db5500c7b6a259702d2ab059c0ee35c97e7da06"}
Feb 17 15:03:21.771625 master-0 kubenswrapper[8018]: I0217 15:03:21.771599 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" event={"ID":"6b25a72d-965f-415c-abc9-09612859e9e0","Type":"ContainerStarted","Data":"cf71d0b2feed9834ef8b72ab6dd9daecd0f98a4f5152569a06e215023a03601e"}
Feb 17 15:03:21.772681 master-0 kubenswrapper[8018]: I0217 15:03:21.772650 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" event={"ID":"071566ae-a9ae-4aa9-9dc3-38602363be72","Type":"ContainerStarted","Data":"6968fe4893506f2c7eff240b0f99304a06f7947186a1a85995eef13747cf455c"}
Feb 17 15:03:21.774137 master-0 kubenswrapper[8018]: I0217 15:03:21.774084 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" event={"ID":"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb","Type":"ContainerStarted","Data":"4ae9c7ad8143a0b1cfbbc04f9419df3b288d0c3ef1448b00390641786802dac4"}
Feb 17 15:03:21.775258 master-0 kubenswrapper[8018]: I0217 15:03:21.775210 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"5de71cc1-08c3-4295-ac86-745c9d4fbb46","Type":"ContainerStarted","Data":"0b31871b8085707dfa74452a2934f0c0323ff06325d382d8b3f5e4dc6e4076e7"}
Feb 17 15:03:21.776340 master-0 kubenswrapper[8018]: I0217 15:03:21.776256 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6"
Feb 17 15:03:21.776340 master-0 kubenswrapper[8018]: I0217 15:03:21.776255 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" event={"ID":"c6d23570-21d6-4b08-83fc-8b0827c25313","Type":"ContainerStarted","Data":"57edd3b523cd1b85d285ca94528fb2e1279d3c9bd1b74461a1727888cc91ac92"}
Feb 17 15:03:22.305092 master-0 kubenswrapper[8018]: I0217 15:03:22.304271 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"]
Feb 17 15:03:22.326959 master-0 kubenswrapper[8018]: I0217 15:03:22.325301 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67d67c799d-b9bj6"]
Feb 17 15:03:22.340579 master-0 kubenswrapper[8018]: W0217 15:03:22.340503 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec3c02d7_1607_4305_9380_ba8fc6018b60.slice/crio-e62b0eab173cca0600011cfea6f70c094301da812d91a988a342957fc65633d6 WatchSource:0}: Error finding container e62b0eab173cca0600011cfea6f70c094301da812d91a988a342957fc65633d6: Status 404 returned error can't find the container with id e62b0eab173cca0600011cfea6f70c094301da812d91a988a342957fc65633d6
Feb 17 15:03:22.788079 master-0 kubenswrapper[8018]: I0217 15:03:22.787961 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" event={"ID":"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb","Type":"ContainerStarted","Data":"2f9ba4a97ac9cd770106a84a1110df7d7052e82e019a496ab8462fc28fcb14fd"}
Feb 17 15:03:22.790040 master-0 kubenswrapper[8018]: I0217 15:03:22.789994 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" event={"ID":"68954d1e-2147-4465-9817-a3c04cbc19b0","Type":"ContainerStarted","Data":"2a479ec38bdea4cbd4a7adb238b324911587886dcefb4e0f842be74e764e51d4"}
Feb 17 15:03:22.790040 master-0 kubenswrapper[8018]: I0217 15:03:22.790035 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" event={"ID":"68954d1e-2147-4465-9817-a3c04cbc19b0","Type":"ContainerStarted","Data":"086d9bb4b9a7ac8b6af3cbff40a452b0f16d3de1089172ce89af2a258294dacf"}
Feb 17 15:03:22.791786 master-0 kubenswrapper[8018]: I0217 15:03:22.791734 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"5de71cc1-08c3-4295-ac86-745c9d4fbb46","Type":"ContainerStarted","Data":"107e3fd578a275c186183eec1ef31542c82377b88843f3c540b45cab25720060"}
Feb 17 15:03:22.792953 master-0 kubenswrapper[8018]: I0217 15:03:22.792928 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6" event={"ID":"ec3c02d7-1607-4305-9380-ba8fc6018b60","Type":"ContainerStarted","Data":"e62b0eab173cca0600011cfea6f70c094301da812d91a988a342957fc65633d6"}
Feb 17 15:03:23.079539 master-0 kubenswrapper[8018]: I0217 15:03:23.074546 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6"]
Feb 17 15:03:23.124797 master-0 kubenswrapper[8018]: I0217 15:03:23.124735 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6"]
Feb 17 15:03:23.188957 master-0 kubenswrapper[8018]: I0217 15:03:23.188892 8018 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68ee4487-ad81-4dfa-92c7-e9160d756acf-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 17 15:03:23.188957 master-0 kubenswrapper[8018]: I0217 15:03:23.188929 8018 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68ee4487-ad81-4dfa-92c7-e9160d756acf-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 17 15:03:23.271419 master-0 kubenswrapper[8018]: I0217 15:03:23.271270 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=3.271245794 podStartE2EDuration="3.271245794s" podCreationTimestamp="2026-02-17 15:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:03:23.1876472 +0000 UTC m=+35.939990250" watchObservedRunningTime="2026-02-17 15:03:23.271245794 +0000 UTC m=+36.023588884"
Feb 17 15:03:23.276432 master-0 kubenswrapper[8018]: I0217 15:03:23.276387 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"]
Feb 17 15:03:23.277637 master-0 kubenswrapper[8018]: I0217 15:03:23.277609 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.279610 master-0 kubenswrapper[8018]: I0217 15:03:23.279566 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 17 15:03:23.279898 master-0 kubenswrapper[8018]: I0217 15:03:23.279867 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Feb 17 15:03:23.280625 master-0 kubenswrapper[8018]: I0217 15:03:23.280409 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 17 15:03:23.317363 master-0 kubenswrapper[8018]: I0217 15:03:23.317128 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"]
Feb 17 15:03:23.392147 master-0 kubenswrapper[8018]: I0217 15:03:23.392117 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/50c51fe2-32aa-430f-8da0-7cf3b9519131-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.392241 master-0 kubenswrapper[8018]: I0217 15:03:23.392159 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.392241 master-0 kubenswrapper[8018]: I0217 15:03:23.392181 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/50c51fe2-32aa-430f-8da0-7cf3b9519131-cache\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.392241 master-0 kubenswrapper[8018]: I0217 15:03:23.392198 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g48f\" (UniqueName: \"kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-kube-api-access-8g48f\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.392433 master-0 kubenswrapper[8018]: I0217 15:03:23.392247 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/50c51fe2-32aa-430f-8da0-7cf3b9519131-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.447209 master-0 kubenswrapper[8018]: I0217 15:03:23.447165 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68ee4487-ad81-4dfa-92c7-e9160d756acf" path="/var/lib/kubelet/pods/68ee4487-ad81-4dfa-92c7-e9160d756acf/volumes"
Feb 17 15:03:23.493375 master-0 kubenswrapper[8018]: I0217 15:03:23.493328 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/50c51fe2-32aa-430f-8da0-7cf3b9519131-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.493375 master-0 kubenswrapper[8018]: I0217 15:03:23.493374 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/50c51fe2-32aa-430f-8da0-7cf3b9519131-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.493644 master-0 kubenswrapper[8018]: I0217 15:03:23.493397 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.493644 master-0 kubenswrapper[8018]: I0217 15:03:23.493415 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/50c51fe2-32aa-430f-8da0-7cf3b9519131-cache\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.493644 master-0 kubenswrapper[8018]: I0217 15:03:23.493432 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g48f\" (UniqueName: \"kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-kube-api-access-8g48f\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.493799 master-0 kubenswrapper[8018]: I0217 15:03:23.493753 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/50c51fe2-32aa-430f-8da0-7cf3b9519131-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.493893 master-0 kubenswrapper[8018]: I0217 15:03:23.493874 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/50c51fe2-32aa-430f-8da0-7cf3b9519131-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.493960 master-0 kubenswrapper[8018]: E0217 15:03:23.493941 8018 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap "operator-controller-trusted-ca-bundle" not found
Feb 17 15:03:23.493999 master-0 kubenswrapper[8018]: E0217 15:03:23.493967 8018 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls: configmap "operator-controller-trusted-ca-bundle" not found
Feb 17 15:03:23.494027 master-0 kubenswrapper[8018]: E0217 15:03:23.494014 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-ca-certs podName:50c51fe2-32aa-430f-8da0-7cf3b9519131 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:23.993995854 +0000 UTC m=+36.746338904 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-ca-certs") pod "operator-controller-controller-manager-85c9b89969-4n2ls" (UID: "50c51fe2-32aa-430f-8da0-7cf3b9519131") : configmap "operator-controller-trusted-ca-bundle" not found
Feb 17 15:03:23.494430 master-0 kubenswrapper[8018]: I0217 15:03:23.494388 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/50c51fe2-32aa-430f-8da0-7cf3b9519131-cache\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.698250 master-0 kubenswrapper[8018]: I0217 15:03:23.698208 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g48f\" (UniqueName: \"kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-kube-api-access-8g48f\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:03:23.799680 master-0 kubenswrapper[8018]: I0217 15:03:23.799639 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" event={"ID":"68954d1e-2147-4465-9817-a3c04cbc19b0","Type":"ContainerStarted","Data":"e039cb4463938f81d7404a930ef7ab4b00269f6ed6b9151f252951ea9d381dc4"}
Feb 17 15:03:23.800567 master-0 kubenswrapper[8018]: I0217 15:03:23.799793 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:03:24.000512 master-0 kubenswrapper[8018]: I0217 15:03:24.000334 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName:
\"kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:03:24.000710 master-0 kubenswrapper[8018]: E0217 15:03:24.000608 8018 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap "operator-controller-trusted-ca-bundle" not found Feb 17 15:03:24.000710 master-0 kubenswrapper[8018]: E0217 15:03:24.000670 8018 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls: configmap "operator-controller-trusted-ca-bundle" not found Feb 17 15:03:24.000948 master-0 kubenswrapper[8018]: E0217 15:03:24.000903 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-ca-certs podName:50c51fe2-32aa-430f-8da0-7cf3b9519131 nodeName:}" failed. No retries permitted until 2026-02-17 15:03:25.000863916 +0000 UTC m=+37.753207026 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-ca-certs") pod "operator-controller-controller-manager-85c9b89969-4n2ls" (UID: "50c51fe2-32aa-430f-8da0-7cf3b9519131") : configmap "operator-controller-trusted-ca-bundle" not found Feb 17 15:03:24.030177 master-0 kubenswrapper[8018]: I0217 15:03:24.028932 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg"] Feb 17 15:03:24.030177 master-0 kubenswrapper[8018]: I0217 15:03:24.029469 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:24.032831 master-0 kubenswrapper[8018]: I0217 15:03:24.032627 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 15:03:24.032896 master-0 kubenswrapper[8018]: I0217 15:03:24.032838 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 15:03:24.033024 master-0 kubenswrapper[8018]: I0217 15:03:24.032995 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 15:03:24.036760 master-0 kubenswrapper[8018]: I0217 15:03:24.036321 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 15:03:24.038710 master-0 kubenswrapper[8018]: I0217 15:03:24.038666 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 15:03:24.101722 master-0 kubenswrapper[8018]: I0217 15:03:24.101637 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-config\") pod \"route-controller-manager-6965bd7478-x8mdg\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:24.101722 master-0 kubenswrapper[8018]: I0217 15:03:24.101741 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-client-ca\") pod \"route-controller-manager-6965bd7478-x8mdg\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " 
pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:24.102115 master-0 kubenswrapper[8018]: I0217 15:03:24.101776 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkrdq\" (UniqueName: \"kubernetes.io/projected/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-kube-api-access-kkrdq\") pod \"route-controller-manager-6965bd7478-x8mdg\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:24.102115 master-0 kubenswrapper[8018]: I0217 15:03:24.101810 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-serving-cert\") pod \"route-controller-manager-6965bd7478-x8mdg\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:24.202664 master-0 kubenswrapper[8018]: I0217 15:03:24.202623 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkrdq\" (UniqueName: \"kubernetes.io/projected/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-kube-api-access-kkrdq\") pod \"route-controller-manager-6965bd7478-x8mdg\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:24.202828 master-0 kubenswrapper[8018]: I0217 15:03:24.202674 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-serving-cert\") pod \"route-controller-manager-6965bd7478-x8mdg\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:24.202828 master-0 kubenswrapper[8018]: 
I0217 15:03:24.202699 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-config\") pod \"route-controller-manager-6965bd7478-x8mdg\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:24.202828 master-0 kubenswrapper[8018]: I0217 15:03:24.202749 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-client-ca\") pod \"route-controller-manager-6965bd7478-x8mdg\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:24.203550 master-0 kubenswrapper[8018]: I0217 15:03:24.203525 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-client-ca\") pod \"route-controller-manager-6965bd7478-x8mdg\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:24.204729 master-0 kubenswrapper[8018]: I0217 15:03:24.204702 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-config\") pod \"route-controller-manager-6965bd7478-x8mdg\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:24.207435 master-0 kubenswrapper[8018]: I0217 15:03:24.207410 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-serving-cert\") pod \"route-controller-manager-6965bd7478-x8mdg\" (UID: 
\"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:24.235878 master-0 kubenswrapper[8018]: I0217 15:03:24.235334 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg"] Feb 17 15:03:24.238077 master-0 kubenswrapper[8018]: I0217 15:03:24.238026 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podStartSLOduration=4.238005062 podStartE2EDuration="4.238005062s" podCreationTimestamp="2026-02-17 15:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:03:24.226244219 +0000 UTC m=+36.978587289" watchObservedRunningTime="2026-02-17 15:03:24.238005062 +0000 UTC m=+36.990348112" Feb 17 15:03:24.528507 master-0 kubenswrapper[8018]: I0217 15:03:24.528450 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkrdq\" (UniqueName: \"kubernetes.io/projected/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-kube-api-access-kkrdq\") pod \"route-controller-manager-6965bd7478-x8mdg\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:24.675629 master-0 kubenswrapper[8018]: I0217 15:03:24.673585 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:25.026259 master-0 kubenswrapper[8018]: I0217 15:03:25.026212 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:03:25.045106 master-0 kubenswrapper[8018]: I0217 15:03:25.030301 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:03:25.094870 master-0 kubenswrapper[8018]: I0217 15:03:25.094823 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:03:25.499252 master-0 kubenswrapper[8018]: I0217 15:03:25.499205 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 17 15:03:25.499664 master-0 kubenswrapper[8018]: I0217 15:03:25.499402 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="2227cd78-2ca2-4a57-90cf-9bccb1a7fb96" containerName="installer" containerID="cri-o://ebd1e02590d930a55bd73b8292b9b1ea795c71f1b5084718d3a86a771e618ddd" gracePeriod=30 Feb 17 15:03:25.667778 master-0 kubenswrapper[8018]: I0217 15:03:25.667646 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-865765995-c58rq"] Feb 17 15:03:25.668742 master-0 kubenswrapper[8018]: I0217 15:03:25.668723 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.672300 master-0 kubenswrapper[8018]: I0217 15:03:25.671427 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 15:03:25.672300 master-0 kubenswrapper[8018]: I0217 15:03:25.671538 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 15:03:25.672300 master-0 kubenswrapper[8018]: I0217 15:03:25.671641 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 15:03:25.672300 master-0 kubenswrapper[8018]: I0217 15:03:25.672187 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 15:03:25.672300 master-0 kubenswrapper[8018]: I0217 15:03:25.671899 8018 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 15:03:25.672300 master-0 kubenswrapper[8018]: I0217 15:03:25.671919 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 15:03:25.672300 master-0 kubenswrapper[8018]: I0217 15:03:25.671923 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 15:03:25.677004 master-0 kubenswrapper[8018]: I0217 15:03:25.672333 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 15:03:25.715400 master-0 kubenswrapper[8018]: I0217 15:03:25.715349 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-865765995-c58rq"] Feb 17 15:03:25.744749 master-0 kubenswrapper[8018]: I0217 15:03:25.744683 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/124ba199-b79a-4e5c-8512-cc0ae50f73c8-audit-dir\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.744965 master-0 kubenswrapper[8018]: I0217 15:03:25.744842 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmp42\" (UniqueName: \"kubernetes.io/projected/124ba199-b79a-4e5c-8512-cc0ae50f73c8-kube-api-access-dmp42\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.744965 master-0 kubenswrapper[8018]: I0217 15:03:25.744912 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-audit-policies\") pod \"apiserver-865765995-c58rq\" 
(UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.745051 master-0 kubenswrapper[8018]: I0217 15:03:25.744982 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-trusted-ca-bundle\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.745051 master-0 kubenswrapper[8018]: I0217 15:03:25.745016 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-encryption-config\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.745051 master-0 kubenswrapper[8018]: I0217 15:03:25.745037 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-etcd-client\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.745168 master-0 kubenswrapper[8018]: I0217 15:03:25.745062 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-serving-cert\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.745168 master-0 kubenswrapper[8018]: I0217 15:03:25.745083 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-etcd-serving-ca\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.846120 master-0 kubenswrapper[8018]: I0217 15:03:25.845996 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-serving-cert\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.846120 master-0 kubenswrapper[8018]: I0217 15:03:25.846046 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-etcd-serving-ca\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.846120 master-0 kubenswrapper[8018]: I0217 15:03:25.846074 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/124ba199-b79a-4e5c-8512-cc0ae50f73c8-audit-dir\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.846120 master-0 kubenswrapper[8018]: I0217 15:03:25.846102 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmp42\" (UniqueName: \"kubernetes.io/projected/124ba199-b79a-4e5c-8512-cc0ae50f73c8-kube-api-access-dmp42\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.846120 master-0 kubenswrapper[8018]: I0217 15:03:25.846120 8018 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-audit-policies\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.846476 master-0 kubenswrapper[8018]: I0217 15:03:25.846141 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-trusted-ca-bundle\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.846476 master-0 kubenswrapper[8018]: I0217 15:03:25.846160 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-encryption-config\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.846476 master-0 kubenswrapper[8018]: I0217 15:03:25.846176 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-etcd-client\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.847246 master-0 kubenswrapper[8018]: I0217 15:03:25.847030 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/124ba199-b79a-4e5c-8512-cc0ae50f73c8-audit-dir\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.847246 master-0 
kubenswrapper[8018]: I0217 15:03:25.847146 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-trusted-ca-bundle\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.847638 master-0 kubenswrapper[8018]: I0217 15:03:25.847521 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-etcd-serving-ca\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.847696 master-0 kubenswrapper[8018]: I0217 15:03:25.847655 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-audit-policies\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.850514 master-0 kubenswrapper[8018]: I0217 15:03:25.850097 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-encryption-config\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.850514 master-0 kubenswrapper[8018]: I0217 15:03:25.850149 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-etcd-client\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 
15:03:25.855843 master-0 kubenswrapper[8018]: I0217 15:03:25.855711 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-serving-cert\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.868139 master-0 kubenswrapper[8018]: I0217 15:03:25.868087 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmp42\" (UniqueName: \"kubernetes.io/projected/124ba199-b79a-4e5c-8512-cc0ae50f73c8-kube-api-access-dmp42\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:25.994604 master-0 kubenswrapper[8018]: I0217 15:03:25.994542 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:27.971513 master-0 kubenswrapper[8018]: I0217 15:03:27.966255 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 17 15:03:27.971513 master-0 kubenswrapper[8018]: I0217 15:03:27.967815 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 17 15:03:27.977041 master-0 kubenswrapper[8018]: I0217 15:03:27.976982 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 17 15:03:28.114036 master-0 kubenswrapper[8018]: I0217 15:03:28.113964 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 17 15:03:28.114250 master-0 kubenswrapper[8018]: I0217 15:03:28.114056 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-kube-api-access\") pod \"installer-2-master-0\" (UID: \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 17 15:03:28.114250 master-0 kubenswrapper[8018]: I0217 15:03:28.114099 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-var-lock\") pod \"installer-2-master-0\" (UID: \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 17 15:03:28.215785 master-0 kubenswrapper[8018]: I0217 15:03:28.215706 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 17 15:03:28.215785 master-0 kubenswrapper[8018]: I0217 15:03:28.215779 8018 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-kube-api-access\") pod \"installer-2-master-0\" (UID: \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 17 15:03:28.216041 master-0 kubenswrapper[8018]: I0217 15:03:28.215863 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 17 15:03:28.216041 master-0 kubenswrapper[8018]: I0217 15:03:28.215944 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-var-lock\") pod \"installer-2-master-0\" (UID: \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 17 15:03:28.216041 master-0 kubenswrapper[8018]: I0217 15:03:28.216019 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-var-lock\") pod \"installer-2-master-0\" (UID: \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 17 15:03:28.268636 master-0 kubenswrapper[8018]: I0217 15:03:28.268107 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-kube-api-access\") pod \"installer-2-master-0\" (UID: \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 17 15:03:28.301299 master-0 kubenswrapper[8018]: I0217 15:03:28.301220 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Feb 17 15:03:31.421008 master-0 kubenswrapper[8018]: I0217 15:03:31.420961 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:03:31.512839 master-0 kubenswrapper[8018]: I0217 15:03:31.512801 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:03:33.795555 master-0 kubenswrapper[8018]: I0217 15:03:33.794925 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 17 15:03:33.796162 master-0 kubenswrapper[8018]: I0217 15:03:33.795575 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 17 15:03:33.797214 master-0 kubenswrapper[8018]: I0217 15:03:33.797176 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 17 15:03:33.804920 master-0 kubenswrapper[8018]: I0217 15:03:33.804867 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 17 15:03:33.886242 master-0 kubenswrapper[8018]: I0217 15:03:33.886173 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-var-lock\") pod \"installer-1-master-0\" (UID: \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 17 15:03:33.886426 master-0 kubenswrapper[8018]: I0217 15:03:33.886263 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 17 15:03:33.886426 master-0 kubenswrapper[8018]: I0217 15:03:33.886298 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 17 15:03:33.986666 master-0 kubenswrapper[8018]: I0217 15:03:33.986598 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 17 15:03:33.986666 master-0 kubenswrapper[8018]: I0217 15:03:33.986648 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 17 15:03:33.986969 master-0 kubenswrapper[8018]: I0217 15:03:33.986699 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-var-lock\") pod \"installer-1-master-0\" (UID: \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 17 15:03:33.986969 master-0 kubenswrapper[8018]: I0217 15:03:33.986759 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-var-lock\") pod \"installer-1-master-0\" (UID: \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 17 15:03:33.986969 master-0 kubenswrapper[8018]: I0217 15:03:33.986794 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 17 15:03:34.010835 master-0 kubenswrapper[8018]: I0217 15:03:34.010752 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 17 15:03:34.119416 master-0 kubenswrapper[8018]: I0217 15:03:34.119334 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 17 15:03:35.830646 master-0 kubenswrapper[8018]: I0217 15:03:35.823580 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Feb 17 15:03:35.830646 master-0 kubenswrapper[8018]: I0217 15:03:35.824681 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 17 15:03:35.830646 master-0 kubenswrapper[8018]: I0217 15:03:35.830376 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 17 15:03:35.834091 master-0 kubenswrapper[8018]: I0217 15:03:35.834044 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Feb 17 15:03:35.909035 master-0 kubenswrapper[8018]: I0217 15:03:35.908959 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/580b240a-a806-454d-ab19-8f193a8d9ca2-kube-api-access\") pod \"installer-1-master-0\" (UID: \"580b240a-a806-454d-ab19-8f193a8d9ca2\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 17 15:03:35.909035 master-0 kubenswrapper[8018]: I0217 15:03:35.909037 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/580b240a-a806-454d-ab19-8f193a8d9ca2-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"580b240a-a806-454d-ab19-8f193a8d9ca2\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 17 15:03:35.909247 master-0 kubenswrapper[8018]: I0217 15:03:35.909066 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/580b240a-a806-454d-ab19-8f193a8d9ca2-var-lock\") pod \"installer-1-master-0\" (UID: \"580b240a-a806-454d-ab19-8f193a8d9ca2\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 17 15:03:36.010004 master-0 kubenswrapper[8018]: I0217 15:03:36.009748 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/580b240a-a806-454d-ab19-8f193a8d9ca2-var-lock\") pod \"installer-1-master-0\" (UID: \"580b240a-a806-454d-ab19-8f193a8d9ca2\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 17 15:03:36.010004 master-0 kubenswrapper[8018]: I0217 15:03:36.009850 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/580b240a-a806-454d-ab19-8f193a8d9ca2-var-lock\") pod \"installer-1-master-0\" (UID: \"580b240a-a806-454d-ab19-8f193a8d9ca2\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 17 15:03:36.010685 master-0 kubenswrapper[8018]: I0217 15:03:36.010147 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/580b240a-a806-454d-ab19-8f193a8d9ca2-kube-api-access\") pod \"installer-1-master-0\" (UID: \"580b240a-a806-454d-ab19-8f193a8d9ca2\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 17 15:03:36.010839 master-0 kubenswrapper[8018]: I0217 15:03:36.010789 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/580b240a-a806-454d-ab19-8f193a8d9ca2-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"580b240a-a806-454d-ab19-8f193a8d9ca2\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 17 15:03:36.011031 master-0 kubenswrapper[8018]: I0217 15:03:36.010900 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/580b240a-a806-454d-ab19-8f193a8d9ca2-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"580b240a-a806-454d-ab19-8f193a8d9ca2\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 17 15:03:36.026859 master-0 kubenswrapper[8018]: I0217 15:03:36.026298 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/580b240a-a806-454d-ab19-8f193a8d9ca2-kube-api-access\") pod \"installer-1-master-0\" (UID: \"580b240a-a806-454d-ab19-8f193a8d9ca2\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 17 15:03:36.088057 master-0 kubenswrapper[8018]: I0217 15:03:36.087933 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 17 15:03:36.153930 master-0 kubenswrapper[8018]: I0217 15:03:36.153839 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 17 15:03:36.681227 master-0 kubenswrapper[8018]: I0217 15:03:36.680335 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 17 15:03:36.762476 master-0 kubenswrapper[8018]: I0217 15:03:36.762408 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Feb 17 15:03:36.765269 master-0 kubenswrapper[8018]: W0217 15:03:36.763773 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod830d120b_fcb6_47ca_a3a0_aa82dc8a3874.slice/crio-dc7327dc26530c61d96a947638b6ac6c4897aa3d1e3fc71a4f60fd72d1c69c0d WatchSource:0}: Error finding container dc7327dc26530c61d96a947638b6ac6c4897aa3d1e3fc71a4f60fd72d1c69c0d: Status 404 returned error can't find the container with id dc7327dc26530c61d96a947638b6ac6c4897aa3d1e3fc71a4f60fd72d1c69c0d
Feb 17 15:03:36.772100 master-0 kubenswrapper[8018]: I0217 15:03:36.771969 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"]
Feb 17 15:03:36.778225 master-0 kubenswrapper[8018]: W0217 15:03:36.778169 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod580b240a_a806_454d_ab19_8f193a8d9ca2.slice/crio-cc106479f8ba2301c0905fc79952057832731752fc004c203824ce711aec45fb WatchSource:0}: Error finding container cc106479f8ba2301c0905fc79952057832731752fc004c203824ce711aec45fb: Status 404 returned error can't find the container with id cc106479f8ba2301c0905fc79952057832731752fc004c203824ce711aec45fb
Feb 17 15:03:36.793890 master-0 kubenswrapper[8018]: W0217 15:03:36.793827 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50c51fe2_32aa_430f_8da0_7cf3b9519131.slice/crio-b654a908d6c1613bc2c0e365ea3089a784b0763c8a27f9b68976fba5622c284d WatchSource:0}: Error finding container b654a908d6c1613bc2c0e365ea3089a784b0763c8a27f9b68976fba5622c284d: Status 404 returned error can't find the container with id b654a908d6c1613bc2c0e365ea3089a784b0763c8a27f9b68976fba5622c284d
Feb 17 15:03:36.863856 master-0 kubenswrapper[8018]: I0217 15:03:36.860164 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" event={"ID":"c6d23570-21d6-4b08-83fc-8b0827c25313","Type":"ContainerStarted","Data":"43796d7d27cac90e31c0e4d2ee9bf43eddeb31538289e18b8ee843798af029b2"}
Feb 17 15:03:36.863856 master-0 kubenswrapper[8018]: I0217 15:03:36.860765 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:03:36.867874 master-0 kubenswrapper[8018]: I0217 15:03:36.864593 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"580b240a-a806-454d-ab19-8f193a8d9ca2","Type":"ContainerStarted","Data":"cc106479f8ba2301c0905fc79952057832731752fc004c203824ce711aec45fb"}
Feb 17 15:03:36.867874 master-0 kubenswrapper[8018]: I0217 15:03:36.864684 8018 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body=
Feb 17 15:03:36.867874 master-0 kubenswrapper[8018]: I0217 15:03:36.864715 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused"
Feb 17 15:03:36.867874 master-0 kubenswrapper[8018]: I0217 15:03:36.867123 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" event={"ID":"22a30079-d7fc-49cf-882e-1c5022cb5bf6","Type":"ContainerStarted","Data":"e96d7161de590628bad20a520afcf9b1363c2b5f7629d556a379b4230528784f"}
Feb 17 15:03:36.873923 master-0 kubenswrapper[8018]: I0217 15:03:36.873544 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" event={"ID":"071566ae-a9ae-4aa9-9dc3-38602363be72","Type":"ContainerStarted","Data":"8a4a98b1318c509e5f82636085aeb117a7034201fd28d56b542c5883530a6144"}
Feb 17 15:03:36.875982 master-0 kubenswrapper[8018]: I0217 15:03:36.875805 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" event={"ID":"50c51fe2-32aa-430f-8da0-7cf3b9519131","Type":"ContainerStarted","Data":"b654a908d6c1613bc2c0e365ea3089a784b0763c8a27f9b68976fba5622c284d"}
Feb 17 15:03:36.885612 master-0 kubenswrapper[8018]: I0217 15:03:36.885572 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" event={"ID":"187af679-a062-4f41-81f2-33545f76febf","Type":"ContainerStarted","Data":"8058b275e263538c079da0d8c430b578e1243d25628fc693b056f6c40e1434b1"}
Feb 17 15:03:36.890311 master-0 kubenswrapper[8018]: I0217 15:03:36.889665 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" event={"ID":"4be2df82-c77a-4d26-9498-fa3beea54b81","Type":"ContainerStarted","Data":"53695733f72721a1db3f525ebfe99427ae62ce35e93969fd9d5d4881069cc71d"}
Feb 17 15:03:36.891932 master-0 kubenswrapper[8018]: I0217 15:03:36.891892 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"830d120b-fcb6-47ca-a3a0-aa82dc8a3874","Type":"ContainerStarted","Data":"dc7327dc26530c61d96a947638b6ac6c4897aa3d1e3fc71a4f60fd72d1c69c0d"}
Feb 17 15:03:36.912959 master-0 kubenswrapper[8018]: I0217 15:03:36.912918 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" event={"ID":"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699","Type":"ContainerStarted","Data":"a70467cfebeb2010b6522dcc70b34dceec0966dcab30e18237d8f96992b6b1d1"}
Feb 17 15:03:36.939704 master-0 kubenswrapper[8018]: I0217 15:03:36.939651 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 17 15:03:36.948234 master-0 kubenswrapper[8018]: I0217 15:03:36.945513 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg"]
Feb 17 15:03:36.971477 master-0 kubenswrapper[8018]: I0217 15:03:36.968573 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-865765995-c58rq"]
Feb 17 15:03:37.035500 master-0 kubenswrapper[8018]: I0217 15:03:37.035327 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-2ffzt"]
Feb 17 15:03:37.036093 master-0 kubenswrapper[8018]: I0217 15:03:37.035912 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.229344 master-0 kubenswrapper[8018]: I0217 15:03:37.228485 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnhjw\" (UniqueName: \"kubernetes.io/projected/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-kube-api-access-pnhjw\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.229344 master-0 kubenswrapper[8018]: I0217 15:03:37.228820 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-tuned\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.229344 master-0 kubenswrapper[8018]: I0217 15:03:37.228846 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-sys\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.229344 master-0 kubenswrapper[8018]: I0217 15:03:37.228872 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-host\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.229344 master-0 kubenswrapper[8018]: I0217 15:03:37.228897 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-systemd\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.229344 master-0 kubenswrapper[8018]: I0217 15:03:37.228916 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysctl-d\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.229344 master-0 kubenswrapper[8018]: I0217 15:03:37.228949 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysconfig\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.229344 master-0 kubenswrapper[8018]: I0217 15:03:37.228971 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-lib-modules\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.229344 master-0 kubenswrapper[8018]: I0217 15:03:37.228991 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-kubernetes\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.229344 master-0 kubenswrapper[8018]: I0217 15:03:37.229016 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-run\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.229344 master-0 kubenswrapper[8018]: I0217 15:03:37.229044 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-var-lib-kubelet\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.229344 master-0 kubenswrapper[8018]: I0217 15:03:37.229064 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-tmp\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.229344 master-0 kubenswrapper[8018]: I0217 15:03:37.229087 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysctl-conf\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.229344 master-0 kubenswrapper[8018]: I0217 15:03:37.229109 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-modprobe-d\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.331623 master-0 kubenswrapper[8018]: I0217 15:03:37.331560 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-run\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.331623 master-0 kubenswrapper[8018]: I0217 15:03:37.331604 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-var-lib-kubelet\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.331623 master-0 kubenswrapper[8018]: I0217 15:03:37.331624 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-tmp\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.331840 master-0 kubenswrapper[8018]: I0217 15:03:37.331646 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysctl-conf\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.331840 master-0 kubenswrapper[8018]: I0217 15:03:37.331666 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-modprobe-d\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.331840 master-0 kubenswrapper[8018]: I0217 15:03:37.331687 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnhjw\" (UniqueName: \"kubernetes.io/projected/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-kube-api-access-pnhjw\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.331840 master-0 kubenswrapper[8018]: I0217 15:03:37.331706 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-tuned\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.331840 master-0 kubenswrapper[8018]: I0217 15:03:37.331721 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-sys\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.331840 master-0 kubenswrapper[8018]: I0217 15:03:37.331741 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-host\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.331840 master-0 kubenswrapper[8018]: I0217 15:03:37.331758 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-systemd\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.331840 master-0 kubenswrapper[8018]: I0217 15:03:37.331774 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysctl-d\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.331840 master-0 kubenswrapper[8018]: I0217 15:03:37.331800 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysconfig\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.331840 master-0 kubenswrapper[8018]: I0217 15:03:37.331814 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-lib-modules\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.331840 master-0 kubenswrapper[8018]: I0217 15:03:37.331828 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-kubernetes\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.332176 master-0 kubenswrapper[8018]: I0217 15:03:37.331969 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-kubernetes\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.333228 master-0 kubenswrapper[8018]: I0217 15:03:37.332592 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-systemd\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.333228 master-0 kubenswrapper[8018]: I0217 15:03:37.332665 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-run\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.333228 master-0 kubenswrapper[8018]: I0217 15:03:37.332712 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysctl-d\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.333228 master-0 kubenswrapper[8018]: I0217 15:03:37.332740 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysconfig\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.333228 master-0 kubenswrapper[8018]: I0217 15:03:37.332927 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-sys\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.333228 master-0 kubenswrapper[8018]: I0217 15:03:37.332978 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-lib-modules\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.333228 master-0 kubenswrapper[8018]: I0217 15:03:37.332987 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-host\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.333228 master-0 kubenswrapper[8018]: I0217 15:03:37.333073 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-modprobe-d\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.333228 master-0 kubenswrapper[8018]: I0217 15:03:37.333091 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysctl-conf\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.333228 master-0 kubenswrapper[8018]: I0217 15:03:37.333113 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-var-lib-kubelet\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.343785 master-0 kubenswrapper[8018]: I0217 15:03:37.339386 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-tuned\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.343785 master-0 kubenswrapper[8018]: I0217 15:03:37.340302 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-tmp\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.357417 master-0 kubenswrapper[8018]: I0217 15:03:37.352393 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnhjw\" (UniqueName: \"kubernetes.io/projected/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-kube-api-access-pnhjw\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.400525 master-0 kubenswrapper[8018]: I0217 15:03:37.400494 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:03:37.926874 master-0 kubenswrapper[8018]: I0217 15:03:37.926405 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wxhtx"]
Feb 17 15:03:37.934355 master-0 kubenswrapper[8018]: I0217 15:03:37.934289 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wxhtx"
Feb 17 15:03:37.944414 master-0 kubenswrapper[8018]: I0217 15:03:37.944380 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 17 15:03:37.944778 master-0 kubenswrapper[8018]: I0217 15:03:37.944738 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" event={"ID":"22a30079-d7fc-49cf-882e-1c5022cb5bf6","Type":"ContainerStarted","Data":"d93b40433fb9724e1f5467feb33fb43b6ecba885ae346a6f96e425da8156ece5"}
Feb 17 15:03:37.945230 master-0 kubenswrapper[8018]: I0217 15:03:37.945008 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 17 15:03:37.945852 master-0 kubenswrapper[8018]: I0217 15:03:37.945809 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 17 15:03:37.949865 master-0 kubenswrapper[8018]: I0217 15:03:37.949743 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 17 15:03:37.950493 master-0 kubenswrapper[8018]: I0217 15:03:37.950320 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"580b240a-a806-454d-ab19-8f193a8d9ca2","Type":"ContainerStarted","Data":"dcdeeb6985f895a6d59b345be94e95ea3c9c558f1f7b7901594a31fa91429102"}
Feb 17 15:03:37.954336 master-0 kubenswrapper[8018]: I0217 15:03:37.953981 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wxhtx"]
Feb 17 15:03:37.957082 master-0 kubenswrapper[8018]: I0217 15:03:37.956286 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bnllz" event={"ID":"fce9579e-7383-421e-95dd-8f8b786817f9","Type":"ContainerStarted","Data":"011ac35a9e88f4031bfda95ad21fc5f32546d3bce742920eff327737939a9149"}
Feb 17 15:03:37.957082 master-0 kubenswrapper[8018]: I0217 15:03:37.956329 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bnllz" event={"ID":"fce9579e-7383-421e-95dd-8f8b786817f9","Type":"ContainerStarted","Data":"0e7d84db6cc5421f3ad924f1654eb9f1e1c039e36b910d6373dbe3a42e75bb32"}
Feb 17 15:03:37.964143 master-0 kubenswrapper[8018]: I0217 15:03:37.961760 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" event={"ID":"4b2b7830-6ee0-4d87-a57b-dc668de4b39a","Type":"ContainerStarted","Data":"a064acf990271c7eec329583d51e5875524fd81d3348702ae0a9ce02da79158b"}
Feb 17 15:03:37.964143 master-0 kubenswrapper[8018]: I0217 15:03:37.961805 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" event={"ID":"4b2b7830-6ee0-4d87-a57b-dc668de4b39a","Type":"ContainerStarted","Data":"59a6b8bcc092c97904f8b1bcc967ad7cedea30ca4f542bee28000425c8e05bc9"}
Feb 17 15:03:37.964143 master-0 kubenswrapper[8018]: I0217 15:03:37.964117 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" event={"ID":"08e27254-e906-484a-b346-036f898be3ae","Type":"ContainerStarted","Data":"19cda7a394a37c5460805afe2e930eb516d6043415ac023af6ae0f17e015877c"}
Feb 17 15:03:37.966758 master-0 kubenswrapper[8018]: I0217 15:03:37.964818 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:03:37.971352 master-0 kubenswrapper[8018]: I0217 15:03:37.971082 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"9f31fcfe-33ed-4e31-a12c-cb344093dcf4","Type":"ContainerStarted","Data":"ebb84869ff87ab53933f534e8072352d2827c34650aa88de3ed7f3c6446e7b63"}
Feb 17 15:03:37.971352 master-0 kubenswrapper[8018]: I0217 15:03:37.971136 8018
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"9f31fcfe-33ed-4e31-a12c-cb344093dcf4","Type":"ContainerStarted","Data":"78cbd9f546830dd615de766b10a67b6a810a97884bc18b2f0df8903e6fb6fdc5"} Feb 17 15:03:37.978748 master-0 kubenswrapper[8018]: I0217 15:03:37.978671 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:03:37.978959 master-0 kubenswrapper[8018]: I0217 15:03:37.978915 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" event={"ID":"6b25a72d-965f-415c-abc9-09612859e9e0","Type":"ContainerStarted","Data":"58400ac8b210abe6d74d057999272a3e2cdb3a6a4ce0fbdbf1173716a460becc"} Feb 17 15:03:37.979021 master-0 kubenswrapper[8018]: I0217 15:03:37.978972 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" event={"ID":"6b25a72d-965f-415c-abc9-09612859e9e0","Type":"ContainerStarted","Data":"d03b5b01eebc01049f52508b9cb6557295a244f02f7925b66faf26d4de1e8764"} Feb 17 15:03:37.982092 master-0 kubenswrapper[8018]: I0217 15:03:37.982052 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"830d120b-fcb6-47ca-a3a0-aa82dc8a3874","Type":"ContainerStarted","Data":"5034d83f84ab69660043a048e045ef6d6e8aba1bc5c2091f56b5108efb860c24"} Feb 17 15:03:37.982194 master-0 kubenswrapper[8018]: I0217 15:03:37.982172 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="830d120b-fcb6-47ca-a3a0-aa82dc8a3874" containerName="installer" containerID="cri-o://5034d83f84ab69660043a048e045ef6d6e8aba1bc5c2091f56b5108efb860c24" gracePeriod=30 Feb 17 15:03:37.986015 master-0 kubenswrapper[8018]: I0217 15:03:37.985981 8018 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" event={"ID":"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb","Type":"ContainerStarted","Data":"76d6fd0b45765a0b596669cf9b7b85cd807449a57c73b14e34163f91a2995908"} Feb 17 15:03:37.986496 master-0 kubenswrapper[8018]: I0217 15:03:37.986479 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:03:37.994552 master-0 kubenswrapper[8018]: I0217 15:03:37.994510 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" event={"ID":"50c51fe2-32aa-430f-8da0-7cf3b9519131","Type":"ContainerStarted","Data":"d2819b5ec7544398c667e332b7fb3d2d85cf71bbbb7f6160fcca20bd85436f17"} Feb 17 15:03:37.994552 master-0 kubenswrapper[8018]: I0217 15:03:37.994551 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" event={"ID":"50c51fe2-32aa-430f-8da0-7cf3b9519131","Type":"ContainerStarted","Data":"c1a7bb61a118b809395aec1f33f427a3425dcd9dc3136b6302e76b1e5de619e7"} Feb 17 15:03:37.994999 master-0 kubenswrapper[8018]: I0217 15:03:37.994976 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:03:38.002300 master-0 kubenswrapper[8018]: I0217 15:03:38.002250 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" event={"ID":"fc76384d-b288-4d30-bc77-f696b62a5f30","Type":"ContainerStarted","Data":"2ed2cfe851436b124a5e98731c9310d6e3b223382cebdbe35801f130c5225734"} Feb 17 15:03:38.002412 master-0 kubenswrapper[8018]: I0217 15:03:38.002303 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" event={"ID":"fc76384d-b288-4d30-bc77-f696b62a5f30","Type":"ContainerStarted","Data":"b5fecd4da15734364faf1abe101a2dc053a398e33d72ee7568251089683d7a6a"} Feb 17 15:03:38.008167 master-0 kubenswrapper[8018]: I0217 15:03:38.008122 8018 generic.go:334] "Generic (PLEG): container finished" podID="1d481a79-f565-4c7f-84cc-207fc3117c23" containerID="2f2131dad98f27e1c73aa268ad99c1866a1a7604c47baa9d4290fb47581335fc" exitCode=0 Feb 17 15:03:38.008313 master-0 kubenswrapper[8018]: I0217 15:03:38.008175 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" event={"ID":"1d481a79-f565-4c7f-84cc-207fc3117c23","Type":"ContainerDied","Data":"2f2131dad98f27e1c73aa268ad99c1866a1a7604c47baa9d4290fb47581335fc"} Feb 17 15:03:38.009757 master-0 kubenswrapper[8018]: I0217 15:03:38.009737 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" event={"ID":"124ba199-b79a-4e5c-8512-cc0ae50f73c8","Type":"ContainerStarted","Data":"82a4950a547d0a59e18c269c45642d4e42307ae5014626ff584ece03ffa671c2"} Feb 17 15:03:38.038302 master-0 kubenswrapper[8018]: I0217 15:03:38.038260 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" event={"ID":"257db04b-7203-4a1d-b3d4-bd4db258a3cc","Type":"ContainerStarted","Data":"338d42799a850815bbf7c690a296f96f6a7d11512c76af739d01dcad77df545d"} Feb 17 15:03:38.039238 master-0 kubenswrapper[8018]: I0217 15:03:38.039210 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:03:38.050757 master-0 kubenswrapper[8018]: I0217 15:03:38.050717 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8d317dcb-ea6a-4066-b197-5ee960dec01a-config-volume\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " pod="openshift-dns/dns-default-wxhtx" Feb 17 15:03:38.051066 master-0 kubenswrapper[8018]: I0217 15:03:38.051035 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d317dcb-ea6a-4066-b197-5ee960dec01a-metrics-tls\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " pod="openshift-dns/dns-default-wxhtx" Feb 17 15:03:38.051214 master-0 kubenswrapper[8018]: I0217 15:03:38.051192 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwptc\" (UniqueName: \"kubernetes.io/projected/8d317dcb-ea6a-4066-b197-5ee960dec01a-kube-api-access-nwptc\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " pod="openshift-dns/dns-default-wxhtx" Feb 17 15:03:38.052630 master-0 kubenswrapper[8018]: I0217 15:03:38.052595 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" event={"ID":"d65336fb-5671-4f5b-a5ff-9000eed0fdd3","Type":"ContainerStarted","Data":"6dc82a228f0a8a739e91f0b1e4c181cb28b029622045918a39fd3d324199188b"} Feb 17 15:03:38.056186 master-0 kubenswrapper[8018]: I0217 15:03:38.056149 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:03:38.059133 master-0 kubenswrapper[8018]: I0217 15:03:38.056610 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=3.056597265 podStartE2EDuration="3.056597265s" podCreationTimestamp="2026-02-17 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:03:38.054582638 +0000 UTC m=+50.806925698" watchObservedRunningTime="2026-02-17 15:03:38.056597265 +0000 UTC m=+50.808940315" Feb 17 15:03:38.065767 master-0 kubenswrapper[8018]: I0217 15:03:38.065717 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6" event={"ID":"ec3c02d7-1607-4305-9380-ba8fc6018b60","Type":"ContainerStarted","Data":"60a357860a4bf6848914cb16ba4e2389f439f69e27bc7ca67dd28f0f1be9934b"} Feb 17 15:03:38.071945 master-0 kubenswrapper[8018]: I0217 15:03:38.070392 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:03:38.083109 master-0 kubenswrapper[8018]: I0217 15:03:38.082863 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7x72v"] Feb 17 15:03:38.084748 master-0 kubenswrapper[8018]: I0217 15:03:38.084693 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=5.084675039 podStartE2EDuration="5.084675039s" podCreationTimestamp="2026-02-17 15:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:03:38.083759647 +0000 UTC m=+50.836102707" watchObservedRunningTime="2026-02-17 15:03:38.084675039 +0000 UTC m=+50.837018089" Feb 17 15:03:38.089118 master-0 kubenswrapper[8018]: I0217 15:03:38.088825 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7x72v" Feb 17 15:03:38.098723 master-0 kubenswrapper[8018]: I0217 15:03:38.098679 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7x72v"] Feb 17 15:03:38.107296 master-0 kubenswrapper[8018]: E0217 15:03:38.107264 8018 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod830d120b_fcb6_47ca_a3a0_aa82dc8a3874.slice/crio-5034d83f84ab69660043a048e045ef6d6e8aba1bc5c2091f56b5108efb860c24.scope\": RecentStats: unable to find data in memory cache]" Feb 17 15:03:38.136048 master-0 kubenswrapper[8018]: I0217 15:03:38.135971 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" podStartSLOduration=1.135954878 podStartE2EDuration="1.135954878s" podCreationTimestamp="2026-02-17 15:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:03:38.135263381 +0000 UTC m=+50.887606441" watchObservedRunningTime="2026-02-17 15:03:38.135954878 +0000 UTC m=+50.888297928" Feb 17 15:03:38.152885 master-0 kubenswrapper[8018]: I0217 15:03:38.152839 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d317dcb-ea6a-4066-b197-5ee960dec01a-metrics-tls\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " pod="openshift-dns/dns-default-wxhtx" Feb 17 15:03:38.153082 master-0 kubenswrapper[8018]: I0217 15:03:38.153021 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwptc\" (UniqueName: \"kubernetes.io/projected/8d317dcb-ea6a-4066-b197-5ee960dec01a-kube-api-access-nwptc\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " 
pod="openshift-dns/dns-default-wxhtx" Feb 17 15:03:38.153404 master-0 kubenswrapper[8018]: I0217 15:03:38.153363 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d317dcb-ea6a-4066-b197-5ee960dec01a-config-volume\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " pod="openshift-dns/dns-default-wxhtx" Feb 17 15:03:38.154344 master-0 kubenswrapper[8018]: E0217 15:03:38.154300 8018 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Feb 17 15:03:38.154436 master-0 kubenswrapper[8018]: E0217 15:03:38.154416 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d317dcb-ea6a-4066-b197-5ee960dec01a-metrics-tls podName:8d317dcb-ea6a-4066-b197-5ee960dec01a nodeName:}" failed. No retries permitted until 2026-02-17 15:03:38.65438723 +0000 UTC m=+51.406730320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8d317dcb-ea6a-4066-b197-5ee960dec01a-metrics-tls") pod "dns-default-wxhtx" (UID: "8d317dcb-ea6a-4066-b197-5ee960dec01a") : secret "dns-default-metrics-tls" not found Feb 17 15:03:38.160738 master-0 kubenswrapper[8018]: I0217 15:03:38.160704 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d317dcb-ea6a-4066-b197-5ee960dec01a-config-volume\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " pod="openshift-dns/dns-default-wxhtx" Feb 17 15:03:38.203892 master-0 kubenswrapper[8018]: I0217 15:03:38.202736 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwptc\" (UniqueName: \"kubernetes.io/projected/8d317dcb-ea6a-4066-b197-5ee960dec01a-kube-api-access-nwptc\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " 
pod="openshift-dns/dns-default-wxhtx" Feb 17 15:03:38.222893 master-0 kubenswrapper[8018]: I0217 15:03:38.222811 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=11.22278972 podStartE2EDuration="11.22278972s" podCreationTimestamp="2026-02-17 15:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:03:38.218850555 +0000 UTC m=+50.971193605" watchObservedRunningTime="2026-02-17 15:03:38.22278972 +0000 UTC m=+50.975132760" Feb 17 15:03:38.255303 master-0 kubenswrapper[8018]: I0217 15:03:38.254971 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzzwn\" (UniqueName: \"kubernetes.io/projected/2ac9a5d3-569e-4434-839e-691eacbe13df-kube-api-access-nzzwn\") pod \"redhat-operators-7x72v\" (UID: \"2ac9a5d3-569e-4434-839e-691eacbe13df\") " pod="openshift-marketplace/redhat-operators-7x72v" Feb 17 15:03:38.255303 master-0 kubenswrapper[8018]: I0217 15:03:38.255023 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac9a5d3-569e-4434-839e-691eacbe13df-utilities\") pod \"redhat-operators-7x72v\" (UID: \"2ac9a5d3-569e-4434-839e-691eacbe13df\") " pod="openshift-marketplace/redhat-operators-7x72v" Feb 17 15:03:38.255303 master-0 kubenswrapper[8018]: I0217 15:03:38.255079 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac9a5d3-569e-4434-839e-691eacbe13df-catalog-content\") pod \"redhat-operators-7x72v\" (UID: \"2ac9a5d3-569e-4434-839e-691eacbe13df\") " pod="openshift-marketplace/redhat-operators-7x72v" Feb 17 15:03:38.299637 master-0 kubenswrapper[8018]: I0217 15:03:38.299537 8018 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 17 15:03:38.303748 master-0 kubenswrapper[8018]: I0217 15:03:38.300259 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 17 15:03:38.303748 master-0 kubenswrapper[8018]: I0217 15:03:38.301559 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6" podStartSLOduration=5.219752824 podStartE2EDuration="19.301540248s" podCreationTimestamp="2026-02-17 15:03:19 +0000 UTC" firstStartedPulling="2026-02-17 15:03:22.347096288 +0000 UTC m=+35.099439348" lastFinishedPulling="2026-02-17 15:03:36.428883712 +0000 UTC m=+49.181226772" observedRunningTime="2026-02-17 15:03:38.292901401 +0000 UTC m=+51.045244461" watchObservedRunningTime="2026-02-17 15:03:38.301540248 +0000 UTC m=+51.053883298" Feb 17 15:03:38.303748 master-0 kubenswrapper[8018]: I0217 15:03:38.303672 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 17 15:03:38.358477 master-0 kubenswrapper[8018]: I0217 15:03:38.357483 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzzwn\" (UniqueName: \"kubernetes.io/projected/2ac9a5d3-569e-4434-839e-691eacbe13df-kube-api-access-nzzwn\") pod \"redhat-operators-7x72v\" (UID: \"2ac9a5d3-569e-4434-839e-691eacbe13df\") " pod="openshift-marketplace/redhat-operators-7x72v" Feb 17 15:03:38.358477 master-0 kubenswrapper[8018]: I0217 15:03:38.357732 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac9a5d3-569e-4434-839e-691eacbe13df-utilities\") pod \"redhat-operators-7x72v\" (UID: \"2ac9a5d3-569e-4434-839e-691eacbe13df\") " pod="openshift-marketplace/redhat-operators-7x72v" Feb 17 15:03:38.358477 master-0 kubenswrapper[8018]: I0217 15:03:38.357774 8018 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac9a5d3-569e-4434-839e-691eacbe13df-catalog-content\") pod \"redhat-operators-7x72v\" (UID: \"2ac9a5d3-569e-4434-839e-691eacbe13df\") " pod="openshift-marketplace/redhat-operators-7x72v" Feb 17 15:03:38.359935 master-0 kubenswrapper[8018]: I0217 15:03:38.358802 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac9a5d3-569e-4434-839e-691eacbe13df-catalog-content\") pod \"redhat-operators-7x72v\" (UID: \"2ac9a5d3-569e-4434-839e-691eacbe13df\") " pod="openshift-marketplace/redhat-operators-7x72v" Feb 17 15:03:38.359935 master-0 kubenswrapper[8018]: I0217 15:03:38.359528 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac9a5d3-569e-4434-839e-691eacbe13df-utilities\") pod \"redhat-operators-7x72v\" (UID: \"2ac9a5d3-569e-4434-839e-691eacbe13df\") " pod="openshift-marketplace/redhat-operators-7x72v" Feb 17 15:03:38.378268 master-0 kubenswrapper[8018]: I0217 15:03:38.377853 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzzwn\" (UniqueName: \"kubernetes.io/projected/2ac9a5d3-569e-4434-839e-691eacbe13df-kube-api-access-nzzwn\") pod \"redhat-operators-7x72v\" (UID: \"2ac9a5d3-569e-4434-839e-691eacbe13df\") " pod="openshift-marketplace/redhat-operators-7x72v" Feb 17 15:03:38.380359 master-0 kubenswrapper[8018]: I0217 15:03:38.380327 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_830d120b-fcb6-47ca-a3a0-aa82dc8a3874/installer/0.log" Feb 17 15:03:38.380424 master-0 kubenswrapper[8018]: I0217 15:03:38.380387 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 17 15:03:38.397516 master-0 kubenswrapper[8018]: I0217 15:03:38.396632 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tzv2h"] Feb 17 15:03:38.399378 master-0 kubenswrapper[8018]: E0217 15:03:38.399341 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="830d120b-fcb6-47ca-a3a0-aa82dc8a3874" containerName="installer" Feb 17 15:03:38.399378 master-0 kubenswrapper[8018]: I0217 15:03:38.399374 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="830d120b-fcb6-47ca-a3a0-aa82dc8a3874" containerName="installer" Feb 17 15:03:38.399547 master-0 kubenswrapper[8018]: I0217 15:03:38.399518 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="830d120b-fcb6-47ca-a3a0-aa82dc8a3874" containerName="installer" Feb 17 15:03:38.400690 master-0 kubenswrapper[8018]: I0217 15:03:38.400650 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tzv2h" Feb 17 15:03:38.408818 master-0 kubenswrapper[8018]: I0217 15:03:38.408424 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podStartSLOduration=15.408398989 podStartE2EDuration="15.408398989s" podCreationTimestamp="2026-02-17 15:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:03:38.403259687 +0000 UTC m=+51.155602747" watchObservedRunningTime="2026-02-17 15:03:38.408398989 +0000 UTC m=+51.160742039" Feb 17 15:03:38.430943 master-0 kubenswrapper[8018]: I0217 15:03:38.430908 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7x72v" Feb 17 15:03:38.459133 master-0 kubenswrapper[8018]: I0217 15:03:38.458938 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-kube-api-access\") pod \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\" (UID: \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\") " Feb 17 15:03:38.459133 master-0 kubenswrapper[8018]: I0217 15:03:38.459045 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-var-lock\") pod \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\" (UID: \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\") " Feb 17 15:03:38.459133 master-0 kubenswrapper[8018]: I0217 15:03:38.459070 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-kubelet-dir\") pod \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\" (UID: \"830d120b-fcb6-47ca-a3a0-aa82dc8a3874\") " Feb 17 15:03:38.459133 master-0 kubenswrapper[8018]: I0217 15:03:38.459196 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03da22e3-956d-4c8a-bfd6-c1778e5d627c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"03da22e3-956d-4c8a-bfd6-c1778e5d627c\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 17 15:03:38.459524 master-0 kubenswrapper[8018]: I0217 15:03:38.459259 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03da22e3-956d-4c8a-bfd6-c1778e5d627c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"03da22e3-956d-4c8a-bfd6-c1778e5d627c\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 17 
15:03:38.459524 master-0 kubenswrapper[8018]: I0217 15:03:38.459315 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/03da22e3-956d-4c8a-bfd6-c1778e5d627c-var-lock\") pod \"installer-3-master-0\" (UID: \"03da22e3-956d-4c8a-bfd6-c1778e5d627c\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 17 15:03:38.464522 master-0 kubenswrapper[8018]: I0217 15:03:38.461549 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "830d120b-fcb6-47ca-a3a0-aa82dc8a3874" (UID: "830d120b-fcb6-47ca-a3a0-aa82dc8a3874"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:03:38.464522 master-0 kubenswrapper[8018]: I0217 15:03:38.461608 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-var-lock" (OuterVolumeSpecName: "var-lock") pod "830d120b-fcb6-47ca-a3a0-aa82dc8a3874" (UID: "830d120b-fcb6-47ca-a3a0-aa82dc8a3874"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:03:38.473836 master-0 kubenswrapper[8018]: I0217 15:03:38.472020 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "830d120b-fcb6-47ca-a3a0-aa82dc8a3874" (UID: "830d120b-fcb6-47ca-a3a0-aa82dc8a3874"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:03:38.570482 master-0 kubenswrapper[8018]: I0217 15:03:38.562133 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/03da22e3-956d-4c8a-bfd6-c1778e5d627c-var-lock\") pod \"installer-3-master-0\" (UID: \"03da22e3-956d-4c8a-bfd6-c1778e5d627c\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 17 15:03:38.570482 master-0 kubenswrapper[8018]: I0217 15:03:38.562209 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/aa267e55-eef2-447f-b2ff-57c1ec2930be-hosts-file\") pod \"node-resolver-tzv2h\" (UID: \"aa267e55-eef2-447f-b2ff-57c1ec2930be\") " pod="openshift-dns/node-resolver-tzv2h" Feb 17 15:03:38.570482 master-0 kubenswrapper[8018]: I0217 15:03:38.562236 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03da22e3-956d-4c8a-bfd6-c1778e5d627c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"03da22e3-956d-4c8a-bfd6-c1778e5d627c\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 17 15:03:38.570482 master-0 kubenswrapper[8018]: I0217 15:03:38.562277 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03da22e3-956d-4c8a-bfd6-c1778e5d627c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"03da22e3-956d-4c8a-bfd6-c1778e5d627c\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 17 15:03:38.570482 master-0 kubenswrapper[8018]: I0217 15:03:38.562333 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx8s7\" (UniqueName: \"kubernetes.io/projected/aa267e55-eef2-447f-b2ff-57c1ec2930be-kube-api-access-nx8s7\") pod \"node-resolver-tzv2h\" (UID: 
\"aa267e55-eef2-447f-b2ff-57c1ec2930be\") " pod="openshift-dns/node-resolver-tzv2h" Feb 17 15:03:38.570482 master-0 kubenswrapper[8018]: I0217 15:03:38.562370 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:38.570482 master-0 kubenswrapper[8018]: I0217 15:03:38.562384 8018 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:38.570482 master-0 kubenswrapper[8018]: I0217 15:03:38.562395 8018 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/830d120b-fcb6-47ca-a3a0-aa82dc8a3874-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:38.570482 master-0 kubenswrapper[8018]: I0217 15:03:38.562539 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/03da22e3-956d-4c8a-bfd6-c1778e5d627c-var-lock\") pod \"installer-3-master-0\" (UID: \"03da22e3-956d-4c8a-bfd6-c1778e5d627c\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 17 15:03:38.570482 master-0 kubenswrapper[8018]: I0217 15:03:38.562706 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03da22e3-956d-4c8a-bfd6-c1778e5d627c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"03da22e3-956d-4c8a-bfd6-c1778e5d627c\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 17 15:03:38.600436 master-0 kubenswrapper[8018]: I0217 15:03:38.598781 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03da22e3-956d-4c8a-bfd6-c1778e5d627c-kube-api-access\") pod \"installer-3-master-0\" (UID: 
\"03da22e3-956d-4c8a-bfd6-c1778e5d627c\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 17 15:03:38.663121 master-0 kubenswrapper[8018]: I0217 15:03:38.663082 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d317dcb-ea6a-4066-b197-5ee960dec01a-metrics-tls\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " pod="openshift-dns/dns-default-wxhtx" Feb 17 15:03:38.663121 master-0 kubenswrapper[8018]: I0217 15:03:38.663129 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx8s7\" (UniqueName: \"kubernetes.io/projected/aa267e55-eef2-447f-b2ff-57c1ec2930be-kube-api-access-nx8s7\") pod \"node-resolver-tzv2h\" (UID: \"aa267e55-eef2-447f-b2ff-57c1ec2930be\") " pod="openshift-dns/node-resolver-tzv2h" Feb 17 15:03:38.663315 master-0 kubenswrapper[8018]: I0217 15:03:38.663166 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/aa267e55-eef2-447f-b2ff-57c1ec2930be-hosts-file\") pod \"node-resolver-tzv2h\" (UID: \"aa267e55-eef2-447f-b2ff-57c1ec2930be\") " pod="openshift-dns/node-resolver-tzv2h" Feb 17 15:03:38.663705 master-0 kubenswrapper[8018]: I0217 15:03:38.663657 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/aa267e55-eef2-447f-b2ff-57c1ec2930be-hosts-file\") pod \"node-resolver-tzv2h\" (UID: \"aa267e55-eef2-447f-b2ff-57c1ec2930be\") " pod="openshift-dns/node-resolver-tzv2h" Feb 17 15:03:38.666394 master-0 kubenswrapper[8018]: I0217 15:03:38.666371 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d317dcb-ea6a-4066-b197-5ee960dec01a-metrics-tls\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " pod="openshift-dns/dns-default-wxhtx" Feb 17 
15:03:38.669419 master-0 kubenswrapper[8018]: I0217 15:03:38.669386 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 17 15:03:38.684389 master-0 kubenswrapper[8018]: I0217 15:03:38.683044 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx8s7\" (UniqueName: \"kubernetes.io/projected/aa267e55-eef2-447f-b2ff-57c1ec2930be-kube-api-access-nx8s7\") pod \"node-resolver-tzv2h\" (UID: \"aa267e55-eef2-447f-b2ff-57c1ec2930be\") " pod="openshift-dns/node-resolver-tzv2h" Feb 17 15:03:38.700328 master-0 kubenswrapper[8018]: I0217 15:03:38.700274 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7x72v"] Feb 17 15:03:38.716634 master-0 kubenswrapper[8018]: W0217 15:03:38.715192 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ac9a5d3_569e_4434_839e_691eacbe13df.slice/crio-ea57ef236d3ee5f1de956103af094e831cfbfe52180fca3d3c025be0d3754a52 WatchSource:0}: Error finding container ea57ef236d3ee5f1de956103af094e831cfbfe52180fca3d3c025be0d3754a52: Status 404 returned error can't find the container with id ea57ef236d3ee5f1de956103af094e831cfbfe52180fca3d3c025be0d3754a52 Feb 17 15:03:38.737713 master-0 kubenswrapper[8018]: I0217 15:03:38.737675 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tzv2h" Feb 17 15:03:38.804100 master-0 kubenswrapper[8018]: I0217 15:03:38.802219 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67d67c799d-b9bj6"] Feb 17 15:03:38.830484 master-0 kubenswrapper[8018]: I0217 15:03:38.823205 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg"] Feb 17 15:03:38.861416 master-0 kubenswrapper[8018]: I0217 15:03:38.861099 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wxhtx" Feb 17 15:03:38.896854 master-0 kubenswrapper[8018]: I0217 15:03:38.895212 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xqt6f"] Feb 17 15:03:38.897596 master-0 kubenswrapper[8018]: I0217 15:03:38.897568 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xqt6f" Feb 17 15:03:38.945530 master-0 kubenswrapper[8018]: I0217 15:03:38.945487 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xqt6f"] Feb 17 15:03:38.946677 master-0 kubenswrapper[8018]: I0217 15:03:38.946591 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 17 15:03:38.969001 master-0 kubenswrapper[8018]: I0217 15:03:38.966644 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-catalog-content\") pod \"certified-operators-xqt6f\" (UID: \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\") " pod="openshift-marketplace/certified-operators-xqt6f" Feb 17 15:03:38.969001 master-0 kubenswrapper[8018]: I0217 15:03:38.966691 8018 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwk82\" (UniqueName: \"kubernetes.io/projected/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-kube-api-access-qwk82\") pod \"certified-operators-xqt6f\" (UID: \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\") " pod="openshift-marketplace/certified-operators-xqt6f" Feb 17 15:03:38.969001 master-0 kubenswrapper[8018]: I0217 15:03:38.966809 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-utilities\") pod \"certified-operators-xqt6f\" (UID: \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\") " pod="openshift-marketplace/certified-operators-xqt6f" Feb 17 15:03:39.069018 master-0 kubenswrapper[8018]: I0217 15:03:39.068128 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwk82\" (UniqueName: \"kubernetes.io/projected/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-kube-api-access-qwk82\") pod \"certified-operators-xqt6f\" (UID: \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\") " pod="openshift-marketplace/certified-operators-xqt6f" Feb 17 15:03:39.069018 master-0 kubenswrapper[8018]: I0217 15:03:39.068194 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-utilities\") pod \"certified-operators-xqt6f\" (UID: \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\") " pod="openshift-marketplace/certified-operators-xqt6f" Feb 17 15:03:39.069018 master-0 kubenswrapper[8018]: I0217 15:03:39.068233 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-catalog-content\") pod \"certified-operators-xqt6f\" (UID: \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\") " pod="openshift-marketplace/certified-operators-xqt6f" Feb 17 15:03:39.069018 
master-0 kubenswrapper[8018]: I0217 15:03:39.068837 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-catalog-content\") pod \"certified-operators-xqt6f\" (UID: \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\") " pod="openshift-marketplace/certified-operators-xqt6f" Feb 17 15:03:39.075750 master-0 kubenswrapper[8018]: I0217 15:03:39.075489 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-utilities\") pod \"certified-operators-xqt6f\" (UID: \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\") " pod="openshift-marketplace/certified-operators-xqt6f" Feb 17 15:03:39.081124 master-0 kubenswrapper[8018]: I0217 15:03:39.081064 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tzv2h" event={"ID":"aa267e55-eef2-447f-b2ff-57c1ec2930be","Type":"ContainerStarted","Data":"50580897aab729847bb16b1be89c08ccaf45ebad432b32e9d2c48074ace08db5"} Feb 17 15:03:39.086174 master-0 kubenswrapper[8018]: I0217 15:03:39.084444 8018 generic.go:334] "Generic (PLEG): container finished" podID="2ac9a5d3-569e-4434-839e-691eacbe13df" containerID="2b94573c328e435e16466b38efd1dd63232f75cf11bf6043b00285328ed96b63" exitCode=0 Feb 17 15:03:39.086174 master-0 kubenswrapper[8018]: I0217 15:03:39.084506 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7x72v" event={"ID":"2ac9a5d3-569e-4434-839e-691eacbe13df","Type":"ContainerDied","Data":"2b94573c328e435e16466b38efd1dd63232f75cf11bf6043b00285328ed96b63"} Feb 17 15:03:39.086174 master-0 kubenswrapper[8018]: I0217 15:03:39.084525 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7x72v" 
event={"ID":"2ac9a5d3-569e-4434-839e-691eacbe13df","Type":"ContainerStarted","Data":"ea57ef236d3ee5f1de956103af094e831cfbfe52180fca3d3c025be0d3754a52"} Feb 17 15:03:39.091918 master-0 kubenswrapper[8018]: I0217 15:03:39.091230 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwk82\" (UniqueName: \"kubernetes.io/projected/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-kube-api-access-qwk82\") pod \"certified-operators-xqt6f\" (UID: \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\") " pod="openshift-marketplace/certified-operators-xqt6f" Feb 17 15:03:39.093960 master-0 kubenswrapper[8018]: I0217 15:03:39.093906 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" event={"ID":"1d481a79-f565-4c7f-84cc-207fc3117c23","Type":"ContainerStarted","Data":"7f26c9a5bc2b6db19ac14da880625f5ce3ec00a224b48d2f051fe7b54591a5cd"} Feb 17 15:03:39.093960 master-0 kubenswrapper[8018]: I0217 15:03:39.093958 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" event={"ID":"1d481a79-f565-4c7f-84cc-207fc3117c23","Type":"ContainerStarted","Data":"8ec5073c7897c6e113372bff0a596e436307269bfd21b5fc7b0af3fa3e64520f"} Feb 17 15:03:39.096379 master-0 kubenswrapper[8018]: I0217 15:03:39.096343 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"03da22e3-956d-4c8a-bfd6-c1778e5d627c","Type":"ContainerStarted","Data":"7d00efdad4851844a32b2b8bd4e17fbebfd887cf8eba9c8198aa34f66fbdd5b6"} Feb 17 15:03:39.098750 master-0 kubenswrapper[8018]: I0217 15:03:39.098727 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_830d120b-fcb6-47ca-a3a0-aa82dc8a3874/installer/0.log" Feb 17 15:03:39.098823 master-0 kubenswrapper[8018]: I0217 15:03:39.098761 8018 generic.go:334] "Generic (PLEG): container finished" podID="830d120b-fcb6-47ca-a3a0-aa82dc8a3874" 
containerID="5034d83f84ab69660043a048e045ef6d6e8aba1bc5c2091f56b5108efb860c24" exitCode=1 Feb 17 15:03:39.099184 master-0 kubenswrapper[8018]: I0217 15:03:39.099161 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 17 15:03:39.099605 master-0 kubenswrapper[8018]: I0217 15:03:39.099552 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"830d120b-fcb6-47ca-a3a0-aa82dc8a3874","Type":"ContainerDied","Data":"5034d83f84ab69660043a048e045ef6d6e8aba1bc5c2091f56b5108efb860c24"} Feb 17 15:03:39.099790 master-0 kubenswrapper[8018]: I0217 15:03:39.099663 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"830d120b-fcb6-47ca-a3a0-aa82dc8a3874","Type":"ContainerDied","Data":"dc7327dc26530c61d96a947638b6ac6c4897aa3d1e3fc71a4f60fd72d1c69c0d"} Feb 17 15:03:39.099790 master-0 kubenswrapper[8018]: I0217 15:03:39.099734 8018 scope.go:117] "RemoveContainer" containerID="5034d83f84ab69660043a048e045ef6d6e8aba1bc5c2091f56b5108efb860c24" Feb 17 15:03:39.101469 master-0 kubenswrapper[8018]: I0217 15:03:39.101071 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6" Feb 17 15:03:39.107334 master-0 kubenswrapper[8018]: I0217 15:03:39.107260 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6" Feb 17 15:03:39.125223 master-0 kubenswrapper[8018]: I0217 15:03:39.124969 8018 scope.go:117] "RemoveContainer" containerID="5034d83f84ab69660043a048e045ef6d6e8aba1bc5c2091f56b5108efb860c24" Feb 17 15:03:39.126471 master-0 kubenswrapper[8018]: E0217 15:03:39.126430 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5034d83f84ab69660043a048e045ef6d6e8aba1bc5c2091f56b5108efb860c24\": container with ID starting with 5034d83f84ab69660043a048e045ef6d6e8aba1bc5c2091f56b5108efb860c24 not found: ID does not exist" containerID="5034d83f84ab69660043a048e045ef6d6e8aba1bc5c2091f56b5108efb860c24" Feb 17 15:03:39.126570 master-0 kubenswrapper[8018]: I0217 15:03:39.126530 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5034d83f84ab69660043a048e045ef6d6e8aba1bc5c2091f56b5108efb860c24"} err="failed to get container status \"5034d83f84ab69660043a048e045ef6d6e8aba1bc5c2091f56b5108efb860c24\": rpc error: code = NotFound desc = could not find container \"5034d83f84ab69660043a048e045ef6d6e8aba1bc5c2091f56b5108efb860c24\": container with ID starting with 5034d83f84ab69660043a048e045ef6d6e8aba1bc5c2091f56b5108efb860c24 not found: ID does not exist" Feb 17 15:03:39.161328 master-0 kubenswrapper[8018]: I0217 15:03:39.161266 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" podStartSLOduration=6.897958214 podStartE2EDuration="21.161237309s" podCreationTimestamp="2026-02-17 15:03:18 +0000 UTC" firstStartedPulling="2026-02-17 15:03:21.271152343 +0000 UTC m=+34.023495393" lastFinishedPulling="2026-02-17 15:03:35.534431438 +0000 UTC m=+48.286774488" observedRunningTime="2026-02-17 15:03:39.130769958 +0000 UTC m=+51.883113008" watchObservedRunningTime="2026-02-17 15:03:39.161237309 +0000 UTC m=+51.913580359" Feb 17 15:03:39.162135 master-0 kubenswrapper[8018]: I0217 15:03:39.162055 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 17 15:03:39.170388 master-0 kubenswrapper[8018]: I0217 15:03:39.169671 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wxhtx"] Feb 17 15:03:39.170388 master-0 kubenswrapper[8018]: I0217 15:03:39.169717 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 17 15:03:39.174041 master-0 kubenswrapper[8018]: W0217 15:03:39.173912 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d317dcb_ea6a_4066_b197_5ee960dec01a.slice/crio-bf4ca08876e89c113fcc009804049d8ec19b6a489b50574b76595b73486b7936 WatchSource:0}: Error finding container bf4ca08876e89c113fcc009804049d8ec19b6a489b50574b76595b73486b7936: Status 404 returned error can't find the container with id bf4ca08876e89c113fcc009804049d8ec19b6a489b50574b76595b73486b7936 Feb 17 15:03:39.221092 master-0 kubenswrapper[8018]: I0217 15:03:39.220878 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xqt6f" Feb 17 15:03:39.420886 master-0 kubenswrapper[8018]: I0217 15:03:39.420834 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xqt6f"] Feb 17 15:03:39.465380 master-0 kubenswrapper[8018]: I0217 15:03:39.465236 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="830d120b-fcb6-47ca-a3a0-aa82dc8a3874" path="/var/lib/kubelet/pods/830d120b-fcb6-47ca-a3a0-aa82dc8a3874/volumes" Feb 17 15:03:40.104162 master-0 kubenswrapper[8018]: I0217 15:03:40.103783 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wxhtx" event={"ID":"8d317dcb-ea6a-4066-b197-5ee960dec01a","Type":"ContainerStarted","Data":"bf4ca08876e89c113fcc009804049d8ec19b6a489b50574b76595b73486b7936"} Feb 17 15:03:40.105586 master-0 kubenswrapper[8018]: I0217 15:03:40.105476 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"03da22e3-956d-4c8a-bfd6-c1778e5d627c","Type":"ContainerStarted","Data":"848358e86030aaad08f0f93cbd72a6dd3c9d1bf771c63059da694d462594c54f"} Feb 17 15:03:40.108570 master-0 kubenswrapper[8018]: I0217 15:03:40.108534 8018 
generic.go:334] "Generic (PLEG): container finished" podID="fa4b45c7-fcd1-483b-97ae-df90a7c06f11" containerID="ddd23c1c0a55e91ca0a9f81dbad6adfbdddc033a3e7f4cb986cfedd2d53a44cf" exitCode=0 Feb 17 15:03:40.108870 master-0 kubenswrapper[8018]: I0217 15:03:40.108841 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqt6f" event={"ID":"fa4b45c7-fcd1-483b-97ae-df90a7c06f11","Type":"ContainerDied","Data":"ddd23c1c0a55e91ca0a9f81dbad6adfbdddc033a3e7f4cb986cfedd2d53a44cf"} Feb 17 15:03:40.108870 master-0 kubenswrapper[8018]: I0217 15:03:40.108866 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqt6f" event={"ID":"fa4b45c7-fcd1-483b-97ae-df90a7c06f11","Type":"ContainerStarted","Data":"d245dd9e77696551e86dbe4d5f0bbdca0c48334efedc1d3bb182430d7757086e"} Feb 17 15:03:40.112414 master-0 kubenswrapper[8018]: I0217 15:03:40.112371 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tzv2h" event={"ID":"aa267e55-eef2-447f-b2ff-57c1ec2930be","Type":"ContainerStarted","Data":"14cca6b4117fc503cb2791e54523ece5478e05a309b8a2652aceebf3a06db904"} Feb 17 15:03:40.113601 master-0 kubenswrapper[8018]: I0217 15:03:40.113557 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6" podUID="ec3c02d7-1607-4305-9380-ba8fc6018b60" containerName="controller-manager" containerID="cri-o://60a357860a4bf6848914cb16ba4e2389f439f69e27bc7ca67dd28f0f1be9934b" gracePeriod=30 Feb 17 15:03:40.126617 master-0 kubenswrapper[8018]: I0217 15:03:40.125937 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=2.125917406 podStartE2EDuration="2.125917406s" podCreationTimestamp="2026-02-17 15:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-17 15:03:40.123326214 +0000 UTC m=+52.875669274" watchObservedRunningTime="2026-02-17 15:03:40.125917406 +0000 UTC m=+52.878260446" Feb 17 15:03:40.171684 master-0 kubenswrapper[8018]: I0217 15:03:40.171591 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tzv2h" podStartSLOduration=2.171572831 podStartE2EDuration="2.171572831s" podCreationTimestamp="2026-02-17 15:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:03:40.16943729 +0000 UTC m=+52.921780360" watchObservedRunningTime="2026-02-17 15:03:40.171572831 +0000 UTC m=+52.923915891" Feb 17 15:03:40.267537 master-0 kubenswrapper[8018]: I0217 15:03:40.266747 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-662mc"] Feb 17 15:03:40.267834 master-0 kubenswrapper[8018]: I0217 15:03:40.267676 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-662mc" Feb 17 15:03:40.281346 master-0 kubenswrapper[8018]: I0217 15:03:40.281288 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-662mc"] Feb 17 15:03:40.392532 master-0 kubenswrapper[8018]: I0217 15:03:40.392486 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwbkk\" (UniqueName: \"kubernetes.io/projected/6cee363d-411b-42ae-8f9f-cfaac068d992-kube-api-access-gwbkk\") pod \"community-operators-662mc\" (UID: \"6cee363d-411b-42ae-8f9f-cfaac068d992\") " pod="openshift-marketplace/community-operators-662mc" Feb 17 15:03:40.392695 master-0 kubenswrapper[8018]: I0217 15:03:40.392550 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cee363d-411b-42ae-8f9f-cfaac068d992-utilities\") pod \"community-operators-662mc\" (UID: \"6cee363d-411b-42ae-8f9f-cfaac068d992\") " pod="openshift-marketplace/community-operators-662mc" Feb 17 15:03:40.392695 master-0 kubenswrapper[8018]: I0217 15:03:40.392609 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cee363d-411b-42ae-8f9f-cfaac068d992-catalog-content\") pod \"community-operators-662mc\" (UID: \"6cee363d-411b-42ae-8f9f-cfaac068d992\") " pod="openshift-marketplace/community-operators-662mc" Feb 17 15:03:40.493634 master-0 kubenswrapper[8018]: I0217 15:03:40.493586 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cee363d-411b-42ae-8f9f-cfaac068d992-catalog-content\") pod \"community-operators-662mc\" (UID: \"6cee363d-411b-42ae-8f9f-cfaac068d992\") " pod="openshift-marketplace/community-operators-662mc" Feb 17 15:03:40.493857 master-0 
kubenswrapper[8018]: I0217 15:03:40.493668 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwbkk\" (UniqueName: \"kubernetes.io/projected/6cee363d-411b-42ae-8f9f-cfaac068d992-kube-api-access-gwbkk\") pod \"community-operators-662mc\" (UID: \"6cee363d-411b-42ae-8f9f-cfaac068d992\") " pod="openshift-marketplace/community-operators-662mc" Feb 17 15:03:40.493857 master-0 kubenswrapper[8018]: I0217 15:03:40.493697 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cee363d-411b-42ae-8f9f-cfaac068d992-utilities\") pod \"community-operators-662mc\" (UID: \"6cee363d-411b-42ae-8f9f-cfaac068d992\") " pod="openshift-marketplace/community-operators-662mc" Feb 17 15:03:40.494425 master-0 kubenswrapper[8018]: I0217 15:03:40.494114 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cee363d-411b-42ae-8f9f-cfaac068d992-utilities\") pod \"community-operators-662mc\" (UID: \"6cee363d-411b-42ae-8f9f-cfaac068d992\") " pod="openshift-marketplace/community-operators-662mc" Feb 17 15:03:40.494509 master-0 kubenswrapper[8018]: I0217 15:03:40.494470 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cee363d-411b-42ae-8f9f-cfaac068d992-catalog-content\") pod \"community-operators-662mc\" (UID: \"6cee363d-411b-42ae-8f9f-cfaac068d992\") " pod="openshift-marketplace/community-operators-662mc" Feb 17 15:03:40.519839 master-0 kubenswrapper[8018]: I0217 15:03:40.519787 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwbkk\" (UniqueName: \"kubernetes.io/projected/6cee363d-411b-42ae-8f9f-cfaac068d992-kube-api-access-gwbkk\") pod \"community-operators-662mc\" (UID: \"6cee363d-411b-42ae-8f9f-cfaac068d992\") " pod="openshift-marketplace/community-operators-662mc" Feb 17 
15:03:40.592019 master-0 kubenswrapper[8018]: I0217 15:03:40.591957 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:40.592198 master-0 kubenswrapper[8018]: I0217 15:03:40.592045 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-662mc" Feb 17 15:03:40.592616 master-0 kubenswrapper[8018]: I0217 15:03:40.592582 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: I0217 15:03:40.597991 8018 patch_prober.go:28] interesting pod/apiserver-6bd884947c-tdlbn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: [+]log ok Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: [+]etcd ok Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: [+]poststarthook/project.openshift.io-projectcache ok Feb 17 15:03:40.598041 master-0 
kubenswrapper[8018]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: [+]poststarthook/openshift.io-startinformers ok Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 17 15:03:40.598041 master-0 kubenswrapper[8018]: livez check failed Feb 17 15:03:40.598608 master-0 kubenswrapper[8018]: I0217 15:03:40.598049 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" podUID="1d481a79-f565-4c7f-84cc-207fc3117c23" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:03:41.126197 master-0 kubenswrapper[8018]: I0217 15:03:41.125935 8018 generic.go:334] "Generic (PLEG): container finished" podID="ec3c02d7-1607-4305-9380-ba8fc6018b60" containerID="60a357860a4bf6848914cb16ba4e2389f439f69e27bc7ca67dd28f0f1be9934b" exitCode=0 Feb 17 15:03:41.126197 master-0 kubenswrapper[8018]: I0217 15:03:41.126007 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6" event={"ID":"ec3c02d7-1607-4305-9380-ba8fc6018b60","Type":"ContainerDied","Data":"60a357860a4bf6848914cb16ba4e2389f439f69e27bc7ca67dd28f0f1be9934b"} Feb 17 15:03:41.264018 master-0 kubenswrapper[8018]: I0217 15:03:41.263320 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sft6r"] Feb 17 15:03:41.264825 master-0 kubenswrapper[8018]: I0217 15:03:41.264526 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sft6r" Feb 17 15:03:41.281758 master-0 kubenswrapper[8018]: I0217 15:03:41.281696 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sft6r"] Feb 17 15:03:41.421154 master-0 kubenswrapper[8018]: I0217 15:03:41.421039 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2994de0-1535-423a-90ce-019043cd4b9d-utilities\") pod \"redhat-marketplace-sft6r\" (UID: \"e2994de0-1535-423a-90ce-019043cd4b9d\") " pod="openshift-marketplace/redhat-marketplace-sft6r" Feb 17 15:03:41.421154 master-0 kubenswrapper[8018]: I0217 15:03:41.421100 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2994de0-1535-423a-90ce-019043cd4b9d-catalog-content\") pod \"redhat-marketplace-sft6r\" (UID: \"e2994de0-1535-423a-90ce-019043cd4b9d\") " pod="openshift-marketplace/redhat-marketplace-sft6r" Feb 17 15:03:41.421154 master-0 kubenswrapper[8018]: I0217 15:03:41.421123 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xz5w\" (UniqueName: \"kubernetes.io/projected/e2994de0-1535-423a-90ce-019043cd4b9d-kube-api-access-4xz5w\") pod \"redhat-marketplace-sft6r\" (UID: \"e2994de0-1535-423a-90ce-019043cd4b9d\") " pod="openshift-marketplace/redhat-marketplace-sft6r" Feb 17 15:03:41.522757 master-0 kubenswrapper[8018]: I0217 15:03:41.522368 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2994de0-1535-423a-90ce-019043cd4b9d-utilities\") pod \"redhat-marketplace-sft6r\" (UID: \"e2994de0-1535-423a-90ce-019043cd4b9d\") " pod="openshift-marketplace/redhat-marketplace-sft6r" Feb 17 15:03:41.522757 master-0 kubenswrapper[8018]: I0217 
15:03:41.522534 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2994de0-1535-423a-90ce-019043cd4b9d-catalog-content\") pod \"redhat-marketplace-sft6r\" (UID: \"e2994de0-1535-423a-90ce-019043cd4b9d\") " pod="openshift-marketplace/redhat-marketplace-sft6r" Feb 17 15:03:41.522757 master-0 kubenswrapper[8018]: I0217 15:03:41.522561 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xz5w\" (UniqueName: \"kubernetes.io/projected/e2994de0-1535-423a-90ce-019043cd4b9d-kube-api-access-4xz5w\") pod \"redhat-marketplace-sft6r\" (UID: \"e2994de0-1535-423a-90ce-019043cd4b9d\") " pod="openshift-marketplace/redhat-marketplace-sft6r" Feb 17 15:03:41.523056 master-0 kubenswrapper[8018]: I0217 15:03:41.523023 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2994de0-1535-423a-90ce-019043cd4b9d-catalog-content\") pod \"redhat-marketplace-sft6r\" (UID: \"e2994de0-1535-423a-90ce-019043cd4b9d\") " pod="openshift-marketplace/redhat-marketplace-sft6r" Feb 17 15:03:41.523125 master-0 kubenswrapper[8018]: I0217 15:03:41.523093 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2994de0-1535-423a-90ce-019043cd4b9d-utilities\") pod \"redhat-marketplace-sft6r\" (UID: \"e2994de0-1535-423a-90ce-019043cd4b9d\") " pod="openshift-marketplace/redhat-marketplace-sft6r" Feb 17 15:03:41.548247 master-0 kubenswrapper[8018]: I0217 15:03:41.548184 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xz5w\" (UniqueName: \"kubernetes.io/projected/e2994de0-1535-423a-90ce-019043cd4b9d-kube-api-access-4xz5w\") pod \"redhat-marketplace-sft6r\" (UID: \"e2994de0-1535-423a-90ce-019043cd4b9d\") " pod="openshift-marketplace/redhat-marketplace-sft6r" Feb 17 15:03:41.595765 master-0 
kubenswrapper[8018]: I0217 15:03:41.595642 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sft6r" Feb 17 15:03:41.678563 master-0 kubenswrapper[8018]: I0217 15:03:41.674981 8018 patch_prober.go:28] interesting pod/controller-manager-67d67c799d-b9bj6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.38:8443/healthz\": dial tcp 10.128.0.38:8443: connect: connection refused" start-of-body= Feb 17 15:03:41.678563 master-0 kubenswrapper[8018]: I0217 15:03:41.675053 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6" podUID="ec3c02d7-1607-4305-9380-ba8fc6018b60" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.38:8443/healthz\": dial tcp 10.128.0.38:8443: connect: connection refused" Feb 17 15:03:43.138145 master-0 kubenswrapper[8018]: I0217 15:03:43.138110 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6" event={"ID":"ec3c02d7-1607-4305-9380-ba8fc6018b60","Type":"ContainerDied","Data":"e62b0eab173cca0600011cfea6f70c094301da812d91a988a342957fc65633d6"} Feb 17 15:03:43.138658 master-0 kubenswrapper[8018]: I0217 15:03:43.138150 8018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e62b0eab173cca0600011cfea6f70c094301da812d91a988a342957fc65633d6" Feb 17 15:03:43.287830 master-0 kubenswrapper[8018]: I0217 15:03:43.287764 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6" Feb 17 15:03:43.456340 master-0 kubenswrapper[8018]: I0217 15:03:43.456132 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-client-ca\") pod \"ec3c02d7-1607-4305-9380-ba8fc6018b60\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " Feb 17 15:03:43.456340 master-0 kubenswrapper[8018]: I0217 15:03:43.456190 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec3c02d7-1607-4305-9380-ba8fc6018b60-serving-cert\") pod \"ec3c02d7-1607-4305-9380-ba8fc6018b60\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " Feb 17 15:03:43.456340 master-0 kubenswrapper[8018]: I0217 15:03:43.456236 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxh4f\" (UniqueName: \"kubernetes.io/projected/ec3c02d7-1607-4305-9380-ba8fc6018b60-kube-api-access-fxh4f\") pod \"ec3c02d7-1607-4305-9380-ba8fc6018b60\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " Feb 17 15:03:43.456340 master-0 kubenswrapper[8018]: I0217 15:03:43.456287 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-proxy-ca-bundles\") pod \"ec3c02d7-1607-4305-9380-ba8fc6018b60\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " Feb 17 15:03:43.456511 master-0 kubenswrapper[8018]: I0217 15:03:43.456379 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-config\") pod \"ec3c02d7-1607-4305-9380-ba8fc6018b60\" (UID: \"ec3c02d7-1607-4305-9380-ba8fc6018b60\") " Feb 17 15:03:43.457160 master-0 kubenswrapper[8018]: I0217 15:03:43.457110 8018 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ec3c02d7-1607-4305-9380-ba8fc6018b60" (UID: "ec3c02d7-1607-4305-9380-ba8fc6018b60"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:03:43.457199 master-0 kubenswrapper[8018]: I0217 15:03:43.457164 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-config" (OuterVolumeSpecName: "config") pod "ec3c02d7-1607-4305-9380-ba8fc6018b60" (UID: "ec3c02d7-1607-4305-9380-ba8fc6018b60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:03:43.458539 master-0 kubenswrapper[8018]: I0217 15:03:43.458514 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-client-ca" (OuterVolumeSpecName: "client-ca") pod "ec3c02d7-1607-4305-9380-ba8fc6018b60" (UID: "ec3c02d7-1607-4305-9380-ba8fc6018b60"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:03:43.460349 master-0 kubenswrapper[8018]: I0217 15:03:43.460315 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3c02d7-1607-4305-9380-ba8fc6018b60-kube-api-access-fxh4f" (OuterVolumeSpecName: "kube-api-access-fxh4f") pod "ec3c02d7-1607-4305-9380-ba8fc6018b60" (UID: "ec3c02d7-1607-4305-9380-ba8fc6018b60"). InnerVolumeSpecName "kube-api-access-fxh4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:03:43.461163 master-0 kubenswrapper[8018]: I0217 15:03:43.461086 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3c02d7-1607-4305-9380-ba8fc6018b60-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ec3c02d7-1607-4305-9380-ba8fc6018b60" (UID: "ec3c02d7-1607-4305-9380-ba8fc6018b60"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:03:43.526173 master-0 kubenswrapper[8018]: I0217 15:03:43.526138 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"] Feb 17 15:03:43.526329 master-0 kubenswrapper[8018]: E0217 15:03:43.526304 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3c02d7-1607-4305-9380-ba8fc6018b60" containerName="controller-manager" Feb 17 15:03:43.526329 master-0 kubenswrapper[8018]: I0217 15:03:43.526314 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3c02d7-1607-4305-9380-ba8fc6018b60" containerName="controller-manager" Feb 17 15:03:43.526449 master-0 kubenswrapper[8018]: I0217 15:03:43.526403 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3c02d7-1607-4305-9380-ba8fc6018b60" containerName="controller-manager" Feb 17 15:03:43.526719 master-0 kubenswrapper[8018]: I0217 15:03:43.526705 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.557484 master-0 kubenswrapper[8018]: I0217 15:03:43.557435 8018 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:43.557484 master-0 kubenswrapper[8018]: I0217 15:03:43.557486 8018 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:43.557690 master-0 kubenswrapper[8018]: I0217 15:03:43.557498 8018 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec3c02d7-1607-4305-9380-ba8fc6018b60-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:43.557690 master-0 kubenswrapper[8018]: I0217 15:03:43.557511 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxh4f\" (UniqueName: \"kubernetes.io/projected/ec3c02d7-1607-4305-9380-ba8fc6018b60-kube-api-access-fxh4f\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:43.557690 master-0 kubenswrapper[8018]: I0217 15:03:43.557523 8018 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3c02d7-1607-4305-9380-ba8fc6018b60-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 17 15:03:43.594524 master-0 kubenswrapper[8018]: I0217 15:03:43.594487 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"] Feb 17 15:03:43.597131 master-0 kubenswrapper[8018]: I0217 15:03:43.597075 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sft6r"] Feb 17 15:03:43.658945 master-0 kubenswrapper[8018]: I0217 15:03:43.658899 8018 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-serving-cert\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.658945 master-0 kubenswrapper[8018]: I0217 15:03:43.658959 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spcf4\" (UniqueName: \"kubernetes.io/projected/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-kube-api-access-spcf4\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.659163 master-0 kubenswrapper[8018]: I0217 15:03:43.659026 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-config\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.659163 master-0 kubenswrapper[8018]: I0217 15:03:43.659054 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-proxy-ca-bundles\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.659163 master-0 kubenswrapper[8018]: I0217 15:03:43.659082 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-client-ca\") pod 
\"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.748525 master-0 kubenswrapper[8018]: I0217 15:03:43.748480 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-662mc"] Feb 17 15:03:43.759982 master-0 kubenswrapper[8018]: I0217 15:03:43.759857 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-serving-cert\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.759982 master-0 kubenswrapper[8018]: I0217 15:03:43.759911 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spcf4\" (UniqueName: \"kubernetes.io/projected/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-kube-api-access-spcf4\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.759982 master-0 kubenswrapper[8018]: I0217 15:03:43.759974 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-config\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.760809 master-0 kubenswrapper[8018]: I0217 15:03:43.760750 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-proxy-ca-bundles\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " 
pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.760858 master-0 kubenswrapper[8018]: I0217 15:03:43.760832 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-client-ca\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.761895 master-0 kubenswrapper[8018]: I0217 15:03:43.761855 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-client-ca\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.763325 master-0 kubenswrapper[8018]: I0217 15:03:43.763278 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-proxy-ca-bundles\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.765229 master-0 kubenswrapper[8018]: I0217 15:03:43.765175 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-config\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.766592 master-0 kubenswrapper[8018]: I0217 15:03:43.766555 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-serving-cert\") pod 
\"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.769186 master-0 kubenswrapper[8018]: W0217 15:03:43.769153 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cee363d_411b_42ae_8f9f_cfaac068d992.slice/crio-4d2b16ff594ab4bf07b15d7bdb6d613459bd6402bd17141af1161c76a52e5907 WatchSource:0}: Error finding container 4d2b16ff594ab4bf07b15d7bdb6d613459bd6402bd17141af1161c76a52e5907: Status 404 returned error can't find the container with id 4d2b16ff594ab4bf07b15d7bdb6d613459bd6402bd17141af1161c76a52e5907 Feb 17 15:03:43.794166 master-0 kubenswrapper[8018]: I0217 15:03:43.793697 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spcf4\" (UniqueName: \"kubernetes.io/projected/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-kube-api-access-spcf4\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:43.903600 master-0 kubenswrapper[8018]: I0217 15:03:43.903510 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:44.147478 master-0 kubenswrapper[8018]: I0217 15:03:44.147249 8018 generic.go:334] "Generic (PLEG): container finished" podID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerID="cb6f158d1d6f36179663edca7ac4c45ccbc5d1b74a343aa83cc519a613a49048" exitCode=0 Feb 17 15:03:44.147478 master-0 kubenswrapper[8018]: I0217 15:03:44.147314 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" event={"ID":"124ba199-b79a-4e5c-8512-cc0ae50f73c8","Type":"ContainerDied","Data":"cb6f158d1d6f36179663edca7ac4c45ccbc5d1b74a343aa83cc519a613a49048"} Feb 17 15:03:44.149340 master-0 kubenswrapper[8018]: I0217 15:03:44.149305 8018 generic.go:334] "Generic (PLEG): container finished" podID="e2994de0-1535-423a-90ce-019043cd4b9d" containerID="e22c9e4ff50fff7f30ce4313d0cf122ce03bb31165ed1b55aded6fbdead13ac7" exitCode=0 Feb 17 15:03:44.149562 master-0 kubenswrapper[8018]: I0217 15:03:44.149388 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sft6r" event={"ID":"e2994de0-1535-423a-90ce-019043cd4b9d","Type":"ContainerDied","Data":"e22c9e4ff50fff7f30ce4313d0cf122ce03bb31165ed1b55aded6fbdead13ac7"} Feb 17 15:03:44.149562 master-0 kubenswrapper[8018]: I0217 15:03:44.149411 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sft6r" event={"ID":"e2994de0-1535-423a-90ce-019043cd4b9d","Type":"ContainerStarted","Data":"b1c523b9713fa7186f27a3debf3937c0f49ce44756f46b9804b47f1c69239b70"} Feb 17 15:03:44.154129 master-0 kubenswrapper[8018]: I0217 15:03:44.153986 8018 generic.go:334] "Generic (PLEG): container finished" podID="6cee363d-411b-42ae-8f9f-cfaac068d992" containerID="5d9b6a180c58e9f4d3551ff59a04c354a85779518bac69727c371d488333fa01" exitCode=0 Feb 17 15:03:44.154747 master-0 kubenswrapper[8018]: I0217 15:03:44.154701 8018 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-662mc" event={"ID":"6cee363d-411b-42ae-8f9f-cfaac068d992","Type":"ContainerDied","Data":"5d9b6a180c58e9f4d3551ff59a04c354a85779518bac69727c371d488333fa01"} Feb 17 15:03:44.154806 master-0 kubenswrapper[8018]: I0217 15:03:44.154756 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-662mc" event={"ID":"6cee363d-411b-42ae-8f9f-cfaac068d992","Type":"ContainerStarted","Data":"4d2b16ff594ab4bf07b15d7bdb6d613459bd6402bd17141af1161c76a52e5907"} Feb 17 15:03:44.205029 master-0 kubenswrapper[8018]: I0217 15:03:44.187742 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67d67c799d-b9bj6" Feb 17 15:03:44.205029 master-0 kubenswrapper[8018]: I0217 15:03:44.189665 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" podUID="d65336fb-5671-4f5b-a5ff-9000eed0fdd3" containerName="route-controller-manager" containerID="cri-o://c188c7bd6b9187d110fe690a9041ea1174a1f6faa06d1653091eb88c9dc77813" gracePeriod=30 Feb 17 15:03:44.205029 master-0 kubenswrapper[8018]: I0217 15:03:44.190139 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" event={"ID":"d65336fb-5671-4f5b-a5ff-9000eed0fdd3","Type":"ContainerStarted","Data":"c188c7bd6b9187d110fe690a9041ea1174a1f6faa06d1653091eb88c9dc77813"} Feb 17 15:03:44.205029 master-0 kubenswrapper[8018]: I0217 15:03:44.190173 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:44.219030 master-0 kubenswrapper[8018]: I0217 15:03:44.218984 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:44.601796 master-0 kubenswrapper[8018]: I0217 15:03:44.600830 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"] Feb 17 15:03:44.675836 master-0 kubenswrapper[8018]: I0217 15:03:44.674929 8018 patch_prober.go:28] interesting pod/route-controller-manager-6965bd7478-x8mdg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.40:8443/healthz\": dial tcp 10.128.0.40:8443: connect: connection refused" start-of-body= Feb 17 15:03:44.675836 master-0 kubenswrapper[8018]: I0217 15:03:44.675005 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" podUID="d65336fb-5671-4f5b-a5ff-9000eed0fdd3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.40:8443/healthz\": dial tcp 10.128.0.40:8443: connect: connection refused" Feb 17 15:03:44.907151 master-0 kubenswrapper[8018]: I0217 15:03:44.905954 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" podStartSLOduration=19.808619085 podStartE2EDuration="25.905934665s" podCreationTimestamp="2026-02-17 15:03:19 +0000 UTC" firstStartedPulling="2026-02-17 15:03:37.001324796 +0000 UTC m=+49.753667846" lastFinishedPulling="2026-02-17 15:03:43.098640376 +0000 UTC m=+55.850983426" observedRunningTime="2026-02-17 15:03:44.812823623 +0000 UTC m=+57.565166693" watchObservedRunningTime="2026-02-17 15:03:44.905934665 +0000 UTC m=+57.658277715" Feb 17 15:03:44.908875 master-0 kubenswrapper[8018]: I0217 15:03:44.908841 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67d67c799d-b9bj6"] Feb 17 15:03:44.944726 
master-0 kubenswrapper[8018]: I0217 15:03:44.944121 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67d67c799d-b9bj6"] Feb 17 15:03:45.098701 master-0 kubenswrapper[8018]: I0217 15:03:45.098667 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:03:45.202828 master-0 kubenswrapper[8018]: I0217 15:03:45.202706 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" event={"ID":"124ba199-b79a-4e5c-8512-cc0ae50f73c8","Type":"ContainerStarted","Data":"da09e4a5b3dba77dbd04689a11e6d73f307ccd2ac6de0aff2e732163788d68b5"} Feb 17 15:03:45.206697 master-0 kubenswrapper[8018]: I0217 15:03:45.206645 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wxhtx" event={"ID":"8d317dcb-ea6a-4066-b197-5ee960dec01a","Type":"ContainerStarted","Data":"6f5ae2879d8c249991c1e3cbe876cd864d9948edb65b2c024c9178d5fc720b58"} Feb 17 15:03:45.206779 master-0 kubenswrapper[8018]: I0217 15:03:45.206698 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wxhtx" event={"ID":"8d317dcb-ea6a-4066-b197-5ee960dec01a","Type":"ContainerStarted","Data":"e85d906d78a26bf0839602b36854287ede9d637d0265fe4283ce00933d95aca5"} Feb 17 15:03:45.206812 master-0 kubenswrapper[8018]: I0217 15:03:45.206800 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wxhtx" Feb 17 15:03:45.210218 master-0 kubenswrapper[8018]: I0217 15:03:45.210182 8018 generic.go:334] "Generic (PLEG): container finished" podID="d65336fb-5671-4f5b-a5ff-9000eed0fdd3" containerID="c188c7bd6b9187d110fe690a9041ea1174a1f6faa06d1653091eb88c9dc77813" exitCode=0 Feb 17 15:03:45.210293 master-0 kubenswrapper[8018]: I0217 15:03:45.210254 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" event={"ID":"d65336fb-5671-4f5b-a5ff-9000eed0fdd3","Type":"ContainerDied","Data":"c188c7bd6b9187d110fe690a9041ea1174a1f6faa06d1653091eb88c9dc77813"} Feb 17 15:03:45.213824 master-0 kubenswrapper[8018]: I0217 15:03:45.213781 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" event={"ID":"e6d0ea7a-6784-4c13-ad65-6c947dbcf136","Type":"ContainerStarted","Data":"3b54e0904c922403e7243ecec6e01879618fe54346e8502751862a4c275c3a59"} Feb 17 15:03:45.213895 master-0 kubenswrapper[8018]: I0217 15:03:45.213834 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" event={"ID":"e6d0ea7a-6784-4c13-ad65-6c947dbcf136","Type":"ContainerStarted","Data":"16817c879758d5dca93902f6417f76df9adc387ff018e7fa4b42bb730dfe7417"} Feb 17 15:03:45.214639 master-0 kubenswrapper[8018]: I0217 15:03:45.214610 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:45.219413 master-0 kubenswrapper[8018]: I0217 15:03:45.219377 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:03:45.447612 master-0 kubenswrapper[8018]: I0217 15:03:45.447540 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3c02d7-1607-4305-9380-ba8fc6018b60" path="/var/lib/kubelet/pods/ec3c02d7-1607-4305-9380-ba8fc6018b60/volumes" Feb 17 15:03:45.501655 master-0 kubenswrapper[8018]: I0217 15:03:45.501475 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podStartSLOduration=14.413113906 podStartE2EDuration="20.501433641s" podCreationTimestamp="2026-02-17 15:03:25 +0000 UTC" firstStartedPulling="2026-02-17 
15:03:37.007855462 +0000 UTC m=+49.760198502" lastFinishedPulling="2026-02-17 15:03:43.096175187 +0000 UTC m=+55.848518237" observedRunningTime="2026-02-17 15:03:45.496066034 +0000 UTC m=+58.248409084" watchObservedRunningTime="2026-02-17 15:03:45.501433641 +0000 UTC m=+58.253776691" Feb 17 15:03:45.533501 master-0 kubenswrapper[8018]: I0217 15:03:45.532161 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wxhtx" podStartSLOduration=3.877369833 podStartE2EDuration="8.532138398s" podCreationTimestamp="2026-02-17 15:03:37 +0000 UTC" firstStartedPulling="2026-02-17 15:03:39.181423043 +0000 UTC m=+51.933766093" lastFinishedPulling="2026-02-17 15:03:43.836191608 +0000 UTC m=+56.588534658" observedRunningTime="2026-02-17 15:03:45.53138054 +0000 UTC m=+58.283723610" watchObservedRunningTime="2026-02-17 15:03:45.532138398 +0000 UTC m=+58.284481448" Feb 17 15:03:45.563159 master-0 kubenswrapper[8018]: I0217 15:03:45.563078 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podStartSLOduration=7.563056859 podStartE2EDuration="7.563056859s" podCreationTimestamp="2026-02-17 15:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:03:45.561544913 +0000 UTC m=+58.313887973" watchObservedRunningTime="2026-02-17 15:03:45.563056859 +0000 UTC m=+58.315399909" Feb 17 15:03:45.605594 master-0 kubenswrapper[8018]: I0217 15:03:45.605543 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:45.614654 master-0 kubenswrapper[8018]: I0217 15:03:45.614618 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:03:45.995394 master-0 kubenswrapper[8018]: I0217 15:03:45.995318 8018 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:45.997994 master-0 kubenswrapper[8018]: I0217 15:03:45.997870 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:46.111600 master-0 kubenswrapper[8018]: I0217 15:03:46.111513 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:46.225627 master-0 kubenswrapper[8018]: I0217 15:03:46.225589 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:03:48.233087 master-0 kubenswrapper[8018]: I0217 15:03:48.232875 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_2227cd78-2ca2-4a57-90cf-9bccb1a7fb96/installer/0.log" Feb 17 15:03:48.233512 master-0 kubenswrapper[8018]: I0217 15:03:48.233059 8018 generic.go:334] "Generic (PLEG): container finished" podID="2227cd78-2ca2-4a57-90cf-9bccb1a7fb96" containerID="ebd1e02590d930a55bd73b8292b9b1ea795c71f1b5084718d3a86a771e618ddd" exitCode=1 Feb 17 15:03:48.233512 master-0 kubenswrapper[8018]: I0217 15:03:48.233222 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96","Type":"ContainerDied","Data":"ebd1e02590d930a55bd73b8292b9b1ea795c71f1b5084718d3a86a771e618ddd"} Feb 17 15:03:48.372388 master-0 kubenswrapper[8018]: I0217 15:03:48.372346 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" Feb 17 15:03:48.448716 master-0 kubenswrapper[8018]: I0217 15:03:48.448652 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-client-ca\") pod \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " Feb 17 15:03:48.448914 master-0 kubenswrapper[8018]: I0217 15:03:48.448732 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-serving-cert\") pod \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " Feb 17 15:03:48.448914 master-0 kubenswrapper[8018]: I0217 15:03:48.448874 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkrdq\" (UniqueName: \"kubernetes.io/projected/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-kube-api-access-kkrdq\") pod \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " Feb 17 15:03:48.448914 master-0 kubenswrapper[8018]: I0217 15:03:48.448906 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-config\") pod \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\" (UID: \"d65336fb-5671-4f5b-a5ff-9000eed0fdd3\") " Feb 17 15:03:48.449405 master-0 kubenswrapper[8018]: I0217 15:03:48.449332 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-client-ca" (OuterVolumeSpecName: "client-ca") pod "d65336fb-5671-4f5b-a5ff-9000eed0fdd3" (UID: "d65336fb-5671-4f5b-a5ff-9000eed0fdd3"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:03:48.449509 master-0 kubenswrapper[8018]: I0217 15:03:48.449412 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-config" (OuterVolumeSpecName: "config") pod "d65336fb-5671-4f5b-a5ff-9000eed0fdd3" (UID: "d65336fb-5671-4f5b-a5ff-9000eed0fdd3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:03:48.453763 master-0 kubenswrapper[8018]: I0217 15:03:48.453717 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d65336fb-5671-4f5b-a5ff-9000eed0fdd3" (UID: "d65336fb-5671-4f5b-a5ff-9000eed0fdd3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:03:48.455325 master-0 kubenswrapper[8018]: I0217 15:03:48.455272 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-kube-api-access-kkrdq" (OuterVolumeSpecName: "kube-api-access-kkrdq") pod "d65336fb-5671-4f5b-a5ff-9000eed0fdd3" (UID: "d65336fb-5671-4f5b-a5ff-9000eed0fdd3"). InnerVolumeSpecName "kube-api-access-kkrdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:03:48.550304 master-0 kubenswrapper[8018]: I0217 15:03:48.550259 8018 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 17 15:03:48.550304 master-0 kubenswrapper[8018]: I0217 15:03:48.550300 8018 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 17 15:03:48.550304 master-0 kubenswrapper[8018]: I0217 15:03:48.550311 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkrdq\" (UniqueName: \"kubernetes.io/projected/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-kube-api-access-kkrdq\") on node \"master-0\" DevicePath \"\""
Feb 17 15:03:48.550629 master-0 kubenswrapper[8018]: I0217 15:03:48.550321 8018 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d65336fb-5671-4f5b-a5ff-9000eed0fdd3-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:03:48.723721 master-0 kubenswrapper[8018]: I0217 15:03:48.723114 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_2227cd78-2ca2-4a57-90cf-9bccb1a7fb96/installer/0.log"
Feb 17 15:03:48.723721 master-0 kubenswrapper[8018]: I0217 15:03:48.723372 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 17 15:03:48.725595 master-0 kubenswrapper[8018]: I0217 15:03:48.725560 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"]
Feb 17 15:03:48.725886 master-0 kubenswrapper[8018]: E0217 15:03:48.725874 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2227cd78-2ca2-4a57-90cf-9bccb1a7fb96" containerName="installer"
Feb 17 15:03:48.725955 master-0 kubenswrapper[8018]: I0217 15:03:48.725946 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="2227cd78-2ca2-4a57-90cf-9bccb1a7fb96" containerName="installer"
Feb 17 15:03:48.726025 master-0 kubenswrapper[8018]: E0217 15:03:48.726016 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d65336fb-5671-4f5b-a5ff-9000eed0fdd3" containerName="route-controller-manager"
Feb 17 15:03:48.726081 master-0 kubenswrapper[8018]: I0217 15:03:48.726072 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65336fb-5671-4f5b-a5ff-9000eed0fdd3" containerName="route-controller-manager"
Feb 17 15:03:48.726385 master-0 kubenswrapper[8018]: I0217 15:03:48.726373 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="2227cd78-2ca2-4a57-90cf-9bccb1a7fb96" containerName="installer"
Feb 17 15:03:48.726475 master-0 kubenswrapper[8018]: I0217 15:03:48.726447 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="d65336fb-5671-4f5b-a5ff-9000eed0fdd3" containerName="route-controller-manager"
Feb 17 15:03:48.727116 master-0 kubenswrapper[8018]: I0217 15:03:48.727103 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:03:48.817437 master-0 kubenswrapper[8018]: I0217 15:03:48.817297 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"]
Feb 17 15:03:48.855967 master-0 kubenswrapper[8018]: I0217 15:03:48.855912 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-var-lock" (OuterVolumeSpecName: "var-lock") pod "2227cd78-2ca2-4a57-90cf-9bccb1a7fb96" (UID: "2227cd78-2ca2-4a57-90cf-9bccb1a7fb96"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:03:48.855967 master-0 kubenswrapper[8018]: I0217 15:03:48.855868 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-var-lock\") pod \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\" (UID: \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\") "
Feb 17 15:03:48.856214 master-0 kubenswrapper[8018]: I0217 15:03:48.855994 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-kubelet-dir\") pod \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\" (UID: \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\") "
Feb 17 15:03:48.856214 master-0 kubenswrapper[8018]: I0217 15:03:48.856028 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-kube-api-access\") pod \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\" (UID: \"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96\") "
Feb 17 15:03:48.856297 master-0 kubenswrapper[8018]: I0217 15:03:48.856261 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2227cd78-2ca2-4a57-90cf-9bccb1a7fb96" (UID: "2227cd78-2ca2-4a57-90cf-9bccb1a7fb96"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:03:48.856941 master-0 kubenswrapper[8018]: I0217 15:03:48.856892 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrg27\" (UniqueName: \"kubernetes.io/projected/3db03cef-d297-4bf7-8e52-dd0b18882d07-kube-api-access-xrg27\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:03:48.857052 master-0 kubenswrapper[8018]: I0217 15:03:48.857023 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-client-ca\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:03:48.857150 master-0 kubenswrapper[8018]: I0217 15:03:48.857123 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3db03cef-d297-4bf7-8e52-dd0b18882d07-serving-cert\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:03:48.857674 master-0 kubenswrapper[8018]: I0217 15:03:48.857542 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-config\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:03:48.858102 master-0 kubenswrapper[8018]: I0217 15:03:48.858065 8018 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 17 15:03:48.858317 master-0 kubenswrapper[8018]: I0217 15:03:48.858102 8018 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:03:48.861653 master-0 kubenswrapper[8018]: I0217 15:03:48.861595 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2227cd78-2ca2-4a57-90cf-9bccb1a7fb96" (UID: "2227cd78-2ca2-4a57-90cf-9bccb1a7fb96"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:03:48.962162 master-0 kubenswrapper[8018]: I0217 15:03:48.962085 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-config\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:03:48.962162 master-0 kubenswrapper[8018]: I0217 15:03:48.962165 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrg27\" (UniqueName: \"kubernetes.io/projected/3db03cef-d297-4bf7-8e52-dd0b18882d07-kube-api-access-xrg27\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:03:48.962473 master-0 kubenswrapper[8018]: I0217 15:03:48.962193 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-client-ca\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:03:48.962473 master-0 kubenswrapper[8018]: I0217 15:03:48.962226 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3db03cef-d297-4bf7-8e52-dd0b18882d07-serving-cert\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:03:48.962473 master-0 kubenswrapper[8018]: I0217 15:03:48.962262 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 17 15:03:48.965414 master-0 kubenswrapper[8018]: I0217 15:03:48.965369 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3db03cef-d297-4bf7-8e52-dd0b18882d07-serving-cert\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:03:48.967242 master-0 kubenswrapper[8018]: I0217 15:03:48.967207 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-config\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:03:48.968052 master-0 kubenswrapper[8018]: I0217 15:03:48.968017 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-client-ca\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:03:49.036690 master-0 kubenswrapper[8018]: I0217 15:03:49.036592 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrg27\" (UniqueName: \"kubernetes.io/projected/3db03cef-d297-4bf7-8e52-dd0b18882d07-kube-api-access-xrg27\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:03:49.053244 master-0 kubenswrapper[8018]: I0217 15:03:49.053200 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:03:49.239343 master-0 kubenswrapper[8018]: I0217 15:03:49.239313 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_2227cd78-2ca2-4a57-90cf-9bccb1a7fb96/installer/0.log"
Feb 17 15:03:49.239834 master-0 kubenswrapper[8018]: I0217 15:03:49.239396 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"2227cd78-2ca2-4a57-90cf-9bccb1a7fb96","Type":"ContainerDied","Data":"52af3dfbfc5cbf5ff7b537f9dbc28ea77baac6fc88f6f51de7838f59c0f56ab1"}
Feb 17 15:03:49.239834 master-0 kubenswrapper[8018]: I0217 15:03:49.239472 8018 scope.go:117] "RemoveContainer" containerID="ebd1e02590d930a55bd73b8292b9b1ea795c71f1b5084718d3a86a771e618ddd"
Feb 17 15:03:49.239834 master-0 kubenswrapper[8018]: I0217 15:03:49.239480 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 17 15:03:49.241321 master-0 kubenswrapper[8018]: I0217 15:03:49.241272 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg" event={"ID":"d65336fb-5671-4f5b-a5ff-9000eed0fdd3","Type":"ContainerDied","Data":"6dc82a228f0a8a739e91f0b1e4c181cb28b029622045918a39fd3d324199188b"}
Feb 17 15:03:49.241431 master-0 kubenswrapper[8018]: I0217 15:03:49.241407 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg"
Feb 17 15:03:49.254361 master-0 kubenswrapper[8018]: I0217 15:03:49.254313 8018 scope.go:117] "RemoveContainer" containerID="c188c7bd6b9187d110fe690a9041ea1174a1f6faa06d1653091eb88c9dc77813"
Feb 17 15:03:49.819815 master-0 kubenswrapper[8018]: I0217 15:03:49.818337 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"]
Feb 17 15:03:49.851411 master-0 kubenswrapper[8018]: I0217 15:03:49.851341 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg"]
Feb 17 15:03:49.872974 master-0 kubenswrapper[8018]: I0217 15:03:49.871845 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg"]
Feb 17 15:03:49.878939 master-0 kubenswrapper[8018]: I0217 15:03:49.877667 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 17 15:03:49.888395 master-0 kubenswrapper[8018]: I0217 15:03:49.888348 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 17 15:03:50.248761 master-0 kubenswrapper[8018]: I0217 15:03:50.248687 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerStarted","Data":"0dd6efeec5aa4e3106337fbe40d1f21673b7458663cc20e53895ac682e535656"}
Feb 17 15:03:50.997638 master-0 kubenswrapper[8018]: I0217 15:03:50.997582 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 17 15:03:50.998028 master-0 kubenswrapper[8018]: I0217 15:03:50.997924 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="9f31fcfe-33ed-4e31-a12c-cb344093dcf4" containerName="installer" containerID="cri-o://ebb84869ff87ab53933f534e8072352d2827c34650aa88de3ed7f3c6446e7b63" gracePeriod=30
Feb 17 15:03:51.454604 master-0 kubenswrapper[8018]: I0217 15:03:51.454542 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2227cd78-2ca2-4a57-90cf-9bccb1a7fb96" path="/var/lib/kubelet/pods/2227cd78-2ca2-4a57-90cf-9bccb1a7fb96/volumes"
Feb 17 15:03:51.455480 master-0 kubenswrapper[8018]: I0217 15:03:51.455422 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d65336fb-5671-4f5b-a5ff-9000eed0fdd3" path="/var/lib/kubelet/pods/d65336fb-5671-4f5b-a5ff-9000eed0fdd3/volumes"
Feb 17 15:03:53.813267 master-0 kubenswrapper[8018]: I0217 15:03:53.813213 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"]
Feb 17 15:03:53.813780 master-0 kubenswrapper[8018]: I0217 15:03:53.813469 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" podUID="4be2df82-c77a-4d26-9498-fa3beea54b81" containerName="cluster-version-operator" containerID="cri-o://53695733f72721a1db3f525ebfe99427ae62ce35e93969fd9d5d4881069cc71d" gracePeriod=130
Feb 17 15:03:54.023559 master-0 kubenswrapper[8018]: I0217 15:03:54.023419 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 17 15:03:54.024559 master-0 kubenswrapper[8018]: I0217 15:03:54.024535 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 17 15:03:54.027607 master-0 kubenswrapper[8018]: I0217 15:03:54.027550 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-crrn4"
Feb 17 15:03:54.139321 master-0 kubenswrapper[8018]: I0217 15:03:54.139208 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5655115-c223-42ed-a93d-9d609e55c901-var-lock\") pod \"installer-2-master-0\" (UID: \"d5655115-c223-42ed-a93d-9d609e55c901\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 17 15:03:54.139694 master-0 kubenswrapper[8018]: I0217 15:03:54.139340 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5655115-c223-42ed-a93d-9d609e55c901-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"d5655115-c223-42ed-a93d-9d609e55c901\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 17 15:03:54.139694 master-0 kubenswrapper[8018]: I0217 15:03:54.139409 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5655115-c223-42ed-a93d-9d609e55c901-kube-api-access\") pod \"installer-2-master-0\" (UID: \"d5655115-c223-42ed-a93d-9d609e55c901\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 17 15:03:54.218131 master-0 kubenswrapper[8018]: I0217 15:03:54.216950 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 17 15:03:54.240919 master-0 kubenswrapper[8018]: I0217 15:03:54.240847 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5655115-c223-42ed-a93d-9d609e55c901-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"d5655115-c223-42ed-a93d-9d609e55c901\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 17 15:03:54.240919 master-0 kubenswrapper[8018]: I0217 15:03:54.240906 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5655115-c223-42ed-a93d-9d609e55c901-kube-api-access\") pod \"installer-2-master-0\" (UID: \"d5655115-c223-42ed-a93d-9d609e55c901\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 17 15:03:54.241303 master-0 kubenswrapper[8018]: I0217 15:03:54.241106 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5655115-c223-42ed-a93d-9d609e55c901-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"d5655115-c223-42ed-a93d-9d609e55c901\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 17 15:03:54.241303 master-0 kubenswrapper[8018]: I0217 15:03:54.241158 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5655115-c223-42ed-a93d-9d609e55c901-var-lock\") pod \"installer-2-master-0\" (UID: \"d5655115-c223-42ed-a93d-9d609e55c901\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 17 15:03:54.241303 master-0 kubenswrapper[8018]: I0217 15:03:54.241261 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5655115-c223-42ed-a93d-9d609e55c901-var-lock\") pod \"installer-2-master-0\" (UID: \"d5655115-c223-42ed-a93d-9d609e55c901\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 17 15:03:54.276353 master-0 kubenswrapper[8018]: I0217 15:03:54.276258 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xqt6f"]
Feb 17 15:03:54.278008 master-0 kubenswrapper[8018]: I0217 15:03:54.277980 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-662mc"]
Feb 17 15:03:54.434997 master-0 kubenswrapper[8018]: I0217 15:03:54.434949 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5655115-c223-42ed-a93d-9d609e55c901-kube-api-access\") pod \"installer-2-master-0\" (UID: \"d5655115-c223-42ed-a93d-9d609e55c901\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 17 15:03:54.614318 master-0 kubenswrapper[8018]: I0217 15:03:54.614238 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t8vtc"]
Feb 17 15:03:54.615140 master-0 kubenswrapper[8018]: I0217 15:03:54.615103 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:03:54.615872 master-0 kubenswrapper[8018]: I0217 15:03:54.615832 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2lg56"]
Feb 17 15:03:54.616541 master-0 kubenswrapper[8018]: I0217 15:03:54.616448 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:03:54.625051 master-0 kubenswrapper[8018]: I0217 15:03:54.624995 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-dz667"
Feb 17 15:03:54.625186 master-0 kubenswrapper[8018]: I0217 15:03:54.625035 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-c8lzf"
Feb 17 15:03:54.646026 master-0 kubenswrapper[8018]: I0217 15:03:54.645993 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-catalog-content\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:03:54.646134 master-0 kubenswrapper[8018]: I0217 15:03:54.646048 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc216ba1-144a-4cc8-93db-85ab558a166a-catalog-content\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:03:54.646134 master-0 kubenswrapper[8018]: I0217 15:03:54.646084 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc216ba1-144a-4cc8-93db-85ab558a166a-utilities\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:03:54.646224 master-0 kubenswrapper[8018]: I0217 15:03:54.646128 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gwpz\" (UniqueName: \"kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:03:54.646224 master-0 kubenswrapper[8018]: I0217 15:03:54.646159 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-utilities\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:03:54.646311 master-0 kubenswrapper[8018]: I0217 15:03:54.646223 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr2dv\" (UniqueName: \"kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:03:54.646584 master-0 kubenswrapper[8018]: I0217 15:03:54.646553 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 17 15:03:54.669403 master-0 kubenswrapper[8018]: I0217 15:03:54.669339 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2lg56"]
Feb 17 15:03:54.669403 master-0 kubenswrapper[8018]: I0217 15:03:54.669394 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t8vtc"]
Feb 17 15:03:54.744802 master-0 kubenswrapper[8018]: I0217 15:03:54.744367 8018 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"]
Feb 17 15:03:54.745119 master-0 kubenswrapper[8018]: I0217 15:03:54.745069 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl" containerID="cri-o://4d0630e2330edb92a7d17fc9b9a41a0b13733df95ae437b7fe0b5957cb60ed7a" gracePeriod=30
Feb 17 15:03:54.745522 master-0 kubenswrapper[8018]: I0217 15:03:54.745419 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd" containerID="cri-o://8105fa4b966940334c286ed94a1f0129c72a04a09b1bf683900cc1744fb06fec" gracePeriod=30
Feb 17 15:03:54.746339 master-0 kubenswrapper[8018]: I0217 15:03:54.746303 8018 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Feb 17 15:03:54.746580 master-0 kubenswrapper[8018]: E0217 15:03:54.746556 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd"
Feb 17 15:03:54.746580 master-0 kubenswrapper[8018]: I0217 15:03:54.746577 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd"
Feb 17 15:03:54.746662 master-0 kubenswrapper[8018]: E0217 15:03:54.746591 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl"
Feb 17 15:03:54.746662 master-0 kubenswrapper[8018]: I0217 15:03:54.746600 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl"
Feb 17 15:03:54.746734 master-0 kubenswrapper[8018]: I0217 15:03:54.746713 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl"
Feb 17 15:03:54.746780 master-0 kubenswrapper[8018]: I0217 15:03:54.746733 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd"
Feb 17 15:03:54.747709 master-0 kubenswrapper[8018]: I0217 15:03:54.747672 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-catalog-content\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:03:54.747797 master-0 kubenswrapper[8018]: I0217 15:03:54.747764 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-catalog-content\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:03:54.747948 master-0 kubenswrapper[8018]: I0217 15:03:54.747908 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc216ba1-144a-4cc8-93db-85ab558a166a-catalog-content\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:03:54.747948 master-0 kubenswrapper[8018]: I0217 15:03:54.747943 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc216ba1-144a-4cc8-93db-85ab558a166a-utilities\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:03:54.748445 master-0 kubenswrapper[8018]: I0217 15:03:54.748399 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 17 15:03:54.748445 master-0 kubenswrapper[8018]: I0217 15:03:54.748413 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc216ba1-144a-4cc8-93db-85ab558a166a-utilities\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:03:54.748542 master-0 kubenswrapper[8018]: I0217 15:03:54.748500 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwpz\" (UniqueName: \"kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:03:54.748577 master-0 kubenswrapper[8018]: I0217 15:03:54.748545 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-utilities\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:03:54.748606 master-0 kubenswrapper[8018]: I0217 15:03:54.748586 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2dv\" (UniqueName: \"kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:03:54.748925 master-0 kubenswrapper[8018]: I0217 15:03:54.748852 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-utilities\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:03:54.749012 master-0 kubenswrapper[8018]: I0217 15:03:54.748974 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc216ba1-144a-4cc8-93db-85ab558a166a-catalog-content\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:03:54.849736 master-0 kubenswrapper[8018]: I0217 15:03:54.849655 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:03:54.849736 master-0 kubenswrapper[8018]: I0217 15:03:54.849735 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:03:54.850226 master-0 kubenswrapper[8018]: I0217 15:03:54.849763 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:03:54.850226 master-0 kubenswrapper[8018]: I0217 15:03:54.849801 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:03:54.850226 master-0 kubenswrapper[8018]: I0217 15:03:54.849827 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:03:54.850226 master-0 kubenswrapper[8018]: I0217 15:03:54.850189 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:03:54.951853 master-0 kubenswrapper[8018]: I0217 15:03:54.951733 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:03:54.951853 master-0 kubenswrapper[8018]: I0217 15:03:54.951808 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:03:54.951853 master-0 kubenswrapper[8018]: I0217 15:03:54.951836 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:03:54.952082 master-0 kubenswrapper[8018]: I0217 15:03:54.951876 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:03:54.952082 master-0 kubenswrapper[8018]: I0217 15:03:54.951913 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:03:54.952082 master-0 kubenswrapper[8018]: I0217 15:03:54.951937 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:03:54.952082 master-0 kubenswrapper[8018]: I0217 15:03:54.952002 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:03:54.952082 master-0 kubenswrapper[8018]: I0217 15:03:54.952041 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:03:54.952082 master-0 kubenswrapper[8018]: I0217 15:03:54.952064 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:03:54.952082 master-0 kubenswrapper[8018]: I0217 15:03:54.952082 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:03:54.952331 master-0 kubenswrapper[8018]: I0217 15:03:54.952102 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:03:54.952331 master-0 kubenswrapper[8018]: I0217 15:03:54.952124 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:03:55.279691 master-0 kubenswrapper[8018]: I0217 15:03:55.279526 8018 generic.go:334] "Generic (PLEG): container finished" podID="4be2df82-c77a-4d26-9498-fa3beea54b81" containerID="53695733f72721a1db3f525ebfe99427ae62ce35e93969fd9d5d4881069cc71d" exitCode=0 Feb 17 15:03:55.279691 master-0 kubenswrapper[8018]: I0217 15:03:55.279679 8018 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" event={"ID":"4be2df82-c77a-4d26-9498-fa3beea54b81","Type":"ContainerDied","Data":"53695733f72721a1db3f525ebfe99427ae62ce35e93969fd9d5d4881069cc71d"} Feb 17 15:03:55.280988 master-0 kubenswrapper[8018]: I0217 15:03:55.280949 8018 generic.go:334] "Generic (PLEG): container finished" podID="5de71cc1-08c3-4295-ac86-745c9d4fbb46" containerID="107e3fd578a275c186183eec1ef31542c82377b88843f3c540b45cab25720060" exitCode=0 Feb 17 15:03:55.280988 master-0 kubenswrapper[8018]: I0217 15:03:55.280985 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"5de71cc1-08c3-4295-ac86-745c9d4fbb46","Type":"ContainerDied","Data":"107e3fd578a275c186183eec1ef31542c82377b88843f3c540b45cab25720060"} Feb 17 15:03:56.865005 master-0 kubenswrapper[8018]: I0217 15:03:56.864927 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wxhtx" Feb 17 15:04:07.803562 master-0 kubenswrapper[8018]: E0217 15:04:07.803416 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 17 15:04:07.804138 master-0 kubenswrapper[8018]: I0217 15:04:07.804098 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 17 15:04:09.368842 master-0 kubenswrapper[8018]: I0217 15:04:09.368749 8018 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb" exitCode=1 Feb 17 15:04:09.370064 master-0 kubenswrapper[8018]: I0217 15:04:09.368829 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb"} Feb 17 15:04:09.370064 master-0 kubenswrapper[8018]: I0217 15:04:09.369758 8018 scope.go:117] "RemoveContainer" containerID="65c55fab648b7cfa009d957ded77827dafa84ec5b9a039dcd2a3ab2e04462ef9" Feb 17 15:04:09.370380 master-0 kubenswrapper[8018]: I0217 15:04:09.370337 8018 scope.go:117] "RemoveContainer" containerID="ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb" Feb 17 15:04:09.372831 master-0 kubenswrapper[8018]: I0217 15:04:09.372775 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_9f31fcfe-33ed-4e31-a12c-cb344093dcf4/installer/0.log" Feb 17 15:04:09.372953 master-0 kubenswrapper[8018]: I0217 15:04:09.372844 8018 generic.go:334] "Generic (PLEG): container finished" podID="9f31fcfe-33ed-4e31-a12c-cb344093dcf4" containerID="ebb84869ff87ab53933f534e8072352d2827c34650aa88de3ed7f3c6446e7b63" exitCode=1 Feb 17 15:04:09.372953 master-0 kubenswrapper[8018]: I0217 15:04:09.372879 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"9f31fcfe-33ed-4e31-a12c-cb344093dcf4","Type":"ContainerDied","Data":"ebb84869ff87ab53933f534e8072352d2827c34650aa88de3ed7f3c6446e7b63"} Feb 17 15:04:09.414265 master-0 kubenswrapper[8018]: E0217 15:04:09.414201 8018 controller.go:195] 
"Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:04:10.381523 master-0 kubenswrapper[8018]: I0217 15:04:10.381382 8018 generic.go:334] "Generic (PLEG): container finished" podID="9460ca0802075a8a6a10d7b3e6052c4d" containerID="4944adde3c461c436bd108e43bf28aecebbade517fd0bca757eeee8a5f2db7dc" exitCode=1 Feb 17 15:04:10.381523 master-0 kubenswrapper[8018]: I0217 15:04:10.381505 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerDied","Data":"4944adde3c461c436bd108e43bf28aecebbade517fd0bca757eeee8a5f2db7dc"} Feb 17 15:04:10.382320 master-0 kubenswrapper[8018]: I0217 15:04:10.382192 8018 scope.go:117] "RemoveContainer" containerID="4944adde3c461c436bd108e43bf28aecebbade517fd0bca757eeee8a5f2db7dc" Feb 17 15:04:10.504711 master-0 kubenswrapper[8018]: I0217 15:04:10.504254 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:04:11.133534 master-0 kubenswrapper[8018]: I0217 15:04:11.133335 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:04:12.652276 master-0 kubenswrapper[8018]: I0217 15:04:12.652172 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:04:12.727227 master-0 kubenswrapper[8018]: I0217 15:04:12.727172 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:04:12.884251 master-0 kubenswrapper[8018]: I0217 15:04:12.884160 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-ssl-certs\") pod \"4be2df82-c77a-4d26-9498-fa3beea54b81\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " Feb 17 15:04:12.884574 master-0 kubenswrapper[8018]: I0217 15:04:12.884299 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "4be2df82-c77a-4d26-9498-fa3beea54b81" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:04:12.884574 master-0 kubenswrapper[8018]: I0217 15:04:12.884312 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") pod \"4be2df82-c77a-4d26-9498-fa3beea54b81\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " Feb 17 15:04:12.884574 master-0 kubenswrapper[8018]: I0217 15:04:12.884395 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-cvo-updatepayloads\") pod \"4be2df82-c77a-4d26-9498-fa3beea54b81\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " Feb 17 15:04:12.884780 master-0 kubenswrapper[8018]: I0217 15:04:12.884574 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4be2df82-c77a-4d26-9498-fa3beea54b81-service-ca\") pod \"4be2df82-c77a-4d26-9498-fa3beea54b81\" (UID: 
\"4be2df82-c77a-4d26-9498-fa3beea54b81\") " Feb 17 15:04:12.884780 master-0 kubenswrapper[8018]: I0217 15:04:12.884632 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4be2df82-c77a-4d26-9498-fa3beea54b81-kube-api-access\") pod \"4be2df82-c77a-4d26-9498-fa3beea54b81\" (UID: \"4be2df82-c77a-4d26-9498-fa3beea54b81\") " Feb 17 15:04:12.884780 master-0 kubenswrapper[8018]: I0217 15:04:12.884628 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "4be2df82-c77a-4d26-9498-fa3beea54b81" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:04:12.885146 master-0 kubenswrapper[8018]: I0217 15:04:12.885083 8018 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:04:12.885146 master-0 kubenswrapper[8018]: I0217 15:04:12.885141 8018 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4be2df82-c77a-4d26-9498-fa3beea54b81-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Feb 17 15:04:12.885505 master-0 kubenswrapper[8018]: I0217 15:04:12.885409 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4be2df82-c77a-4d26-9498-fa3beea54b81-service-ca" (OuterVolumeSpecName: "service-ca") pod "4be2df82-c77a-4d26-9498-fa3beea54b81" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:04:12.888662 master-0 kubenswrapper[8018]: I0217 15:04:12.888585 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be2df82-c77a-4d26-9498-fa3beea54b81-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4be2df82-c77a-4d26-9498-fa3beea54b81" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:04:12.889953 master-0 kubenswrapper[8018]: I0217 15:04:12.889892 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4be2df82-c77a-4d26-9498-fa3beea54b81" (UID: "4be2df82-c77a-4d26-9498-fa3beea54b81"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:04:12.986241 master-0 kubenswrapper[8018]: I0217 15:04:12.986093 8018 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4be2df82-c77a-4d26-9498-fa3beea54b81-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 17 15:04:12.986241 master-0 kubenswrapper[8018]: I0217 15:04:12.986144 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4be2df82-c77a-4d26-9498-fa3beea54b81-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 17 15:04:12.986241 master-0 kubenswrapper[8018]: I0217 15:04:12.986161 8018 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be2df82-c77a-4d26-9498-fa3beea54b81-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 17 15:04:13.032568 master-0 kubenswrapper[8018]: I0217 15:04:13.032478 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:04:13.403695 master-0 kubenswrapper[8018]: I0217 15:04:13.403609 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" event={"ID":"4be2df82-c77a-4d26-9498-fa3beea54b81","Type":"ContainerDied","Data":"7353f5bcae82d0fc43f2cb4200ebc6c45650c202a8783735da86e6a55c164a80"} Feb 17 15:04:13.404223 master-0 kubenswrapper[8018]: I0217 15:04:13.403740 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-v49tq" Feb 17 15:04:18.438617 master-0 kubenswrapper[8018]: I0217 15:04:18.438509 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/0.log" Feb 17 15:04:18.438617 master-0 kubenswrapper[8018]: I0217 15:04:18.438574 8018 generic.go:334] "Generic (PLEG): container finished" podID="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" containerID="66dd210cb26e47fd54a1792f8f197ef08337df2f55d0c4058d8d526e9bd894c8" exitCode=1 Feb 17 15:04:18.438617 master-0 kubenswrapper[8018]: I0217 15:04:18.438618 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" event={"ID":"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda","Type":"ContainerDied","Data":"66dd210cb26e47fd54a1792f8f197ef08337df2f55d0c4058d8d526e9bd894c8"} Feb 17 15:04:18.439616 master-0 kubenswrapper[8018]: I0217 15:04:18.439205 8018 scope.go:117] "RemoveContainer" containerID="66dd210cb26e47fd54a1792f8f197ef08337df2f55d0c4058d8d526e9bd894c8" Feb 17 15:04:18.943217 master-0 kubenswrapper[8018]: I0217 15:04:18.943134 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 17 15:04:19.081082 master-0 kubenswrapper[8018]: I0217 15:04:19.081003 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5de71cc1-08c3-4295-ac86-745c9d4fbb46-kubelet-dir\") pod \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\" (UID: \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\") " Feb 17 15:04:19.081290 master-0 kubenswrapper[8018]: I0217 15:04:19.081125 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5de71cc1-08c3-4295-ac86-745c9d4fbb46-kube-api-access\") pod \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\" (UID: \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\") " Feb 17 15:04:19.081290 master-0 kubenswrapper[8018]: I0217 15:04:19.081190 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5de71cc1-08c3-4295-ac86-745c9d4fbb46-var-lock\") pod \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\" (UID: \"5de71cc1-08c3-4295-ac86-745c9d4fbb46\") " Feb 17 15:04:19.081290 master-0 kubenswrapper[8018]: I0217 15:04:19.081216 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de71cc1-08c3-4295-ac86-745c9d4fbb46-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5de71cc1-08c3-4295-ac86-745c9d4fbb46" (UID: "5de71cc1-08c3-4295-ac86-745c9d4fbb46"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:04:19.081511 master-0 kubenswrapper[8018]: I0217 15:04:19.081395 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de71cc1-08c3-4295-ac86-745c9d4fbb46-var-lock" (OuterVolumeSpecName: "var-lock") pod "5de71cc1-08c3-4295-ac86-745c9d4fbb46" (UID: "5de71cc1-08c3-4295-ac86-745c9d4fbb46"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:04:19.081659 master-0 kubenswrapper[8018]: I0217 15:04:19.081615 8018 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5de71cc1-08c3-4295-ac86-745c9d4fbb46-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:04:19.081659 master-0 kubenswrapper[8018]: I0217 15:04:19.081651 8018 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5de71cc1-08c3-4295-ac86-745c9d4fbb46-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:04:19.085839 master-0 kubenswrapper[8018]: I0217 15:04:19.085779 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de71cc1-08c3-4295-ac86-745c9d4fbb46-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5de71cc1-08c3-4295-ac86-745c9d4fbb46" (UID: "5de71cc1-08c3-4295-ac86-745c9d4fbb46"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:04:19.182718 master-0 kubenswrapper[8018]: I0217 15:04:19.182552 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5de71cc1-08c3-4295-ac86-745c9d4fbb46-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 17 15:04:19.414654 master-0 kubenswrapper[8018]: E0217 15:04:19.414555 8018 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:04:19.447903 master-0 kubenswrapper[8018]: I0217 15:04:19.447741 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 17 15:04:19.453578 master-0 kubenswrapper[8018]: I0217 15:04:19.453496 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"5de71cc1-08c3-4295-ac86-745c9d4fbb46","Type":"ContainerDied","Data":"0b31871b8085707dfa74452a2934f0c0323ff06325d382d8b3f5e4dc6e4076e7"} Feb 17 15:04:19.453709 master-0 kubenswrapper[8018]: I0217 15:04:19.453592 8018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b31871b8085707dfa74452a2934f0c0323ff06325d382d8b3f5e4dc6e4076e7" Feb 17 15:04:21.308882 master-0 kubenswrapper[8018]: I0217 15:04:21.308827 8018 scope.go:117] "RemoveContainer" containerID="53695733f72721a1db3f525ebfe99427ae62ce35e93969fd9d5d4881069cc71d" Feb 17 15:04:21.333787 master-0 kubenswrapper[8018]: W0217 15:04:21.333728 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod401699cb53e7098157e808a83125b0e4.slice/crio-cff1bcb58e476c7626406f50da253d7834cc1bd8b48bce0f6a4957d02e2b8cc9 WatchSource:0}: Error finding container cff1bcb58e476c7626406f50da253d7834cc1bd8b48bce0f6a4957d02e2b8cc9: Status 404 returned error can't find the container with id cff1bcb58e476c7626406f50da253d7834cc1bd8b48bce0f6a4957d02e2b8cc9 Feb 17 15:04:21.455650 master-0 kubenswrapper[8018]: I0217 15:04:21.455588 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_9f31fcfe-33ed-4e31-a12c-cb344093dcf4/installer/0.log" Feb 17 15:04:21.455846 master-0 kubenswrapper[8018]: I0217 15:04:21.455699 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 17 15:04:21.464351 master-0 kubenswrapper[8018]: I0217 15:04:21.464299 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_9f31fcfe-33ed-4e31-a12c-cb344093dcf4/installer/0.log" Feb 17 15:04:21.464523 master-0 kubenswrapper[8018]: I0217 15:04:21.464429 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 17 15:04:21.464523 master-0 kubenswrapper[8018]: I0217 15:04:21.464444 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"9f31fcfe-33ed-4e31-a12c-cb344093dcf4","Type":"ContainerDied","Data":"78cbd9f546830dd615de766b10a67b6a810a97884bc18b2f0df8903e6fb6fdc5"} Feb 17 15:04:21.466795 master-0 kubenswrapper[8018]: I0217 15:04:21.466754 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"cff1bcb58e476c7626406f50da253d7834cc1bd8b48bce0f6a4957d02e2b8cc9"} Feb 17 15:04:21.519572 master-0 kubenswrapper[8018]: I0217 15:04:21.519522 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-kubelet-dir\") pod \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\" (UID: \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\") " Feb 17 15:04:21.524694 master-0 kubenswrapper[8018]: I0217 15:04:21.520292 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-var-lock\") pod \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\" (UID: \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\") " Feb 17 15:04:21.524694 master-0 kubenswrapper[8018]: I0217 15:04:21.522376 8018 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-kube-api-access\") pod \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\" (UID: \"9f31fcfe-33ed-4e31-a12c-cb344093dcf4\") " Feb 17 15:04:21.524694 master-0 kubenswrapper[8018]: I0217 15:04:21.521848 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9f31fcfe-33ed-4e31-a12c-cb344093dcf4" (UID: "9f31fcfe-33ed-4e31-a12c-cb344093dcf4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:04:21.524694 master-0 kubenswrapper[8018]: I0217 15:04:21.521905 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-var-lock" (OuterVolumeSpecName: "var-lock") pod "9f31fcfe-33ed-4e31-a12c-cb344093dcf4" (UID: "9f31fcfe-33ed-4e31-a12c-cb344093dcf4"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:04:21.524694 master-0 kubenswrapper[8018]: I0217 15:04:21.522642 8018 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:04:21.524694 master-0 kubenswrapper[8018]: I0217 15:04:21.522656 8018 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:04:21.525771 master-0 kubenswrapper[8018]: I0217 15:04:21.525723 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9f31fcfe-33ed-4e31-a12c-cb344093dcf4" (UID: "9f31fcfe-33ed-4e31-a12c-cb344093dcf4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:04:21.590485 master-0 kubenswrapper[8018]: I0217 15:04:21.590383 8018 scope.go:117] "RemoveContainer" containerID="ebb84869ff87ab53933f534e8072352d2827c34650aa88de3ed7f3c6446e7b63"
Feb 17 15:04:21.623956 master-0 kubenswrapper[8018]: I0217 15:04:21.623896 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f31fcfe-33ed-4e31-a12c-cb344093dcf4-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 17 15:04:23.485119 master-0 kubenswrapper[8018]: I0217 15:04:23.485047 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"2a42298516500c9bfa084c410231d2a27dee7fceed15779f0b27fd9d1349b2b0"}
Feb 17 15:04:23.487550 master-0 kubenswrapper[8018]: I0217 15:04:23.487500 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521"}
Feb 17 15:04:23.490239 master-0 kubenswrapper[8018]: I0217 15:04:23.490198 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-662mc" event={"ID":"6cee363d-411b-42ae-8f9f-cfaac068d992","Type":"ContainerStarted","Data":"38f57aee6f8a2095377f9a1b395a88138aca4c68c9ec5b9ab5946f3684eb735f"}
Feb 17 15:04:23.492550 master-0 kubenswrapper[8018]: I0217 15:04:23.492436 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"d66ebdf4bf1f41618550520db8e8e13eb193e9411ec23799b8b482aae939538d"}
Feb 17 15:04:23.494494 master-0 kubenswrapper[8018]: I0217 15:04:23.494426 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerStarted","Data":"d5738e21e97a228370369f51d6b435b8805640e7757385cb234f1ddd01723651"}
Feb 17 15:04:23.497283 master-0 kubenswrapper[8018]: I0217 15:04:23.497243 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/0.log"
Feb 17 15:04:23.497427 master-0 kubenswrapper[8018]: I0217 15:04:23.497314 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" event={"ID":"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda","Type":"ContainerStarted","Data":"304679e66f000484b85f89bc09bd351cba1f664073d85860e51117843af4fd58"}
Feb 17 15:04:24.506695 master-0 kubenswrapper[8018]: I0217 15:04:24.506618 8018 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="d66ebdf4bf1f41618550520db8e8e13eb193e9411ec23799b8b482aae939538d" exitCode=0
Feb 17 15:04:24.507615 master-0 kubenswrapper[8018]: I0217 15:04:24.506767 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerDied","Data":"d66ebdf4bf1f41618550520db8e8e13eb193e9411ec23799b8b482aae939538d"}
Feb 17 15:04:24.510021 master-0 kubenswrapper[8018]: I0217 15:04:24.509958 8018 generic.go:334] "Generic (PLEG): container finished" podID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerID="8105fa4b966940334c286ed94a1f0129c72a04a09b1bf683900cc1744fb06fec" exitCode=0
Feb 17 15:04:24.518152 master-0 kubenswrapper[8018]: I0217 15:04:24.518088 8018 generic.go:334] "Generic (PLEG): container finished" podID="2ac9a5d3-569e-4434-839e-691eacbe13df" containerID="cd6bbd0ec3b9fb226773bb0d8576d75c0a13a8da287e310034b230507b5f7653" exitCode=0
Feb 17 15:04:24.518282 master-0 kubenswrapper[8018]: I0217 15:04:24.518177 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7x72v" event={"ID":"2ac9a5d3-569e-4434-839e-691eacbe13df","Type":"ContainerDied","Data":"cd6bbd0ec3b9fb226773bb0d8576d75c0a13a8da287e310034b230507b5f7653"}
Feb 17 15:04:24.520917 master-0 kubenswrapper[8018]: I0217 15:04:24.520870 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_580b240a-a806-454d-ab19-8f193a8d9ca2/installer/0.log"
Feb 17 15:04:24.521014 master-0 kubenswrapper[8018]: I0217 15:04:24.520961 8018 generic.go:334] "Generic (PLEG): container finished" podID="580b240a-a806-454d-ab19-8f193a8d9ca2" containerID="dcdeeb6985f895a6d59b345be94e95ea3c9c558f1f7b7901594a31fa91429102" exitCode=1
Feb 17 15:04:24.521085 master-0 kubenswrapper[8018]: I0217 15:04:24.521008 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"580b240a-a806-454d-ab19-8f193a8d9ca2","Type":"ContainerDied","Data":"dcdeeb6985f895a6d59b345be94e95ea3c9c558f1f7b7901594a31fa91429102"}
Feb 17 15:04:24.524212 master-0 kubenswrapper[8018]: I0217 15:04:24.524129 8018 generic.go:334] "Generic (PLEG): container finished" podID="e2994de0-1535-423a-90ce-019043cd4b9d" containerID="10ec8802ea23c3bcc50abbeb018409267bdbe7623d5d55b117ab06c938fbf897" exitCode=0
Feb 17 15:04:24.524402 master-0 kubenswrapper[8018]: I0217 15:04:24.524225 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sft6r" event={"ID":"e2994de0-1535-423a-90ce-019043cd4b9d","Type":"ContainerDied","Data":"10ec8802ea23c3bcc50abbeb018409267bdbe7623d5d55b117ab06c938fbf897"}
Feb 17 15:04:24.527401 master-0 kubenswrapper[8018]: I0217 15:04:24.527176 8018 generic.go:334] "Generic (PLEG): container finished" podID="6cee363d-411b-42ae-8f9f-cfaac068d992" containerID="38f57aee6f8a2095377f9a1b395a88138aca4c68c9ec5b9ab5946f3684eb735f" exitCode=0
Feb 17 15:04:24.527401 master-0 kubenswrapper[8018]: I0217 15:04:24.527268 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-662mc" event={"ID":"6cee363d-411b-42ae-8f9f-cfaac068d992","Type":"ContainerDied","Data":"38f57aee6f8a2095377f9a1b395a88138aca4c68c9ec5b9ab5946f3684eb735f"}
Feb 17 15:04:24.530870 master-0 kubenswrapper[8018]: I0217 15:04:24.529943 8018 generic.go:334] "Generic (PLEG): container finished" podID="fa4b45c7-fcd1-483b-97ae-df90a7c06f11" containerID="b5c2e6d14a4a982cd0eb6d59e0401ddb141b046ed17a425be654ccff6ae371f0" exitCode=0
Feb 17 15:04:24.530870 master-0 kubenswrapper[8018]: I0217 15:04:24.530066 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqt6f" event={"ID":"fa4b45c7-fcd1-483b-97ae-df90a7c06f11","Type":"ContainerDied","Data":"b5c2e6d14a4a982cd0eb6d59e0401ddb141b046ed17a425be654ccff6ae371f0"}
Feb 17 15:04:24.530870 master-0 kubenswrapper[8018]: I0217 15:04:24.530841 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:04:25.001297 master-0 kubenswrapper[8018]: I0217 15:04:25.001245 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xqt6f"
Feb 17 15:04:25.006794 master-0 kubenswrapper[8018]: I0217 15:04:25.006750 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-662mc"
Feb 17 15:04:25.170811 master-0 kubenswrapper[8018]: I0217 15:04:25.170714 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwbkk\" (UniqueName: \"kubernetes.io/projected/6cee363d-411b-42ae-8f9f-cfaac068d992-kube-api-access-gwbkk\") pod \"6cee363d-411b-42ae-8f9f-cfaac068d992\" (UID: \"6cee363d-411b-42ae-8f9f-cfaac068d992\") "
Feb 17 15:04:25.170811 master-0 kubenswrapper[8018]: I0217 15:04:25.170808 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cee363d-411b-42ae-8f9f-cfaac068d992-utilities\") pod \"6cee363d-411b-42ae-8f9f-cfaac068d992\" (UID: \"6cee363d-411b-42ae-8f9f-cfaac068d992\") "
Feb 17 15:04:25.171265 master-0 kubenswrapper[8018]: I0217 15:04:25.170847 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cee363d-411b-42ae-8f9f-cfaac068d992-catalog-content\") pod \"6cee363d-411b-42ae-8f9f-cfaac068d992\" (UID: \"6cee363d-411b-42ae-8f9f-cfaac068d992\") "
Feb 17 15:04:25.171265 master-0 kubenswrapper[8018]: I0217 15:04:25.170904 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-utilities\") pod \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\" (UID: \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\") "
Feb 17 15:04:25.171265 master-0 kubenswrapper[8018]: I0217 15:04:25.170946 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-catalog-content\") pod \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\" (UID: \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\") "
Feb 17 15:04:25.171367 master-0 kubenswrapper[8018]: I0217 15:04:25.171267 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwk82\" (UniqueName: \"kubernetes.io/projected/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-kube-api-access-qwk82\") pod \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\" (UID: \"fa4b45c7-fcd1-483b-97ae-df90a7c06f11\") "
Feb 17 15:04:25.172739 master-0 kubenswrapper[8018]: I0217 15:04:25.172681 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-utilities" (OuterVolumeSpecName: "utilities") pod "fa4b45c7-fcd1-483b-97ae-df90a7c06f11" (UID: "fa4b45c7-fcd1-483b-97ae-df90a7c06f11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:04:25.173013 master-0 kubenswrapper[8018]: I0217 15:04:25.172910 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cee363d-411b-42ae-8f9f-cfaac068d992-utilities" (OuterVolumeSpecName: "utilities") pod "6cee363d-411b-42ae-8f9f-cfaac068d992" (UID: "6cee363d-411b-42ae-8f9f-cfaac068d992"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:04:25.177829 master-0 kubenswrapper[8018]: I0217 15:04:25.177710 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cee363d-411b-42ae-8f9f-cfaac068d992-kube-api-access-gwbkk" (OuterVolumeSpecName: "kube-api-access-gwbkk") pod "6cee363d-411b-42ae-8f9f-cfaac068d992" (UID: "6cee363d-411b-42ae-8f9f-cfaac068d992"). InnerVolumeSpecName "kube-api-access-gwbkk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:04:25.177829 master-0 kubenswrapper[8018]: I0217 15:04:25.177784 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-kube-api-access-qwk82" (OuterVolumeSpecName: "kube-api-access-qwk82") pod "fa4b45c7-fcd1-483b-97ae-df90a7c06f11" (UID: "fa4b45c7-fcd1-483b-97ae-df90a7c06f11"). InnerVolumeSpecName "kube-api-access-qwk82". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:04:25.251120 master-0 kubenswrapper[8018]: I0217 15:04:25.251008 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cee363d-411b-42ae-8f9f-cfaac068d992-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cee363d-411b-42ae-8f9f-cfaac068d992" (UID: "6cee363d-411b-42ae-8f9f-cfaac068d992"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:04:25.272804 master-0 kubenswrapper[8018]: I0217 15:04:25.272708 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwbkk\" (UniqueName: \"kubernetes.io/projected/6cee363d-411b-42ae-8f9f-cfaac068d992-kube-api-access-gwbkk\") on node \"master-0\" DevicePath \"\""
Feb 17 15:04:25.272804 master-0 kubenswrapper[8018]: I0217 15:04:25.272748 8018 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cee363d-411b-42ae-8f9f-cfaac068d992-utilities\") on node \"master-0\" DevicePath \"\""
Feb 17 15:04:25.272804 master-0 kubenswrapper[8018]: I0217 15:04:25.272760 8018 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cee363d-411b-42ae-8f9f-cfaac068d992-catalog-content\") on node \"master-0\" DevicePath \"\""
Feb 17 15:04:25.272804 master-0 kubenswrapper[8018]: I0217 15:04:25.272770 8018 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-utilities\") on node \"master-0\" DevicePath \"\""
Feb 17 15:04:25.272804 master-0 kubenswrapper[8018]: I0217 15:04:25.272778 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwk82\" (UniqueName: \"kubernetes.io/projected/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-kube-api-access-qwk82\") on node \"master-0\" DevicePath \"\""
Feb 17 15:04:25.285101 master-0 kubenswrapper[8018]: I0217 15:04:25.284992 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa4b45c7-fcd1-483b-97ae-df90a7c06f11" (UID: "fa4b45c7-fcd1-483b-97ae-df90a7c06f11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:04:25.374031 master-0 kubenswrapper[8018]: I0217 15:04:25.373933 8018 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa4b45c7-fcd1-483b-97ae-df90a7c06f11-catalog-content\") on node \"master-0\" DevicePath \"\""
Feb 17 15:04:25.530635 master-0 kubenswrapper[8018]: I0217 15:04:25.530520 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:04:25.530635 master-0 kubenswrapper[8018]: I0217 15:04:25.530609 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:04:25.537392 master-0 kubenswrapper[8018]: I0217 15:04:25.537327 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_400a178a4d5e9a88ba5bbbd1da2ad15e/etcdctl/0.log"
Feb 17 15:04:25.537573 master-0 kubenswrapper[8018]: I0217 15:04:25.537420 8018 generic.go:334] "Generic (PLEG): container finished" podID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerID="4d0630e2330edb92a7d17fc9b9a41a0b13733df95ae437b7fe0b5957cb60ed7a" exitCode=137
Feb 17 15:04:25.540200 master-0 kubenswrapper[8018]: I0217 15:04:25.540118 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-662mc"
Feb 17 15:04:25.540372 master-0 kubenswrapper[8018]: I0217 15:04:25.540209 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-662mc" event={"ID":"6cee363d-411b-42ae-8f9f-cfaac068d992","Type":"ContainerDied","Data":"4d2b16ff594ab4bf07b15d7bdb6d613459bd6402bd17141af1161c76a52e5907"}
Feb 17 15:04:25.540372 master-0 kubenswrapper[8018]: I0217 15:04:25.540337 8018 scope.go:117] "RemoveContainer" containerID="38f57aee6f8a2095377f9a1b395a88138aca4c68c9ec5b9ab5946f3684eb735f"
Feb 17 15:04:25.543051 master-0 kubenswrapper[8018]: I0217 15:04:25.542960 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqt6f" event={"ID":"fa4b45c7-fcd1-483b-97ae-df90a7c06f11","Type":"ContainerDied","Data":"d245dd9e77696551e86dbe4d5f0bbdca0c48334efedc1d3bb182430d7757086e"}
Feb 17 15:04:25.543200 master-0 kubenswrapper[8018]: I0217 15:04:25.543078 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xqt6f"
Feb 17 15:04:25.561008 master-0 kubenswrapper[8018]: I0217 15:04:25.560954 8018 scope.go:117] "RemoveContainer" containerID="5d9b6a180c58e9f4d3551ff59a04c354a85779518bac69727c371d488333fa01"
Feb 17 15:04:25.579547 master-0 kubenswrapper[8018]: I0217 15:04:25.579438 8018 scope.go:117] "RemoveContainer" containerID="b5c2e6d14a4a982cd0eb6d59e0401ddb141b046ed17a425be654ccff6ae371f0"
Feb 17 15:04:25.607774 master-0 kubenswrapper[8018]: I0217 15:04:25.606336 8018 scope.go:117] "RemoveContainer" containerID="ddd23c1c0a55e91ca0a9f81dbad6adfbdddc033a3e7f4cb986cfedd2d53a44cf"
Feb 17 15:04:25.933882 master-0 kubenswrapper[8018]: I0217 15:04:25.933792 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_580b240a-a806-454d-ab19-8f193a8d9ca2/installer/0.log"
Feb 17 15:04:25.933882 master-0 kubenswrapper[8018]: I0217 15:04:25.933873 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 17 15:04:26.081586 master-0 kubenswrapper[8018]: I0217 15:04:26.081333 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/580b240a-a806-454d-ab19-8f193a8d9ca2-var-lock\") pod \"580b240a-a806-454d-ab19-8f193a8d9ca2\" (UID: \"580b240a-a806-454d-ab19-8f193a8d9ca2\") "
Feb 17 15:04:26.081586 master-0 kubenswrapper[8018]: I0217 15:04:26.081510 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/580b240a-a806-454d-ab19-8f193a8d9ca2-kube-api-access\") pod \"580b240a-a806-454d-ab19-8f193a8d9ca2\" (UID: \"580b240a-a806-454d-ab19-8f193a8d9ca2\") "
Feb 17 15:04:26.081931 master-0 kubenswrapper[8018]: I0217 15:04:26.081605 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/580b240a-a806-454d-ab19-8f193a8d9ca2-kubelet-dir\") pod \"580b240a-a806-454d-ab19-8f193a8d9ca2\" (UID: \"580b240a-a806-454d-ab19-8f193a8d9ca2\") "
Feb 17 15:04:26.081931 master-0 kubenswrapper[8018]: I0217 15:04:26.081660 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/580b240a-a806-454d-ab19-8f193a8d9ca2-var-lock" (OuterVolumeSpecName: "var-lock") pod "580b240a-a806-454d-ab19-8f193a8d9ca2" (UID: "580b240a-a806-454d-ab19-8f193a8d9ca2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:04:26.081931 master-0 kubenswrapper[8018]: I0217 15:04:26.081872 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/580b240a-a806-454d-ab19-8f193a8d9ca2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "580b240a-a806-454d-ab19-8f193a8d9ca2" (UID: "580b240a-a806-454d-ab19-8f193a8d9ca2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:04:26.082032 master-0 kubenswrapper[8018]: I0217 15:04:26.081994 8018 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/580b240a-a806-454d-ab19-8f193a8d9ca2-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 17 15:04:26.082032 master-0 kubenswrapper[8018]: I0217 15:04:26.082014 8018 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/580b240a-a806-454d-ab19-8f193a8d9ca2-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:04:26.086185 master-0 kubenswrapper[8018]: I0217 15:04:26.086130 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580b240a-a806-454d-ab19-8f193a8d9ca2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "580b240a-a806-454d-ab19-8f193a8d9ca2" (UID: "580b240a-a806-454d-ab19-8f193a8d9ca2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:04:26.188106 master-0 kubenswrapper[8018]: I0217 15:04:26.184520 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/580b240a-a806-454d-ab19-8f193a8d9ca2-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 17 15:04:26.543945 master-0 kubenswrapper[8018]: I0217 15:04:26.543861 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:04:26.544986 master-0 kubenswrapper[8018]: I0217 15:04:26.543977 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:04:26.555279 master-0 kubenswrapper[8018]: I0217 15:04:26.555233 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_580b240a-a806-454d-ab19-8f193a8d9ca2/installer/0.log"
Feb 17 15:04:26.555434 master-0 kubenswrapper[8018]: I0217 15:04:26.555356 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"580b240a-a806-454d-ab19-8f193a8d9ca2","Type":"ContainerDied","Data":"cc106479f8ba2301c0905fc79952057832731752fc004c203824ce711aec45fb"}
Feb 17 15:04:26.555434 master-0 kubenswrapper[8018]: I0217 15:04:26.555389 8018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc106479f8ba2301c0905fc79952057832731752fc004c203824ce711aec45fb"
Feb 17 15:04:26.555434 master-0 kubenswrapper[8018]: I0217 15:04:26.555403 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 17 15:04:26.627929 master-0 kubenswrapper[8018]: I0217 15:04:26.627838 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_400a178a4d5e9a88ba5bbbd1da2ad15e/etcdctl/0.log"
Feb 17 15:04:26.628115 master-0 kubenswrapper[8018]: I0217 15:04:26.627974 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:04:26.791522 master-0 kubenswrapper[8018]: I0217 15:04:26.791419 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"400a178a4d5e9a88ba5bbbd1da2ad15e\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") "
Feb 17 15:04:26.791845 master-0 kubenswrapper[8018]: I0217 15:04:26.791542 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"400a178a4d5e9a88ba5bbbd1da2ad15e\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") "
Feb 17 15:04:26.791845 master-0 kubenswrapper[8018]: I0217 15:04:26.791577 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs" (OuterVolumeSpecName: "certs") pod "400a178a4d5e9a88ba5bbbd1da2ad15e" (UID: "400a178a4d5e9a88ba5bbbd1da2ad15e"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:04:26.791845 master-0 kubenswrapper[8018]: I0217 15:04:26.791752 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir" (OuterVolumeSpecName: "data-dir") pod "400a178a4d5e9a88ba5bbbd1da2ad15e" (UID: "400a178a4d5e9a88ba5bbbd1da2ad15e"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:04:26.792126 master-0 kubenswrapper[8018]: I0217 15:04:26.792051 8018 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:04:26.792126 master-0 kubenswrapper[8018]: I0217 15:04:26.792086 8018 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:04:27.447338 master-0 kubenswrapper[8018]: I0217 15:04:27.447276 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" path="/var/lib/kubelet/pods/400a178a4d5e9a88ba5bbbd1da2ad15e/volumes"
Feb 17 15:04:27.447616 master-0 kubenswrapper[8018]: I0217 15:04:27.447602 8018 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Feb 17 15:04:27.565386 master-0 kubenswrapper[8018]: I0217 15:04:27.565286 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_400a178a4d5e9a88ba5bbbd1da2ad15e/etcdctl/0.log"
Feb 17 15:04:27.566518 master-0 kubenswrapper[8018]: I0217 15:04:27.566439 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:04:28.145531 master-0 kubenswrapper[8018]: I0217 15:04:28.145444 8018 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" start-of-body=
Feb 17 15:04:28.145531 master-0 kubenswrapper[8018]: I0217 15:04:28.145520 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused"
Feb 17 15:04:28.750879 master-0 kubenswrapper[8018]: E0217 15:04:28.750827 8018 projected.go:194] Error preparing data for projected volume kube-api-access-zr2dv for pod openshift-marketplace/community-operators-t8vtc: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:04:28.751336 master-0 kubenswrapper[8018]: E0217 15:04:28.750902 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv podName:c33efa80-fbeb-438a-86e3-d22d7c12d3e9 nodeName:}" failed. No retries permitted until 2026-02-17 15:04:29.250880201 +0000 UTC m=+102.003223251 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zr2dv" (UniqueName: "kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv") pod "community-operators-t8vtc" (UID: "c33efa80-fbeb-438a-86e3-d22d7c12d3e9") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:04:28.751336 master-0 kubenswrapper[8018]: E0217 15:04:28.750901 8018 projected.go:194] Error preparing data for projected volume kube-api-access-7gwpz for pod openshift-marketplace/certified-operators-2lg56: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:04:28.751336 master-0 kubenswrapper[8018]: E0217 15:04:28.750997 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz podName:fc216ba1-144a-4cc8-93db-85ab558a166a nodeName:}" failed. No retries permitted until 2026-02-17 15:04:29.250969243 +0000 UTC m=+102.003312303 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7gwpz" (UniqueName: "kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz") pod "certified-operators-2lg56" (UID: "fc216ba1-144a-4cc8-93db-85ab558a166a") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:04:28.772742 master-0 kubenswrapper[8018]: E0217 15:04:28.772583 8018 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189510ec1e313b2c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:03:54.745027372 +0000 UTC m=+67.497370442,LastTimestamp:2026-02-17 15:03:54.745027372 +0000 UTC m=+67.497370442,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:04:29.321992 master-0 kubenswrapper[8018]: I0217 15:04:29.321884 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwpz\" (UniqueName: \"kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:04:29.321992 master-0 kubenswrapper[8018]: I0217 15:04:29.322001 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2dv\" (UniqueName: \"kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:04:29.415880 master-0 kubenswrapper[8018]: E0217 15:04:29.415768 8018 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:04:30.055319 master-0 kubenswrapper[8018]: I0217 15:04:30.055217 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:04:30.055916 master-0 kubenswrapper[8018]: I0217 15:04:30.055599 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:04:34.610868 master-0 kubenswrapper[8018]: I0217 15:04:34.610771 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_03da22e3-956d-4c8a-bfd6-c1778e5d627c/installer/0.log"
Feb 17 15:04:34.610868 master-0 kubenswrapper[8018]: I0217 15:04:34.610826 8018 generic.go:334] "Generic (PLEG): container finished" podID="03da22e3-956d-4c8a-bfd6-c1778e5d627c" containerID="848358e86030aaad08f0f93cbd72a6dd3c9d1bf771c63059da694d462594c54f" exitCode=1
Feb 17 15:04:35.651969 master-0 kubenswrapper[8018]: I0217 15:04:35.651860 8018 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:04:37.519355 master-0 kubenswrapper[8018]: E0217 15:04:37.519296 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Feb 17 15:04:38.155474 master-0 kubenswrapper[8018]: I0217 15:04:38.154745 8018 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" start-of-body=
Feb 17 15:04:38.155474 master-0 kubenswrapper[8018]: I0217 15:04:38.154826 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused"
Feb 17 15:04:38.641484 master-0 kubenswrapper[8018]: I0217 15:04:38.641371 8018 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="af8466a0f113f0fd847f0bfc35cfb14199d76e2d0ce6a9816135658a53c788cd" exitCode=0
Feb 17 15:04:39.416904 master-0 kubenswrapper[8018]: E0217 15:04:39.416782 8018 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:04:39.494429 master-0 kubenswrapper[8018]: I0217 15:04:39.494324 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7x72v" podUID="2ac9a5d3-569e-4434-839e-691eacbe13df" containerName="registry-server" probeResult="failure" output=<
Feb 17 15:04:39.494429 master-0 kubenswrapper[8018]: timeout: failed to connect service ":50051" within 1s
Feb 17 15:04:39.494429 master-0 kubenswrapper[8018]: >
Feb 17 15:04:40.055600 master-0 kubenswrapper[8018]: I0217 15:04:40.055494 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:04:40.056438 master-0 kubenswrapper[8018]: I0217 15:04:40.055616 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:04:40.178002 master-0 kubenswrapper[8018]: E0217 15:04:40.177787 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:04:30Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:04:30Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:04:30Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:04:30Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[],\\\"sizeBytes\\\":1701476551},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[],\\\"sizeBytes\\\":1234637517},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[],\\\"sizeBytes\\\":1213306565},{\\\"names\\\":[],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"
names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73
d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041\\\"],\\\"sizeBytes\\\":479280723},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b\\\"],\\\"sizeBytes\\\":479006001},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09\\\"],\\\"sizeBytes\\\":463090242},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\"],\\\"sizeBytes\\\":459915626},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c\\\"],\\\"sizeBytes\\\":458531660},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a\\\"],\\\"sizeBytes\\\":452956763},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d879204
8bbd216956\\\"],\\\"sizeBytes\\\":451401927},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b\\\"],\\\"sizeBytes\\\":443654349},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e\\\"],\\\"sizeBytes\\\":442871962},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0\\\"],\\\"sizeBytes\\\":438101353},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9cc42212fb15c1f3e6a88acaaa4919c9693be3c6099ea849d28855e231dc9e44\\\"],\\\"sizeBytes\\\":433480092}]}}\" for node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (patch nodes master-0)" Feb 17 15:04:42.671234 master-0 kubenswrapper[8018]: I0217 15:04:42.671027 8018 generic.go:334] "Generic (PLEG): container finished" podID="65d9f008-7777-48fe-85fe-9d54a7bbcea9" containerID="0ca9078aff730fc3a330cc56d95ecaf3845aab699d6709c0f7903274534d22bb" exitCode=0 Feb 17 15:04:42.673654 master-0 kubenswrapper[8018]: I0217 15:04:42.673555 8018 generic.go:334] "Generic (PLEG): container finished" podID="e259b5a1-837b-4cde-85f7-cd5781af08bd" containerID="8e1472c1d1be3f277a2b834719c46bd320c628415b71f468a2bd1ad63cb18ee3" exitCode=0 Feb 17 15:04:45.652890 master-0 kubenswrapper[8018]: I0217 15:04:45.652780 8018 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:04:46.698430 master-0 kubenswrapper[8018]: I0217 15:04:46.698343 8018 generic.go:334] "Generic (PLEG): 
container finished" podID="2b167b7b-2280-4c82-ac78-71c57aebe503" containerID="4c453c258107dc05c66b4fe7dfb751fa16a6ada9afb337ed9bd51bf0bf1e157f" exitCode=0 Feb 17 15:04:46.701210 master-0 kubenswrapper[8018]: I0217 15:04:46.701147 8018 generic.go:334] "Generic (PLEG): container finished" podID="0c58265d-32fb-4cf0-97d8-6c9a5d37fad9" containerID="290f694e7d12ca9521306200e6fad40d6869689c4b381a230ebfe0d9ab67ca09" exitCode=0 Feb 17 15:04:47.709715 master-0 kubenswrapper[8018]: I0217 15:04:47.709634 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-l24cg_4fd2c79d-1e10-4f09-8a33-c66598abc99a/network-operator/0.log" Feb 17 15:04:47.709715 master-0 kubenswrapper[8018]: I0217 15:04:47.709707 8018 generic.go:334] "Generic (PLEG): container finished" podID="4fd2c79d-1e10-4f09-8a33-c66598abc99a" containerID="10d84ccff2961ae0ad3f02bc199d5d344c04cfb73f881e75241a2774459f1897" exitCode=255 Feb 17 15:04:48.145969 master-0 kubenswrapper[8018]: I0217 15:04:48.145829 8018 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" start-of-body= Feb 17 15:04:48.145969 master-0 kubenswrapper[8018]: I0217 15:04:48.145938 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" Feb 17 15:04:49.417681 master-0 kubenswrapper[8018]: E0217 15:04:49.417573 8018 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:04:49.417681 master-0 kubenswrapper[8018]: I0217 15:04:49.417652 8018 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 17 15:04:50.054241 master-0 kubenswrapper[8018]: I0217 15:04:50.054123 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:04:50.054241 master-0 kubenswrapper[8018]: I0217 15:04:50.054231 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:04:50.178386 master-0 kubenswrapper[8018]: E0217 15:04:50.178211 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Feb 17 15:04:51.650564 master-0 kubenswrapper[8018]: E0217 15:04:51.650424 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 17 15:04:51.738163 master-0 kubenswrapper[8018]: I0217 15:04:51.738060 8018 generic.go:334] 
"Generic (PLEG): container finished" podID="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" containerID="c5052ce7c74d35fd56d2b65c411cf09269d730c14bf385a0a356573ac6d4ae86" exitCode=0 Feb 17 15:04:52.756447 master-0 kubenswrapper[8018]: I0217 15:04:52.756354 8018 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="bafb1d40abea56e15a55f39238f52822a8e7d4c344f770507c71ed614feff320" exitCode=0 Feb 17 15:04:53.713130 master-0 kubenswrapper[8018]: I0217 15:04:53.713012 8018 patch_prober.go:28] interesting pod/etcd-operator-67bf55ccdd-pjm6n container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 17 15:04:53.713130 master-0 kubenswrapper[8018]: I0217 15:04:53.713107 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" podUID="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 17 15:04:53.766322 master-0 kubenswrapper[8018]: I0217 15:04:53.766227 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/0.log" Feb 17 15:04:53.766322 master-0 kubenswrapper[8018]: I0217 15:04:53.766306 8018 generic.go:334] "Generic (PLEG): container finished" podID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerID="d5738e21e97a228370369f51d6b435b8805640e7757385cb234f1ddd01723651" exitCode=255 Feb 17 15:04:53.768874 master-0 kubenswrapper[8018]: I0217 15:04:53.768825 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-xwftw_7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/approver/0.log" Feb 
17 15:04:53.769436 master-0 kubenswrapper[8018]: I0217 15:04:53.769361 8018 generic.go:334] "Generic (PLEG): container finished" podID="7c6b911d-8db2-48e8-bce9-d4bcde1f55a0" containerID="55d3b1057ac7a6ad2c1bad42aa92f8880f4cec28c612f7db8db1627fa4374902" exitCode=1 Feb 17 15:04:55.283230 master-0 kubenswrapper[8018]: I0217 15:04:55.283100 8018 status_manager.go:851] "Failed to get status for pod" podUID="5de71cc1-08c3-4295-ac86-745c9d4fbb46" pod="openshift-etcd/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Feb 17 15:04:55.652852 master-0 kubenswrapper[8018]: I0217 15:04:55.652723 8018 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:04:58.805448 master-0 kubenswrapper[8018]: I0217 15:04:58.805360 8018 generic.go:334] "Generic (PLEG): container finished" podID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerID="b59bbfb9428af65d3b27dc7307524d7c342a46e0e7de78406b423b4b600990a9" exitCode=0 Feb 17 15:04:59.054079 master-0 kubenswrapper[8018]: I0217 15:04:59.053945 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:04:59.054079 master-0 kubenswrapper[8018]: I0217 15:04:59.054047 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" 
podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:04:59.054079 master-0 kubenswrapper[8018]: I0217 15:04:59.054074 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:04:59.054602 master-0 kubenswrapper[8018]: I0217 15:04:59.054134 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:04:59.418832 master-0 kubenswrapper[8018]: E0217 15:04:59.418420 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Feb 17 15:05:00.178953 master-0 kubenswrapper[8018]: E0217 15:05:00.178816 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:05:01.451231 master-0 kubenswrapper[8018]: E0217 15:05:01.451106 8018 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" 
pod="openshift-etcd/etcd-master-0-master-0" Feb 17 15:05:01.452160 master-0 kubenswrapper[8018]: E0217 15:05:01.451570 8018 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.012s" Feb 17 15:05:01.452160 master-0 kubenswrapper[8018]: I0217 15:05:01.451663 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:05:01.452627 master-0 kubenswrapper[8018]: I0217 15:05:01.452552 8018 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 17 15:05:01.452767 master-0 kubenswrapper[8018]: I0217 15:05:01.452684 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" containerID="cri-o://38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521" gracePeriod=30 Feb 17 15:05:01.462430 master-0 kubenswrapper[8018]: I0217 15:05:01.462358 8018 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 17 15:05:01.830089 master-0 kubenswrapper[8018]: I0217 15:05:01.829494 8018 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521" exitCode=2 Feb 17 15:05:01.832219 master-0 kubenswrapper[8018]: I0217 15:05:01.832171 8018 generic.go:334] "Generic (PLEG): container finished" podID="af61bda0-c7b4-489d-a671-eaa5299942fe" containerID="bf1c4446a3533f26fa5487fb18cd78bb806fca2fbee2a1ee4a787dfdef4578a7" 
exitCode=0 Feb 17 15:05:02.776403 master-0 kubenswrapper[8018]: E0217 15:05:02.776200 8018 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189510f04c329c5e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:04:12.69673891 +0000 UTC m=+85.449082010,LastTimestamp:2026-02-17 15:04:12.69673891 +0000 UTC m=+85.449082010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:05:02.841334 master-0 kubenswrapper[8018]: I0217 15:05:02.841220 8018 generic.go:334] "Generic (PLEG): container finished" podID="553d4535-9985-47e2-83ee-8fcfb6035e7b" containerID="e25ef4d4de66b3ffd3f590bda032ee8cda9109eed6a05975ad8ed0f50306f95e" exitCode=0 Feb 17 15:05:03.324810 master-0 kubenswrapper[8018]: E0217 15:05:03.324707 8018 projected.go:194] Error preparing data for projected volume kube-api-access-7gwpz for pod openshift-marketplace/certified-operators-2lg56: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 17 15:05:03.325095 master-0 kubenswrapper[8018]: E0217 15:05:03.324842 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz podName:fc216ba1-144a-4cc8-93db-85ab558a166a 
nodeName:}" failed. No retries permitted until 2026-02-17 15:05:04.324804528 +0000 UTC m=+137.077147608 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7gwpz" (UniqueName: "kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz") pod "certified-operators-2lg56" (UID: "fc216ba1-144a-4cc8-93db-85ab558a166a") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 17 15:05:03.325095 master-0 kubenswrapper[8018]: E0217 15:05:03.324707 8018 projected.go:194] Error preparing data for projected volume kube-api-access-zr2dv for pod openshift-marketplace/community-operators-t8vtc: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 17 15:05:03.325095 master-0 kubenswrapper[8018]: E0217 15:05:03.324980 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv podName:c33efa80-fbeb-438a-86e3-d22d7c12d3e9 nodeName:}" failed. No retries permitted until 2026-02-17 15:05:04.324943511 +0000 UTC m=+137.077286591 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zr2dv" (UniqueName: "kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv") pod "community-operators-t8vtc" (UID: "c33efa80-fbeb-438a-86e3-d22d7c12d3e9") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 17 15:05:04.370738 master-0 kubenswrapper[8018]: I0217 15:05:04.370603 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwpz\" (UniqueName: \"kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:05:04.370738 master-0 kubenswrapper[8018]: I0217 15:05:04.370743 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2dv\" (UniqueName: \"kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:05:09.054985 master-0 kubenswrapper[8018]: I0217 15:05:09.054862 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:05:09.054985 master-0 kubenswrapper[8018]: I0217 15:05:09.054968 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: 
connection refused" Feb 17 15:05:09.058411 master-0 kubenswrapper[8018]: I0217 15:05:09.055073 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:05:09.058411 master-0 kubenswrapper[8018]: I0217 15:05:09.055182 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:05:09.619840 master-0 kubenswrapper[8018]: E0217 15:05:09.619700 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 17 15:05:10.179575 master-0 kubenswrapper[8018]: E0217 15:05:10.179410 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:05:19.054710 master-0 kubenswrapper[8018]: I0217 15:05:19.054572 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:05:19.054710 master-0 
kubenswrapper[8018]: I0217 15:05:19.054615 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:05:19.054710 master-0 kubenswrapper[8018]: I0217 15:05:19.054691 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:05:19.055802 master-0 kubenswrapper[8018]: I0217 15:05:19.054764 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:05:20.021917 master-0 kubenswrapper[8018]: E0217 15:05:20.021773 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 17 15:05:20.180156 master-0 kubenswrapper[8018]: E0217 15:05:20.180033 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:05:20.180156 master-0 kubenswrapper[8018]: E0217 
15:05:20.180099 8018 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:05:22.032970 master-0 kubenswrapper[8018]: E0217 15:05:22.032913 8018 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 17 15:05:22.032970 master-0 kubenswrapper[8018]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d5655115-c223-42ed-a93d-9d609e55c901_0(9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff" Netns:"/var/run/netns/2766612a-a335-4bdc-94a4-bd48079be634" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff;K8S_POD_UID=d5655115-c223-42ed-a93d-9d609e55c901" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d5655115-c223-42ed-a93d-9d609e55c901]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 17 15:05:22.032970 master-0 kubenswrapper[8018]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:05:22.032970 master-0 kubenswrapper[8018]: > Feb 17 15:05:22.033545 master-0 kubenswrapper[8018]: E0217 15:05:22.032997 8018 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 17 15:05:22.033545 master-0 kubenswrapper[8018]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d5655115-c223-42ed-a93d-9d609e55c901_0(9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff" Netns:"/var/run/netns/2766612a-a335-4bdc-94a4-bd48079be634" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff;K8S_POD_UID=d5655115-c223-42ed-a93d-9d609e55c901" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d5655115-c223-42ed-a93d-9d609e55c901]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 17 15:05:22.033545 master-0 kubenswrapper[8018]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:05:22.033545 master-0 kubenswrapper[8018]: > pod="openshift-kube-controller-manager/installer-2-master-0" Feb 17 15:05:22.033545 master-0 kubenswrapper[8018]: E0217 15:05:22.033246 8018 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 17 15:05:22.033545 master-0 kubenswrapper[8018]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d5655115-c223-42ed-a93d-9d609e55c901_0(9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff" Netns:"/var/run/netns/2766612a-a335-4bdc-94a4-bd48079be634" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff;K8S_POD_UID=d5655115-c223-42ed-a93d-9d609e55c901" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: 
[openshift-kube-controller-manager/installer-2-master-0/d5655115-c223-42ed-a93d-9d609e55c901]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 17 15:05:22.033545 master-0 kubenswrapper[8018]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:05:22.033545 master-0 kubenswrapper[8018]: > pod="openshift-kube-controller-manager/installer-2-master-0" Feb 17 15:05:22.033545 master-0 kubenswrapper[8018]: E0217 15:05:22.033331 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-2-master-0_openshift-kube-controller-manager(d5655115-c223-42ed-a93d-9d609e55c901)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-2-master-0_openshift-kube-controller-manager(d5655115-c223-42ed-a93d-9d609e55c901)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d5655115-c223-42ed-a93d-9d609e55c901_0(9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:\\\"9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff\\\" Netns:\\\"/var/run/netns/2766612a-a335-4bdc-94a4-bd48079be634\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff;K8S_POD_UID=d5655115-c223-42ed-a93d-9d609e55c901\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d5655115-c223-42ed-a93d-9d609e55c901]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="d5655115-c223-42ed-a93d-9d609e55c901" Feb 17 15:05:22.966986 master-0 kubenswrapper[8018]: I0217 15:05:22.966880 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 17 15:05:22.967388 master-0 kubenswrapper[8018]: I0217 15:05:22.967367 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 17 15:05:29.054907 master-0 kubenswrapper[8018]: I0217 15:05:29.054774 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:05:29.054907 master-0 kubenswrapper[8018]: I0217 15:05:29.054876 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:05:30.822890 master-0 kubenswrapper[8018]: E0217 15:05:30.822742 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 17 15:05:35.465825 master-0 kubenswrapper[8018]: E0217 15:05:35.465742 8018 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 17 15:05:35.466873 master-0 kubenswrapper[8018]: E0217 15:05:35.466076 8018 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.014s" Feb 17 15:05:35.467416 master-0 
kubenswrapper[8018]: I0217 15:05:35.467348 8018 scope.go:117] "RemoveContainer" containerID="b59bbfb9428af65d3b27dc7307524d7c342a46e0e7de78406b423b4b600990a9" Feb 17 15:05:35.467743 master-0 kubenswrapper[8018]: I0217 15:05:35.467681 8018 scope.go:117] "RemoveContainer" containerID="0ca9078aff730fc3a330cc56d95ecaf3845aab699d6709c0f7903274534d22bb" Feb 17 15:05:35.467910 master-0 kubenswrapper[8018]: I0217 15:05:35.467882 8018 scope.go:117] "RemoveContainer" containerID="8e1472c1d1be3f277a2b834719c46bd320c628415b71f468a2bd1ad63cb18ee3" Feb 17 15:05:35.473188 master-0 kubenswrapper[8018]: I0217 15:05:35.472932 8018 scope.go:117] "RemoveContainer" containerID="290f694e7d12ca9521306200e6fad40d6869689c4b381a230ebfe0d9ab67ca09" Feb 17 15:05:35.473188 master-0 kubenswrapper[8018]: I0217 15:05:35.473145 8018 scope.go:117] "RemoveContainer" containerID="c5052ce7c74d35fd56d2b65c411cf09269d730c14bf385a0a356573ac6d4ae86" Feb 17 15:05:35.473316 master-0 kubenswrapper[8018]: I0217 15:05:35.473282 8018 scope.go:117] "RemoveContainer" containerID="55d3b1057ac7a6ad2c1bad42aa92f8880f4cec28c612f7db8db1627fa4374902" Feb 17 15:05:35.474798 master-0 kubenswrapper[8018]: I0217 15:05:35.473928 8018 scope.go:117] "RemoveContainer" containerID="e25ef4d4de66b3ffd3f590bda032ee8cda9109eed6a05975ad8ed0f50306f95e" Feb 17 15:05:35.476914 master-0 kubenswrapper[8018]: I0217 15:05:35.476445 8018 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 17 15:05:36.046141 master-0 kubenswrapper[8018]: I0217 15:05:36.046010 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-xwftw_7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/approver/0.log" Feb 17 15:05:36.354644 master-0 kubenswrapper[8018]: I0217 15:05:36.354596 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_03da22e3-956d-4c8a-bfd6-c1778e5d627c/installer/0.log" Feb 17 15:05:36.354748 
master-0 kubenswrapper[8018]: I0217 15:05:36.354671 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 17 15:05:36.411418 master-0 kubenswrapper[8018]: I0217 15:05:36.411353 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03da22e3-956d-4c8a-bfd6-c1778e5d627c-kubelet-dir\") pod \"03da22e3-956d-4c8a-bfd6-c1778e5d627c\" (UID: \"03da22e3-956d-4c8a-bfd6-c1778e5d627c\") " Feb 17 15:05:36.411418 master-0 kubenswrapper[8018]: I0217 15:05:36.411419 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03da22e3-956d-4c8a-bfd6-c1778e5d627c-kube-api-access\") pod \"03da22e3-956d-4c8a-bfd6-c1778e5d627c\" (UID: \"03da22e3-956d-4c8a-bfd6-c1778e5d627c\") " Feb 17 15:05:36.411733 master-0 kubenswrapper[8018]: I0217 15:05:36.411470 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03da22e3-956d-4c8a-bfd6-c1778e5d627c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "03da22e3-956d-4c8a-bfd6-c1778e5d627c" (UID: "03da22e3-956d-4c8a-bfd6-c1778e5d627c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:05:36.411733 master-0 kubenswrapper[8018]: I0217 15:05:36.411490 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/03da22e3-956d-4c8a-bfd6-c1778e5d627c-var-lock\") pod \"03da22e3-956d-4c8a-bfd6-c1778e5d627c\" (UID: \"03da22e3-956d-4c8a-bfd6-c1778e5d627c\") " Feb 17 15:05:36.411733 master-0 kubenswrapper[8018]: I0217 15:05:36.411529 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03da22e3-956d-4c8a-bfd6-c1778e5d627c-var-lock" (OuterVolumeSpecName: "var-lock") pod "03da22e3-956d-4c8a-bfd6-c1778e5d627c" (UID: "03da22e3-956d-4c8a-bfd6-c1778e5d627c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:05:36.411988 master-0 kubenswrapper[8018]: I0217 15:05:36.411947 8018 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/03da22e3-956d-4c8a-bfd6-c1778e5d627c-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:05:36.411988 master-0 kubenswrapper[8018]: I0217 15:05:36.411978 8018 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03da22e3-956d-4c8a-bfd6-c1778e5d627c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:05:36.414485 master-0 kubenswrapper[8018]: I0217 15:05:36.414216 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03da22e3-956d-4c8a-bfd6-c1778e5d627c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "03da22e3-956d-4c8a-bfd6-c1778e5d627c" (UID: "03da22e3-956d-4c8a-bfd6-c1778e5d627c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:05:36.513795 master-0 kubenswrapper[8018]: I0217 15:05:36.513704 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03da22e3-956d-4c8a-bfd6-c1778e5d627c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 17 15:05:36.779096 master-0 kubenswrapper[8018]: E0217 15:05:36.778904 8018 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189510f04c338caf kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:04:12.696800431 +0000 UTC m=+85.449143481,LastTimestamp:2026-02-17 15:04:12.696800431 +0000 UTC m=+85.449143481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:05:37.061156 master-0 kubenswrapper[8018]: I0217 15:05:37.061034 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_03da22e3-956d-4c8a-bfd6-c1778e5d627c/installer/0.log" Feb 17 15:05:37.061156 master-0 kubenswrapper[8018]: I0217 15:05:37.061154 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 17 15:05:38.072901 master-0 kubenswrapper[8018]: I0217 15:05:38.072794 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/0.log" Feb 17 15:05:38.072901 master-0 kubenswrapper[8018]: I0217 15:05:38.072860 8018 generic.go:334] "Generic (PLEG): container finished" podID="22a30079-d7fc-49cf-882e-1c5022cb5bf6" containerID="e96d7161de590628bad20a520afcf9b1363c2b5f7629d556a379b4230528784f" exitCode=1 Feb 17 15:05:38.374223 master-0 kubenswrapper[8018]: E0217 15:05:38.374147 8018 projected.go:194] Error preparing data for projected volume kube-api-access-zr2dv for pod openshift-marketplace/community-operators-t8vtc: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 17 15:05:38.374420 master-0 kubenswrapper[8018]: E0217 15:05:38.374168 8018 projected.go:194] Error preparing data for projected volume kube-api-access-7gwpz for pod openshift-marketplace/certified-operators-2lg56: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 17 15:05:38.374420 master-0 kubenswrapper[8018]: E0217 15:05:38.374260 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv podName:c33efa80-fbeb-438a-86e3-d22d7c12d3e9 nodeName:}" failed. No retries permitted until 2026-02-17 15:05:40.374232016 +0000 UTC m=+173.126575106 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zr2dv" (UniqueName: "kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv") pod "community-operators-t8vtc" (UID: "c33efa80-fbeb-438a-86e3-d22d7c12d3e9") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 17 15:05:38.374420 master-0 kubenswrapper[8018]: E0217 15:05:38.374305 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz podName:fc216ba1-144a-4cc8-93db-85ab558a166a nodeName:}" failed. No retries permitted until 2026-02-17 15:05:40.374284457 +0000 UTC m=+173.126627537 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7gwpz" (UniqueName: "kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz") pod "certified-operators-2lg56" (UID: "fc216ba1-144a-4cc8-93db-85ab558a166a") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 17 15:05:39.054944 master-0 kubenswrapper[8018]: I0217 15:05:39.054813 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:05:39.054944 master-0 kubenswrapper[8018]: I0217 15:05:39.054925 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:05:40.400520 master-0 kubenswrapper[8018]: E0217 
15:05:40.400244 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:05:30Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:05:30Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:05:30Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:05:30Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3e90d0a6840e7f67900c763906a0628ddf209cb666c54c2dda0f4a84964a5cec\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:c71d0b62dff668e0f4be49e4976deda87032ae569a87f53898bd9e5489d8a621\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701476551},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:14398311b101163ddd1de78c093e161c5d3c9aac51a04e3d3d842fca6317ab0f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:5a091792b99bf4dfaec25f4c8e29da579e2f452d48b924c8323a18accb7f3290\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234637517},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:ad77d0ead8abca8b884fad3be18215dbe8b4f8f098053551e4a899298cf5c918\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha
256:b5338e2ca87e0b47fec93f55559f0ed6b39eef3ed3b7f085a4f0b205ccb86a5d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1213306565},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\"],\\\"sizeBytes\\\":913084961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf
123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041\\\"],\\\"sizeBytes\\\":479280723},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b\\\"],\\\"sizeBytes\\\":479006001},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09\\\"],\\\"sizeBytes\\\":463090242},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\"],\\\"sizeBytes\\\":459915626},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c\\\"],\\\"sizeBytes\\\":458531660},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a\\\"],\\\"sizeBytes\\\":452956763},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956\\\"],\\\"sizeBytes\\\":451401927},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b\\\"],\\\"sizeBytes\\\":443654349},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e\\\"],\\\"sizeBytes\\\":442871962},{\\\"names\\\":[\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0\\\"],\\\"sizeBytes\\\":438101353}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:05:40.458785 master-0 kubenswrapper[8018]: I0217 15:05:40.458689 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwpz\" (UniqueName: \"kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:05:40.458785 master-0 kubenswrapper[8018]: I0217 15:05:40.458783 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2dv\" (UniqueName: \"kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:05:42.107213 master-0 kubenswrapper[8018]: I0217 15:05:42.107154 8018 generic.go:334] "Generic (PLEG): container finished" podID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerID="43796d7d27cac90e31c0e4d2ee9bf43eddeb31538289e18b8ee843798af029b2" exitCode=0 Feb 17 15:05:42.424295 master-0 kubenswrapper[8018]: E0217 15:05:42.424198 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 17 15:05:43.115146 master-0 kubenswrapper[8018]: I0217 15:05:43.115083 8018 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/0.log" Feb 17 15:05:43.115146 master-0 kubenswrapper[8018]: I0217 15:05:43.115139 8018 generic.go:334] "Generic (PLEG): container finished" podID="129dba1e-73df-4ea4-96c0-3eba78d568ba" containerID="99addda3858d20caa2954c52d0e4203716a8b098e6c6d5e147015e80f102e5a9" exitCode=1 Feb 17 15:05:49.054966 master-0 kubenswrapper[8018]: I0217 15:05:49.054867 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:05:49.055888 master-0 kubenswrapper[8018]: I0217 15:05:49.054962 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:05:50.401240 master-0 kubenswrapper[8018]: E0217 15:05:50.401118 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:05:50.495064 master-0 kubenswrapper[8018]: I0217 15:05:50.494981 8018 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Feb 17 15:05:50.495064 master-0 
kubenswrapper[8018]: I0217 15:05:50.494995 8018 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Feb 17 15:05:50.495064 master-0 kubenswrapper[8018]: I0217 15:05:50.495064 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" Feb 17 15:05:50.495380 master-0 kubenswrapper[8018]: I0217 15:05:50.495119 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" Feb 17 15:05:51.164326 master-0 kubenswrapper[8018]: I0217 15:05:51.164261 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-4n2ls_50c51fe2-32aa-430f-8da0-7cf3b9519131/manager/0.log" Feb 17 15:05:51.164326 master-0 kubenswrapper[8018]: I0217 15:05:51.164322 8018 generic.go:334] "Generic (PLEG): container finished" podID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerID="c1a7bb61a118b809395aec1f33f427a3425dcd9dc3136b6302e76b1e5de619e7" exitCode=1 Feb 17 15:05:54.186492 master-0 kubenswrapper[8018]: I0217 15:05:54.186380 8018 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/1.log" Feb 17 15:05:54.189248 master-0 kubenswrapper[8018]: I0217 15:05:54.189171 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/0.log" Feb 17 15:05:54.189421 master-0 kubenswrapper[8018]: I0217 15:05:54.189368 8018 generic.go:334] "Generic (PLEG): container finished" podID="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" containerID="304679e66f000484b85f89bc09bd351cba1f664073d85860e51117843af4fd58" exitCode=255 Feb 17 15:05:55.096372 master-0 kubenswrapper[8018]: I0217 15:05:55.096260 8018 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-4n2ls container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body= Feb 17 15:05:55.096372 master-0 kubenswrapper[8018]: I0217 15:05:55.096363 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podUID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" Feb 17 15:05:55.288861 master-0 kubenswrapper[8018]: I0217 15:05:55.288732 8018 status_manager.go:851] "Failed to get status for pod" podUID="2ac9a5d3-569e-4434-839e-691eacbe13df" pod="openshift-marketplace/redhat-operators-7x72v" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods redhat-operators-7x72v)" Feb 17 15:05:55.625737 master-0 
kubenswrapper[8018]: E0217 15:05:55.625648 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 17 15:05:57.671731 master-0 kubenswrapper[8018]: E0217 15:05:57.671620 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-zr2dv], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-marketplace/community-operators-t8vtc" podUID="c33efa80-fbeb-438a-86e3-d22d7c12d3e9" Feb 17 15:05:57.698250 master-0 kubenswrapper[8018]: E0217 15:05:57.698140 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-7gwpz], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-marketplace/certified-operators-2lg56" podUID="fc216ba1-144a-4cc8-93db-85ab558a166a" Feb 17 15:05:58.213852 master-0 kubenswrapper[8018]: I0217 15:05:58.213739 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:05:58.214132 master-0 kubenswrapper[8018]: I0217 15:05:58.213781 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:05:59.054794 master-0 kubenswrapper[8018]: I0217 15:05:59.054718 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:05:59.055850 master-0 kubenswrapper[8018]: I0217 15:05:59.054814 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:06:00.402258 master-0 kubenswrapper[8018]: E0217 15:06:00.402136 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:06:00.495956 master-0 kubenswrapper[8018]: I0217 15:06:00.495831 8018 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Feb 17 15:06:00.495956 master-0 kubenswrapper[8018]: I0217 15:06:00.495941 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: 
connect: connection refused" Feb 17 15:06:00.497767 master-0 kubenswrapper[8018]: I0217 15:06:00.495277 8018 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Feb 17 15:06:00.498077 master-0 kubenswrapper[8018]: I0217 15:06:00.497777 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" Feb 17 15:06:03.251709 master-0 kubenswrapper[8018]: I0217 15:06:03.251640 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-jdfsm_68954d1e-2147-4465-9817-a3c04cbc19b0/manager/0.log" Feb 17 15:06:03.252861 master-0 kubenswrapper[8018]: I0217 15:06:03.252406 8018 generic.go:334] "Generic (PLEG): container finished" podID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerID="e039cb4463938f81d7404a930ef7ab4b00269f6ed6b9151f252951ea9d381dc4" exitCode=1 Feb 17 15:06:03.256426 master-0 kubenswrapper[8018]: I0217 15:06:03.256344 8018 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="0d43de2c98bf528ec1d0c3755bf0e52b97588f5907fd26bee582cfe625d16663" exitCode=1 Feb 17 15:06:05.096217 master-0 kubenswrapper[8018]: I0217 15:06:05.096075 8018 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-4n2ls container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.39:8081/healthz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body= Feb 17 15:06:05.096217 
master-0 kubenswrapper[8018]: I0217 15:06:05.096182 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podUID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/healthz\": dial tcp 10.128.0.39:8081: connect: connection refused" Feb 17 15:06:05.097598 master-0 kubenswrapper[8018]: I0217 15:06:05.096353 8018 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-4n2ls container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body= Feb 17 15:06:05.097598 master-0 kubenswrapper[8018]: I0217 15:06:05.096519 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podUID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" Feb 17 15:06:09.054493 master-0 kubenswrapper[8018]: I0217 15:06:09.054383 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:06:09.055373 master-0 kubenswrapper[8018]: I0217 15:06:09.054523 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: 
connection refused" Feb 17 15:06:09.480025 master-0 kubenswrapper[8018]: E0217 15:06:09.479938 8018 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 17 15:06:09.480397 master-0 kubenswrapper[8018]: E0217 15:06:09.480328 8018 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.013s" Feb 17 15:06:09.480760 master-0 kubenswrapper[8018]: I0217 15:06:09.480674 8018 scope.go:117] "RemoveContainer" containerID="8105fa4b966940334c286ed94a1f0129c72a04a09b1bf683900cc1744fb06fec" Feb 17 15:06:09.492347 master-0 kubenswrapper[8018]: I0217 15:06:09.492280 8018 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 17 15:06:09.501712 master-0 kubenswrapper[8018]: I0217 15:06:09.501651 8018 scope.go:117] "RemoveContainer" containerID="4d0630e2330edb92a7d17fc9b9a41a0b13733df95ae437b7fe0b5957cb60ed7a" Feb 17 15:06:10.403373 master-0 kubenswrapper[8018]: E0217 15:06:10.403277 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:06:10.495616 master-0 kubenswrapper[8018]: I0217 15:06:10.495416 8018 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Feb 17 15:06:10.495616 master-0 kubenswrapper[8018]: I0217 15:06:10.495541 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" 
podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" Feb 17 15:06:10.498683 master-0 kubenswrapper[8018]: I0217 15:06:10.498589 8018 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Feb 17 15:06:10.498832 master-0 kubenswrapper[8018]: I0217 15:06:10.498716 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" Feb 17 15:06:10.782621 master-0 kubenswrapper[8018]: E0217 15:06:10.782246 8018 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{openshift-controller-manager-operator-5f5f84757d-dsfkk.189510dea01f6706 openshift-controller-manager-operator 4014 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-5f5f84757d-dsfkk,UID:c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda,APIVersion:v1,ResourceVersion:3676,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:02:56 +0000 
UTC,LastTimestamp:2026-02-17 15:04:18.880585935 +0000 UTC m=+91.632929005,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:06:11.418431 master-0 kubenswrapper[8018]: I0217 15:06:11.418309 8018 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body= Feb 17 15:06:11.419697 master-0 kubenswrapper[8018]: I0217 15:06:11.418437 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" Feb 17 15:06:12.028059 master-0 kubenswrapper[8018]: E0217 15:06:12.027880 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 17 15:06:14.462531 master-0 kubenswrapper[8018]: E0217 15:06:14.462419 8018 projected.go:194] Error preparing data for projected volume kube-api-access-7gwpz for pod openshift-marketplace/certified-operators-2lg56: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 17 15:06:14.462531 master-0 kubenswrapper[8018]: E0217 15:06:14.462442 8018 projected.go:194] Error preparing data for projected volume kube-api-access-zr2dv for pod openshift-marketplace/community-operators-t8vtc: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline 
exceeded Feb 17 15:06:14.463375 master-0 kubenswrapper[8018]: E0217 15:06:14.462582 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz podName:fc216ba1-144a-4cc8-93db-85ab558a166a nodeName:}" failed. No retries permitted until 2026-02-17 15:06:18.462554908 +0000 UTC m=+211.214897988 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-7gwpz" (UniqueName: "kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz") pod "certified-operators-2lg56" (UID: "fc216ba1-144a-4cc8-93db-85ab558a166a") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 17 15:06:14.463375 master-0 kubenswrapper[8018]: E0217 15:06:14.462616 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv podName:c33efa80-fbeb-438a-86e3-d22d7c12d3e9 nodeName:}" failed. No retries permitted until 2026-02-17 15:06:18.462602509 +0000 UTC m=+211.214945599 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zr2dv" (UniqueName: "kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv") pod "community-operators-t8vtc" (UID: "c33efa80-fbeb-438a-86e3-d22d7c12d3e9") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Feb 17 15:06:15.096616 master-0 kubenswrapper[8018]: I0217 15:06:15.096510 8018 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-4n2ls container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body= Feb 17 15:06:15.096897 master-0 kubenswrapper[8018]: I0217 15:06:15.096621 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podUID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" Feb 17 15:06:18.562306 master-0 kubenswrapper[8018]: I0217 15:06:18.562219 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwpz\" (UniqueName: \"kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:06:18.562306 master-0 kubenswrapper[8018]: I0217 15:06:18.562320 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2dv\" (UniqueName: \"kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc" 
Feb 17 15:06:19.054910 master-0 kubenswrapper[8018]: I0217 15:06:19.054837 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:06:19.055266 master-0 kubenswrapper[8018]: I0217 15:06:19.054927 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:06:20.403827 master-0 kubenswrapper[8018]: E0217 15:06:20.403726 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 15:06:20.403827 master-0 kubenswrapper[8018]: E0217 15:06:20.403779 8018 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:06:20.495291 master-0 kubenswrapper[8018]: I0217 15:06:20.495174 8018 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Feb 17 15:06:20.495291 master-0 kubenswrapper[8018]: I0217 15:06:20.495254 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" 
probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" Feb 17 15:06:21.417961 master-0 kubenswrapper[8018]: I0217 15:06:21.417882 8018 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.36:8081/healthz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body= Feb 17 15:06:21.418932 master-0 kubenswrapper[8018]: I0217 15:06:21.417964 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/healthz\": dial tcp 10.128.0.36:8081: connect: connection refused" Feb 17 15:06:21.418932 master-0 kubenswrapper[8018]: I0217 15:06:21.417896 8018 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body= Feb 17 15:06:21.418932 master-0 kubenswrapper[8018]: I0217 15:06:21.418042 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" Feb 17 15:06:23.744354 master-0 kubenswrapper[8018]: E0217 15:06:23.744269 8018 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 17 15:06:23.744354 master-0 kubenswrapper[8018]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_installer-2-master-0_openshift-kube-controller-manager_d5655115-c223-42ed-a93d-9d609e55c901_0(d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733" Netns:"/var/run/netns/3d0983f6-5926-494a-b7e7-8e345122a0c6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733;K8S_POD_UID=d5655115-c223-42ed-a93d-9d609e55c901" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d5655115-c223-42ed-a93d-9d609e55c901]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 17 15:06:23.744354 master-0 kubenswrapper[8018]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:06:23.744354 master-0 kubenswrapper[8018]: > Feb 17 15:06:23.745316 master-0 
kubenswrapper[8018]: E0217 15:06:23.744374 8018 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 17 15:06:23.745316 master-0 kubenswrapper[8018]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d5655115-c223-42ed-a93d-9d609e55c901_0(d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733" Netns:"/var/run/netns/3d0983f6-5926-494a-b7e7-8e345122a0c6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733;K8S_POD_UID=d5655115-c223-42ed-a93d-9d609e55c901" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d5655115-c223-42ed-a93d-9d609e55c901]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 17 15:06:23.745316 master-0 kubenswrapper[8018]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:06:23.745316 master-0 kubenswrapper[8018]: > pod="openshift-kube-controller-manager/installer-2-master-0" Feb 17 15:06:23.745316 master-0 kubenswrapper[8018]: E0217 15:06:23.744406 8018 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 17 15:06:23.745316 master-0 kubenswrapper[8018]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d5655115-c223-42ed-a93d-9d609e55c901_0(d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733" Netns:"/var/run/netns/3d0983f6-5926-494a-b7e7-8e345122a0c6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733;K8S_POD_UID=d5655115-c223-42ed-a93d-9d609e55c901" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d5655115-c223-42ed-a93d-9d609e55c901]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update 
failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 17 15:06:23.745316 master-0 kubenswrapper[8018]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:06:23.745316 master-0 kubenswrapper[8018]: > pod="openshift-kube-controller-manager/installer-2-master-0" Feb 17 15:06:23.745316 master-0 kubenswrapper[8018]: E0217 15:06:23.744537 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-2-master-0_openshift-kube-controller-manager(d5655115-c223-42ed-a93d-9d609e55c901)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-2-master-0_openshift-kube-controller-manager(d5655115-c223-42ed-a93d-9d609e55c901)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d5655115-c223-42ed-a93d-9d609e55c901_0(d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733\\\" Netns:\\\"/var/run/netns/3d0983f6-5926-494a-b7e7-8e345122a0c6\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733;K8S_POD_UID=d5655115-c223-42ed-a93d-9d609e55c901\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d5655115-c223-42ed-a93d-9d609e55c901]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="d5655115-c223-42ed-a93d-9d609e55c901" Feb 17 15:06:24.389140 master-0 kubenswrapper[8018]: I0217 15:06:24.389016 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 17 15:06:24.389836 master-0 kubenswrapper[8018]: I0217 15:06:24.389784 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 17 15:06:25.096217 master-0 kubenswrapper[8018]: I0217 15:06:25.096094 8018 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-4n2ls container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body= Feb 17 15:06:25.096217 master-0 kubenswrapper[8018]: I0217 15:06:25.096200 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podUID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" Feb 17 15:06:25.097236 master-0 kubenswrapper[8018]: I0217 15:06:25.096428 8018 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-4n2ls container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.39:8081/healthz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body= Feb 17 15:06:25.097236 master-0 kubenswrapper[8018]: I0217 15:06:25.096555 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podUID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/healthz\": dial tcp 10.128.0.39:8081: connect: connection refused" Feb 17 15:06:29.029209 master-0 kubenswrapper[8018]: E0217 15:06:29.029128 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 17 15:06:29.054718 master-0 kubenswrapper[8018]: I0217 15:06:29.054609 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:06:29.054908 master-0 kubenswrapper[8018]: I0217 15:06:29.054698 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:06:30.497766 master-0 kubenswrapper[8018]: I0217 15:06:30.495987 8018 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Feb 17 15:06:30.497766 master-0 kubenswrapper[8018]: I0217 15:06:30.496066 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" Feb 17 15:06:31.417700 master-0 kubenswrapper[8018]: I0217 15:06:31.417649 8018 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection 
refused" start-of-body= Feb 17 15:06:31.418080 master-0 kubenswrapper[8018]: I0217 15:06:31.418038 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" Feb 17 15:06:32.439024 master-0 kubenswrapper[8018]: I0217 15:06:32.438932 8018 generic.go:334] "Generic (PLEG): container finished" podID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerID="3b54e0904c922403e7243ecec6e01879618fe54346e8502751862a4c275c3a59" exitCode=0 Feb 17 15:06:33.905526 master-0 kubenswrapper[8018]: I0217 15:06:33.905410 8018 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Feb 17 15:06:33.906587 master-0 kubenswrapper[8018]: I0217 15:06:33.905561 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Feb 17 15:06:33.906587 master-0 kubenswrapper[8018]: I0217 15:06:33.905583 8018 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Feb 17 15:06:33.906587 master-0 kubenswrapper[8018]: I0217 15:06:33.905729 8018 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Feb 17 15:06:35.096276 master-0 kubenswrapper[8018]: I0217 15:06:35.096170 8018 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-4n2ls container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body= Feb 17 15:06:35.096941 master-0 kubenswrapper[8018]: I0217 15:06:35.096289 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podUID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" Feb 17 15:06:39.055262 master-0 kubenswrapper[8018]: I0217 15:06:39.055148 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:06:39.056086 master-0 kubenswrapper[8018]: I0217 15:06:39.055252 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:06:40.486686 master-0 kubenswrapper[8018]: I0217 15:06:40.486577 8018 generic.go:334] 
"Generic (PLEG): container finished" podID="31e31afc-79d5-46f4-9835-0fd11da9465f" containerID="a532d001ee07ff8e8b23a5da938b61904c6c24e314b07a548890529a67528fab" exitCode=0 Feb 17 15:06:40.495649 master-0 kubenswrapper[8018]: I0217 15:06:40.495513 8018 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Feb 17 15:06:40.495649 master-0 kubenswrapper[8018]: I0217 15:06:40.495608 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" Feb 17 15:06:40.703141 master-0 kubenswrapper[8018]: E0217 15:06:40.702854 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:06:30Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:06:30Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:06:30Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:06:30Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3e90d0a6840e7f67900c763906a0628ddf209cb666c54c2dda0f4a84964a5cec\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:c71d0b62dff668e0f4be49e4976deda87032ae569a87f53898bd9e5489d8a621\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701476551},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:14398311b101163ddd1de78c093e161c5d3c9aac51a04e3d3d842fca6317ab0f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:5a091792b99bf4dfaec25f4c8e29da579e2f452d48b924c8323a18accb7f3290\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234637517},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:ad77d0ead8abca8b884fad3be18215dbe8b4f8f098053551e4a899298cf5c918\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:b5338e2ca87e0b47fec93f55559f0ed6b39eef3ed3b7f085a4f0b205ccb86a5d\\\",\\\"registry.redhat.io/redhat/community-opera
tor-index:v4.18\\\"],\\\"sizeBytes\\\":1213306565},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\"],\\\"sizeBytes\\\":913084961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056e
c6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041\\\"],\\\"sizeBytes\\\":479280723},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b\\\"],\\\"sizeBytes\\\":479006001},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09\\\"],\\\"sizeBytes\\\":463090242},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\"],\\\"sizeBytes\\\":459915626},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c\\\"],\\\"sizeBytes\\\":458531660},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a\\\"],\\\"sizeBytes\\\":452956763},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956\\\"],\\\"sizeBytes\\\":451401927},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b\\\"],\\\"sizeBytes\\\":443654349},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e\\\"],\\\"sizeBytes\\\":442871962},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0\\\
"],\\\"sizeBytes\\\":438101353}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:06:41.418321 master-0 kubenswrapper[8018]: I0217 15:06:41.418247 8018 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.36:8081/healthz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body= Feb 17 15:06:41.418321 master-0 kubenswrapper[8018]: I0217 15:06:41.418264 8018 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body= Feb 17 15:06:41.418321 master-0 kubenswrapper[8018]: I0217 15:06:41.418319 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/healthz\": dial tcp 10.128.0.36:8081: connect: connection refused" Feb 17 15:06:41.418831 master-0 kubenswrapper[8018]: I0217 15:06:41.418342 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" 
output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" Feb 17 15:06:43.496311 master-0 kubenswrapper[8018]: E0217 15:06:43.496209 8018 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 17 15:06:43.497016 master-0 kubenswrapper[8018]: E0217 15:06:43.496522 8018 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Feb 17 15:06:43.497016 master-0 kubenswrapper[8018]: I0217 15:06:43.496566 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sft6r" event={"ID":"e2994de0-1535-423a-90ce-019043cd4b9d","Type":"ContainerStarted","Data":"a465be9d7b9d972d86a858d1d9e92c970c7841161e35514de3f4d8707e158519"} Feb 17 15:06:43.497378 master-0 kubenswrapper[8018]: I0217 15:06:43.497315 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"03da22e3-956d-4c8a-bfd6-c1778e5d627c","Type":"ContainerDied","Data":"848358e86030aaad08f0f93cbd72a6dd3c9d1bf771c63059da694d462594c54f"} Feb 17 15:06:43.497378 master-0 kubenswrapper[8018]: I0217 15:06:43.497374 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7x72v" event={"ID":"2ac9a5d3-569e-4434-839e-691eacbe13df","Type":"ContainerStarted","Data":"7397d4596fe2a2dae9588ce30d943b39077360c93f90cf8337de17c411fc2457"} Feb 17 15:06:43.498227 master-0 kubenswrapper[8018]: I0217 15:06:43.498168 8018 scope.go:117] "RemoveContainer" containerID="4c453c258107dc05c66b4fe7dfb751fa16a6ada9afb337ed9bd51bf0bf1e157f" Feb 17 15:06:43.498303 master-0 kubenswrapper[8018]: I0217 15:06:43.498265 8018 scope.go:117] "RemoveContainer" containerID="bf1c4446a3533f26fa5487fb18cd78bb806fca2fbee2a1ee4a787dfdef4578a7" Feb 17 15:06:43.498519 master-0 
kubenswrapper[8018]: I0217 15:06:43.498432 8018 scope.go:117] "RemoveContainer" containerID="d5738e21e97a228370369f51d6b435b8805640e7757385cb234f1ddd01723651" Feb 17 15:06:43.498632 master-0 kubenswrapper[8018]: I0217 15:06:43.498579 8018 scope.go:117] "RemoveContainer" containerID="10d84ccff2961ae0ad3f02bc199d5d344c04cfb73f881e75241a2774459f1897" Feb 17 15:06:43.502270 master-0 kubenswrapper[8018]: I0217 15:06:43.499507 8018 scope.go:117] "RemoveContainer" containerID="a532d001ee07ff8e8b23a5da938b61904c6c24e314b07a548890529a67528fab" Feb 17 15:06:43.502270 master-0 kubenswrapper[8018]: I0217 15:06:43.500046 8018 scope.go:117] "RemoveContainer" containerID="304679e66f000484b85f89bc09bd351cba1f664073d85860e51117843af4fd58" Feb 17 15:06:43.502270 master-0 kubenswrapper[8018]: I0217 15:06:43.500306 8018 scope.go:117] "RemoveContainer" containerID="0d43de2c98bf528ec1d0c3755bf0e52b97588f5907fd26bee582cfe625d16663" Feb 17 15:06:43.502270 master-0 kubenswrapper[8018]: I0217 15:06:43.500110 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7x72v" Feb 17 15:06:43.502270 master-0 kubenswrapper[8018]: I0217 15:06:43.500846 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:06:43.502270 master-0 kubenswrapper[8018]: I0217 15:06:43.500867 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerDied","Data":"af8466a0f113f0fd847f0bfc35cfb14199d76e2d0ce6a9816135658a53c788cd"} Feb 17 15:06:43.502270 master-0 kubenswrapper[8018]: I0217 15:06:43.500904 8018 status_manager.go:317] "Container readiness changed for unknown container" pod="kube-system/bootstrap-kube-controller-manager-master-0" containerID="cri-o://38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521" Feb 17 15:06:43.502270 
master-0 kubenswrapper[8018]: I0217 15:06:43.500919 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:06:43.502270 master-0 kubenswrapper[8018]: I0217 15:06:43.500937 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7x72v" Feb 17 15:06:43.502270 master-0 kubenswrapper[8018]: I0217 15:06:43.500954 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" Feb 17 15:06:43.502270 master-0 kubenswrapper[8018]: I0217 15:06:43.501063 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sft6r" Feb 17 15:06:43.518659 master-0 kubenswrapper[8018]: I0217 15:06:43.518592 8018 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 17 15:06:43.905232 master-0 kubenswrapper[8018]: I0217 15:06:43.905188 8018 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Feb 17 15:06:43.905404 master-0 kubenswrapper[8018]: I0217 15:06:43.905244 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Feb 17 15:06:43.905404 master-0 kubenswrapper[8018]: I0217 15:06:43.905307 8018 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: 
Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body=
Feb 17 15:06:43.905404 master-0 kubenswrapper[8018]: I0217 15:06:43.905322 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused"
Feb 17 15:06:44.519670 master-0 kubenswrapper[8018]: I0217 15:06:44.519578 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-l24cg_4fd2c79d-1e10-4f09-8a33-c66598abc99a/network-operator/0.log"
Feb 17 15:06:44.529023 master-0 kubenswrapper[8018]: I0217 15:06:44.528982 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/0.log"
Feb 17 15:06:44.532445 master-0 kubenswrapper[8018]: I0217 15:06:44.532411 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/1.log"
Feb 17 15:06:44.533316 master-0 kubenswrapper[8018]: I0217 15:06:44.533269 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/0.log"
Feb 17 15:06:44.785615 master-0 kubenswrapper[8018]: E0217 15:06:44.785169 8018 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{certified-operators-xqt6f.189510f24cd51ddd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-xqt6f,UID:fa4b45c7-fcd1-483b-97ae-df90a7c06f11,APIVersion:v1,ResourceVersion:7041,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 41.187s (41.187s including waiting). Image size: 1234637517 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:04:21.297323485 +0000 UTC m=+94.049666555,LastTimestamp:2026-02-17 15:04:21.297323485 +0000 UTC m=+94.049666555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:06:45.096731 master-0 kubenswrapper[8018]: I0217 15:06:45.096527 8018 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-4n2ls container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.39:8081/healthz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Feb 17 15:06:45.096731 master-0 kubenswrapper[8018]: I0217 15:06:45.096527 8018 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-4n2ls container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Feb 17 15:06:45.096731 master-0 kubenswrapper[8018]: I0217 15:06:45.096667 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podUID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/healthz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Feb 17 15:06:45.097111 master-0 kubenswrapper[8018]: I0217 15:06:45.096719 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podUID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Feb 17 15:06:45.530413 master-0 kubenswrapper[8018]: I0217 15:06:45.530324 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:06:45.530413 master-0 kubenswrapper[8018]: I0217 15:06:45.530403 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:06:46.030954 master-0 kubenswrapper[8018]: E0217 15:06:46.030867 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:06:49.146149 master-0 kubenswrapper[8018]: I0217 15:06:49.146064 8018 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:06:49.146966 master-0 kubenswrapper[8018]: I0217 15:06:49.146163 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:06:50.055708 master-0 kubenswrapper[8018]: I0217 15:06:50.055638 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:06:50.056096 master-0 kubenswrapper[8018]: I0217 15:06:50.056050 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:06:50.494816 master-0 kubenswrapper[8018]: I0217 15:06:50.494737 8018 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body=
Feb 17 15:06:50.495500 master-0 kubenswrapper[8018]: I0217 15:06:50.495030 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused"
Feb 17 15:06:50.703755 master-0 kubenswrapper[8018]: E0217 15:06:50.703670 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:06:51.418259 master-0 kubenswrapper[8018]: I0217 15:06:51.418208 8018 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body=
Feb 17 15:06:51.418921 master-0 kubenswrapper[8018]: I0217 15:06:51.418841 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused"
Feb 17 15:06:52.567584 master-0 kubenswrapper[8018]: E0217 15:06:52.567397 8018 projected.go:194] Error preparing data for projected volume kube-api-access-zr2dv for pod openshift-marketplace/community-operators-t8vtc: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:06:52.567584 master-0 kubenswrapper[8018]: E0217 15:06:52.567525 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv podName:c33efa80-fbeb-438a-86e3-d22d7c12d3e9 nodeName:}" failed. No retries permitted until 2026-02-17 15:07:00.567494054 +0000 UTC m=+253.319837124 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-zr2dv" (UniqueName: "kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv") pod "community-operators-t8vtc" (UID: "c33efa80-fbeb-438a-86e3-d22d7c12d3e9") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:06:52.568682 master-0 kubenswrapper[8018]: E0217 15:06:52.568109 8018 projected.go:194] Error preparing data for projected volume kube-api-access-7gwpz for pod openshift-marketplace/certified-operators-2lg56: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:06:52.568682 master-0 kubenswrapper[8018]: E0217 15:06:52.568238 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz podName:fc216ba1-144a-4cc8-93db-85ab558a166a nodeName:}" failed. No retries permitted until 2026-02-17 15:07:00.568213042 +0000 UTC m=+253.320556102 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-7gwpz" (UniqueName: "kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz") pod "certified-operators-2lg56" (UID: "fc216ba1-144a-4cc8-93db-85ab558a166a") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:06:53.905237 master-0 kubenswrapper[8018]: I0217 15:06:53.905145 8018 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body=
Feb 17 15:06:53.905237 master-0 kubenswrapper[8018]: I0217 15:06:53.905220 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused"
Feb 17 15:06:53.906155 master-0 kubenswrapper[8018]: I0217 15:06:53.905352 8018 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body=
Feb 17 15:06:53.906155 master-0 kubenswrapper[8018]: I0217 15:06:53.905425 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused"
Feb 17 15:06:55.096675 master-0 kubenswrapper[8018]: I0217 15:06:55.096540 8018 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-4n2ls container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Feb 17 15:06:55.096675 master-0 kubenswrapper[8018]: I0217 15:06:55.096666 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podUID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Feb 17 15:06:55.290399 master-0 kubenswrapper[8018]: I0217 15:06:55.290305 8018 status_manager.go:851] "Failed to get status for pod" podUID="6cee363d-411b-42ae-8f9f-cfaac068d992" pod="openshift-marketplace/community-operators-662mc" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods community-operators-662mc)"
Feb 17 15:06:55.653137 master-0 kubenswrapper[8018]: I0217 15:06:55.652996 8018 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:06:56.508157 master-0 kubenswrapper[8018]: E0217 15:06:56.508069 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Feb 17 15:06:59.145493 master-0 kubenswrapper[8018]: I0217 15:06:59.145312 8018 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:06:59.146361 master-0 kubenswrapper[8018]: I0217 15:06:59.145505 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:07:00.055035 master-0 kubenswrapper[8018]: I0217 15:07:00.054902 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:07:00.055374 master-0 kubenswrapper[8018]: I0217 15:07:00.055030 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:07:00.494929 master-0 kubenswrapper[8018]: I0217 15:07:00.494840 8018 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body=
Feb 17 15:07:00.494929 master-0 kubenswrapper[8018]: I0217 15:07:00.494919 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused"
Feb 17 15:07:00.625372 master-0 kubenswrapper[8018]: I0217 15:07:00.625272 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwpz\" (UniqueName: \"kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:07:00.625696 master-0 kubenswrapper[8018]: I0217 15:07:00.625395 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2dv\" (UniqueName: \"kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:07:00.704783 master-0 kubenswrapper[8018]: E0217 15:07:00.704658 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:07:01.418716 master-0 kubenswrapper[8018]: I0217 15:07:01.418625 8018 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body=
Feb 17 15:07:01.419074 master-0 kubenswrapper[8018]: I0217 15:07:01.418734 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused"
Feb 17 15:07:01.419074 master-0 kubenswrapper[8018]: I0217 15:07:01.418644 8018 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.36:8081/healthz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body=
Feb 17 15:07:01.419074 master-0 kubenswrapper[8018]: I0217 15:07:01.418870 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/healthz\": dial tcp 10.128.0.36:8081: connect: connection refused"
Feb 17 15:07:03.031848 master-0 kubenswrapper[8018]: E0217 15:07:03.031718 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:07:03.905263 master-0 kubenswrapper[8018]: I0217 15:07:03.905178 8018 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body=
Feb 17 15:07:03.905571 master-0 kubenswrapper[8018]: I0217 15:07:03.905282 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused"
Feb 17 15:07:05.095877 master-0 kubenswrapper[8018]: I0217 15:07:05.095799 8018 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-4n2ls container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Feb 17 15:07:05.096440 master-0 kubenswrapper[8018]: I0217 15:07:05.095875 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podUID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Feb 17 15:07:05.653360 master-0 kubenswrapper[8018]: I0217 15:07:05.653220 8018 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:07:06.699907 master-0 kubenswrapper[8018]: I0217 15:07:06.699791 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/1.log"
Feb 17 15:07:06.701479 master-0 kubenswrapper[8018]: I0217 15:07:06.701387 8018 generic.go:334] "Generic (PLEG): container finished" podID="e259b5a1-837b-4cde-85f7-cd5781af08bd" containerID="748ddd89ff1e149998fbf333fbd90fc60ec09c72d81c0bd70bffe49c3c2956e5" exitCode=255
Feb 17 15:07:06.702978 master-0 kubenswrapper[8018]: I0217 15:07:06.702931 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/1.log"
Feb 17 15:07:06.703746 master-0 kubenswrapper[8018]: I0217 15:07:06.703701 8018 generic.go:334] "Generic (PLEG): container finished" podID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerID="592fd1f4489b192ac6dc0d5fe3d0dffa1e8d7c60b36c2ffccbe5d580e08d861a" exitCode=255
Feb 17 15:07:06.705500 master-0 kubenswrapper[8018]: I0217 15:07:06.705470 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/1.log"
Feb 17 15:07:06.706004 master-0 kubenswrapper[8018]: I0217 15:07:06.705965 8018 generic.go:334] "Generic (PLEG): container finished" podID="553d4535-9985-47e2-83ee-8fcfb6035e7b" containerID="340573b8d1d2fd7984cea5fe0c4a8980e05ea1fdc083142e4116628f70afce5b" exitCode=255
Feb 17 15:07:06.707771 master-0 kubenswrapper[8018]: I0217 15:07:06.707718 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-tckph_0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/kube-storage-version-migrator-operator/1.log"
Feb 17 15:07:06.708356 master-0 kubenswrapper[8018]: I0217 15:07:06.708298 8018 generic.go:334] "Generic (PLEG): container finished" podID="0c58265d-32fb-4cf0-97d8-6c9a5d37fad9" containerID="99575ed26994aa5ecd0c47b8a6bc5878c7ca9d6e22edcdacbfec6cc81ef72b03" exitCode=255
Feb 17 15:07:06.709934 master-0 kubenswrapper[8018]: I0217 15:07:06.709902 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/1.log"
Feb 17 15:07:06.710390 master-0 kubenswrapper[8018]: I0217 15:07:06.710344 8018 generic.go:334] "Generic (PLEG): container finished" podID="65d9f008-7777-48fe-85fe-9d54a7bbcea9" containerID="b7412e68637ba105d252df621478eb608de8c9219211183f7a22988f3e676f09" exitCode=255
Feb 17 15:07:06.711946 master-0 kubenswrapper[8018]: I0217 15:07:06.711894 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/1.log"
Feb 17 15:07:06.712234 master-0 kubenswrapper[8018]: I0217 15:07:06.712193 8018 generic.go:334] "Generic (PLEG): container finished" podID="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" containerID="5f1383fa29670e8399de14c8b9f6cb880364f1cbb05c5a18de5ffeee2b6f9305" exitCode=255
Feb 17 15:07:08.145499 master-0 kubenswrapper[8018]: I0217 15:07:08.145428 8018 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" start-of-body=
Feb 17 15:07:08.146104 master-0 kubenswrapper[8018]: I0217 15:07:08.145510 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused"
Feb 17 15:07:10.053899 master-0 kubenswrapper[8018]: I0217 15:07:10.053759 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:07:10.055206 master-0 kubenswrapper[8018]: I0217 15:07:10.053887 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:07:10.495144 master-0 kubenswrapper[8018]: I0217 15:07:10.495017 8018 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body=
Feb 17 15:07:10.496098 master-0 kubenswrapper[8018]: I0217 15:07:10.495677 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused"
Feb 17 15:07:10.705781 master-0 kubenswrapper[8018]: E0217 15:07:10.705672 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:07:11.417435 master-0 kubenswrapper[8018]: I0217 15:07:11.417374 8018 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body=
Feb 17 15:07:11.417435 master-0 kubenswrapper[8018]: I0217 15:07:11.417433 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused"
Feb 17 15:07:11.648943 master-0 kubenswrapper[8018]: E0217 15:07:11.648874 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Feb 17 15:07:13.905035 master-0 kubenswrapper[8018]: I0217 15:07:13.904957 8018 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body=
Feb 17 15:07:13.905581 master-0 kubenswrapper[8018]: I0217 15:07:13.905060 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused"
Feb 17 15:07:14.763049 master-0 kubenswrapper[8018]: I0217 15:07:14.762968 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/1.log"
Feb 17 15:07:14.763779 master-0 kubenswrapper[8018]: I0217 15:07:14.763728 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/0.log"
Feb 17 15:07:14.763850 master-0 kubenswrapper[8018]: I0217 15:07:14.763802 8018 generic.go:334] "Generic (PLEG): container finished" podID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerID="478ee796ae742b32516887947e3f7216f892a28bffb8fe796359a2bb89bd14cf" exitCode=255
Feb 17 15:07:15.096764 master-0 kubenswrapper[8018]: I0217 15:07:15.096545 8018 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-4n2ls container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Feb 17 15:07:15.096764 master-0 kubenswrapper[8018]: I0217 15:07:15.096645 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podUID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Feb 17 15:07:15.653107 master-0 kubenswrapper[8018]: I0217 15:07:15.652972 8018 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:07:17.522042 master-0 kubenswrapper[8018]: E0217 15:07:17.521959 8018 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Feb 17 15:07:17.522856 master-0 kubenswrapper[8018]: E0217 15:07:17.522193 8018 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.021s"
Feb 17 15:07:17.522856 master-0 kubenswrapper[8018]: I0217 15:07:17.522225 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" event={"ID":"65d9f008-7777-48fe-85fe-9d54a7bbcea9","Type":"ContainerDied","Data":"0ca9078aff730fc3a330cc56d95ecaf3845aab699d6709c0f7903274534d22bb"}
Feb 17 15:07:17.522856 master-0 kubenswrapper[8018]: I0217 15:07:17.522383 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:07:17.522856 master-0 kubenswrapper[8018]: I0217 15:07:17.522404 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" event={"ID":"e259b5a1-837b-4cde-85f7-cd5781af08bd","Type":"ContainerDied","Data":"8e1472c1d1be3f277a2b834719c46bd320c628415b71f468a2bd1ad63cb18ee3"}
Feb 17 15:07:17.522856 master-0 kubenswrapper[8018]: I0217 15:07:17.522428 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:07:17.522856 master-0 kubenswrapper[8018]: I0217 15:07:17.522531 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" event={"ID":"2b167b7b-2280-4c82-ac78-71c57aebe503","Type":"ContainerDied","Data":"4c453c258107dc05c66b4fe7dfb751fa16a6ada9afb337ed9bd51bf0bf1e157f"}
Feb 17 15:07:17.522856 master-0 kubenswrapper[8018]: I0217 15:07:17.522567 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sft6r"
Feb 17 15:07:17.522856 master-0 kubenswrapper[8018]: I0217 15:07:17.522591 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" event={"ID":"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9","Type":"ContainerDied","Data":"290f694e7d12ca9521306200e6fad40d6869689c4b381a230ebfe0d9ab67ca09"}
Feb 17 15:07:17.523413 master-0 kubenswrapper[8018]: I0217 15:07:17.522886 8018 scope.go:117] "RemoveContainer" containerID="0ca9078aff730fc3a330cc56d95ecaf3845aab699d6709c0f7903274534d22bb"
Feb 17 15:07:17.523523 master-0 kubenswrapper[8018]: I0217 15:07:17.523407 8018 scope.go:117] "RemoveContainer" containerID="b7412e68637ba105d252df621478eb608de8c9219211183f7a22988f3e676f09"
Feb 17 15:07:17.526889 master-0 kubenswrapper[8018]: I0217 15:07:17.524886 8018 scope.go:117] "RemoveContainer" containerID="c1a7bb61a118b809395aec1f33f427a3425dcd9dc3136b6302e76b1e5de619e7"
Feb 17 15:07:17.526889 master-0 kubenswrapper[8018]: I0217 15:07:17.525189 8018 scope.go:117] "RemoveContainer" containerID="748ddd89ff1e149998fbf333fbd90fc60ec09c72d81c0bd70bffe49c3c2956e5"
Feb 17 15:07:17.526889 master-0 kubenswrapper[8018]: I0217 15:07:17.525448 8018 scope.go:117] "RemoveContainer" containerID="99addda3858d20caa2954c52d0e4203716a8b098e6c6d5e147015e80f102e5a9"
Feb 17 15:07:17.526889 master-0 kubenswrapper[8018]: I0217 15:07:17.525902 8018 scope.go:117] "RemoveContainer" containerID="e96d7161de590628bad20a520afcf9b1363c2b5f7629d556a379b4230528784f"
Feb 17 15:07:17.526889 master-0 kubenswrapper[8018]: I0217 15:07:17.526325 8018 scope.go:117] "RemoveContainer" containerID="5f1383fa29670e8399de14c8b9f6cb880364f1cbb05c5a18de5ffeee2b6f9305"
Feb 17 15:07:17.527677 master-0 kubenswrapper[8018]: I0217 15:07:17.527189 8018 scope.go:117] "RemoveContainer" containerID="43796d7d27cac90e31c0e4d2ee9bf43eddeb31538289e18b8ee843798af029b2"
Feb 17 15:07:17.529585 master-0 kubenswrapper[8018]: I0217 15:07:17.528217 8018 scope.go:117] "RemoveContainer" containerID="e039cb4463938f81d7404a930ef7ab4b00269f6ed6b9151f252951ea9d381dc4"
Feb 17 15:07:17.529585 master-0 kubenswrapper[8018]: I0217 15:07:17.529553 8018 scope.go:117] "RemoveContainer" containerID="3b54e0904c922403e7243ecec6e01879618fe54346e8502751862a4c275c3a59"
Feb 17 15:07:17.529978 master-0 kubenswrapper[8018]: I0217 15:07:17.529757 8018 scope.go:117] "RemoveContainer" containerID="99575ed26994aa5ecd0c47b8a6bc5878c7ca9d6e22edcdacbfec6cc81ef72b03"
Feb 17 15:07:17.538979 master-0 kubenswrapper[8018]: I0217 15:07:17.538009 8018 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Feb 17 15:07:17.579134 master-0 kubenswrapper[8018]: I0217 15:07:17.579068 8018 scope.go:117] "RemoveContainer" containerID="8e1472c1d1be3f277a2b834719c46bd320c628415b71f468a2bd1ad63cb18ee3"
Feb 17 15:07:17.663995 master-0 kubenswrapper[8018]: I0217 15:07:17.663940 8018 scope.go:117] "RemoveContainer" containerID="290f694e7d12ca9521306200e6fad40d6869689c4b381a230ebfe0d9ab67ca09"
Feb 17 15:07:17.785481 master-0 kubenswrapper[8018]: I0217 15:07:17.785419 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/1.log"
Feb 17 15:07:17.789183 master-0 kubenswrapper[8018]: I0217 15:07:17.789146 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-tckph_0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/kube-storage-version-migrator-operator/1.log"
Feb 17 15:07:17.793444 master-0 kubenswrapper[8018]: I0217 15:07:17.793404 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/1.log"
Feb 17 15:07:18.289951 master-0 kubenswrapper[8018]: I0217 15:07:18.289883 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" event={"ID":"4fd2c79d-1e10-4f09-8a33-c66598abc99a","Type":"ContainerDied","Data":"10d84ccff2961ae0ad3f02bc199d5d344c04cfb73f881e75241a2774459f1897"}
Feb 17 15:07:18.290254 master-0 kubenswrapper[8018]: I0217 15:07:18.289963 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sft6r"
Feb 17 15:07:18.290254 master-0 kubenswrapper[8018]: I0217 15:07:18.289987 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7x72v"
Feb 17 15:07:18.290254 master-0 kubenswrapper[8018]: I0217 15:07:18.290009 8018 status_manager.go:379] "Container startup changed for unknown container" pod="kube-system/bootstrap-kube-controller-manager-master-0" containerID="cri-o://38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521"
Feb 17 15:07:18.290254 master-0 kubenswrapper[8018]: I0217 15:07:18.290022 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:07:18.290254 master-0 kubenswrapper[8018]: I0217 15:07:18.290041 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Feb 17 15:07:18.290254 master-0 kubenswrapper[8018]: I0217 15:07:18.290057 8018 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="89a27790-ed71-4d48-8415-a96f46bd746b"
Feb 17 15:07:18.290254 master-0 kubenswrapper[8018]: I0217 15:07:18.290073 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:07:18.290254 master-0 kubenswrapper[8018]: I0217 15:07:18.290098 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:07:18.290254 master-0 kubenswrapper[8018]: I0217 15:07:18.290118 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:07:18.290889 master-0 kubenswrapper[8018]: I0217 15:07:18.290289 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:07:18.290889 master-0 kubenswrapper[8018]: I0217 15:07:18.290809 8018 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"aeaf0db4df08b7760a41fe052eda610af95afb9286eacbb74c1384cac818c4dd"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 17 15:07:18.290889 master-0 kubenswrapper[8018]: I0217 15:07:18.290860 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 17 15:07:18.290889 master-0 kubenswrapper[8018]: I0217 15:07:18.290883 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" containerID="cri-o://aeaf0db4df08b7760a41fe052eda610af95afb9286eacbb74c1384cac818c4dd" gracePeriod=30 Feb 17 15:07:18.291198 master-0 kubenswrapper[8018]: I0217 15:07:18.290906 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:07:18.291198 master-0 kubenswrapper[8018]: I0217 15:07:18.290921 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 
15:07:18.291198 master-0 kubenswrapper[8018]: I0217 15:07:18.291042 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 17 15:07:18.291198 master-0 kubenswrapper[8018]: I0217 15:07:18.291058 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:07:18.291198 master-0 kubenswrapper[8018]: I0217 15:07:18.291075 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:07:18.291198 master-0 kubenswrapper[8018]: I0217 15:07:18.291088 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:07:18.291198 master-0 kubenswrapper[8018]: I0217 15:07:18.291104 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" event={"ID":"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40","Type":"ContainerDied","Data":"c5052ce7c74d35fd56d2b65c411cf09269d730c14bf385a0a356573ac6d4ae86"} Feb 17 15:07:18.291198 master-0 kubenswrapper[8018]: I0217 15:07:18.291140 8018 scope.go:117] "RemoveContainer" containerID="c5052ce7c74d35fd56d2b65c411cf09269d730c14bf385a0a356573ac6d4ae86" Feb 17 15:07:18.291720 master-0 kubenswrapper[8018]: I0217 15:07:18.291482 8018 scope.go:117] "RemoveContainer" containerID="592fd1f4489b192ac6dc0d5fe3d0dffa1e8d7c60b36c2ffccbe5d580e08d861a" Feb 17 15:07:18.291720 master-0 kubenswrapper[8018]: I0217 15:07:18.291597 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7x72v" Feb 17 15:07:18.291720 master-0 kubenswrapper[8018]: I0217 15:07:18.291617 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerDied","Data":"bafb1d40abea56e15a55f39238f52822a8e7d4c344f770507c71ed614feff320"} Feb 17 15:07:18.291720 master-0 kubenswrapper[8018]: I0217 15:07:18.291640 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" Feb 17 15:07:18.291720 master-0 kubenswrapper[8018]: I0217 15:07:18.291665 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sft6r" Feb 17 15:07:18.291720 master-0 kubenswrapper[8018]: I0217 15:07:18.291675 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerDied","Data":"d5738e21e97a228370369f51d6b435b8805640e7757385cb234f1ddd01723651"} Feb 17 15:07:18.291720 master-0 kubenswrapper[8018]: I0217 15:07:18.291688 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:07:18.291720 master-0 kubenswrapper[8018]: I0217 15:07:18.291698 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 17 15:07:18.291720 master-0 kubenswrapper[8018]: I0217 15:07:18.291706 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-xwftw" event={"ID":"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0","Type":"ContainerDied","Data":"55d3b1057ac7a6ad2c1bad42aa92f8880f4cec28c612f7db8db1627fa4374902"} Feb 17 15:07:18.291720 master-0 kubenswrapper[8018]: I0217 15:07:18.291720 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" 
event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerDied","Data":"b59bbfb9428af65d3b27dc7307524d7c342a46e0e7de78406b423b4b600990a9"} Feb 17 15:07:18.291720 master-0 kubenswrapper[8018]: I0217 15:07:18.291732 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.291744 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" event={"ID":"af61bda0-c7b4-489d-a671-eaa5299942fe","Type":"ContainerDied","Data":"bf1c4446a3533f26fa5487fb18cd78bb806fca2fbee2a1ee4a787dfdef4578a7"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.291793 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" event={"ID":"553d4535-9985-47e2-83ee-8fcfb6035e7b","Type":"ContainerDied","Data":"e25ef4d4de66b3ffd3f590bda032ee8cda9109eed6a05975ad8ed0f50306f95e"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.291870 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"0d43de2c98bf528ec1d0c3755bf0e52b97588f5907fd26bee582cfe625d16663"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.291915 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" event={"ID":"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40","Type":"ContainerStarted","Data":"5f1383fa29670e8399de14c8b9f6cb880364f1cbb05c5a18de5ffeee2b6f9305"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.291929 8018 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" event={"ID":"e259b5a1-837b-4cde-85f7-cd5781af08bd","Type":"ContainerStarted","Data":"748ddd89ff1e149998fbf333fbd90fc60ec09c72d81c0bd70bffe49c3c2956e5"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.291943 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-xwftw" event={"ID":"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0","Type":"ContainerStarted","Data":"be8f29548cec98725a9fe2f2e764da4e1fd8b3547c172ac45765b13bbbf51c52"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.291954 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerStarted","Data":"592fd1f4489b192ac6dc0d5fe3d0dffa1e8d7c60b36c2ffccbe5d580e08d861a"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.291963 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" event={"ID":"553d4535-9985-47e2-83ee-8fcfb6035e7b","Type":"ContainerStarted","Data":"340573b8d1d2fd7984cea5fe0c4a8980e05ea1fdc083142e4116628f70afce5b"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.291974 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" event={"ID":"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9","Type":"ContainerStarted","Data":"99575ed26994aa5ecd0c47b8a6bc5878c7ca9d6e22edcdacbfec6cc81ef72b03"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.291984 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" 
event={"ID":"65d9f008-7777-48fe-85fe-9d54a7bbcea9","Type":"ContainerStarted","Data":"b7412e68637ba105d252df621478eb608de8c9219211183f7a22988f3e676f09"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.291993 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"03da22e3-956d-4c8a-bfd6-c1778e5d627c","Type":"ContainerDied","Data":"7d00efdad4851844a32b2b8bd4e17fbebfd887cf8eba9c8198aa34f66fbdd5b6"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292006 8018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d00efdad4851844a32b2b8bd4e17fbebfd887cf8eba9c8198aa34f66fbdd5b6" Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292016 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" event={"ID":"22a30079-d7fc-49cf-882e-1c5022cb5bf6","Type":"ContainerDied","Data":"e96d7161de590628bad20a520afcf9b1363c2b5f7629d556a379b4230528784f"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292027 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" event={"ID":"c6d23570-21d6-4b08-83fc-8b0827c25313","Type":"ContainerDied","Data":"43796d7d27cac90e31c0e4d2ee9bf43eddeb31538289e18b8ee843798af029b2"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292038 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerDied","Data":"99addda3858d20caa2954c52d0e4203716a8b098e6c6d5e147015e80f102e5a9"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292049 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" 
event={"ID":"50c51fe2-32aa-430f-8da0-7cf3b9519131","Type":"ContainerDied","Data":"c1a7bb61a118b809395aec1f33f427a3425dcd9dc3136b6302e76b1e5de619e7"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292060 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" event={"ID":"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda","Type":"ContainerDied","Data":"304679e66f000484b85f89bc09bd351cba1f664073d85860e51117843af4fd58"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292073 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" event={"ID":"68954d1e-2147-4465-9817-a3c04cbc19b0","Type":"ContainerDied","Data":"e039cb4463938f81d7404a930ef7ab4b00269f6ed6b9151f252951ea9d381dc4"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292098 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"0d43de2c98bf528ec1d0c3755bf0e52b97588f5907fd26bee582cfe625d16663"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292136 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" event={"ID":"e6d0ea7a-6784-4c13-ad65-6c947dbcf136","Type":"ContainerDied","Data":"3b54e0904c922403e7243ecec6e01879618fe54346e8502751862a4c275c3a59"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292158 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" event={"ID":"31e31afc-79d5-46f4-9835-0fd11da9465f","Type":"ContainerDied","Data":"a532d001ee07ff8e8b23a5da938b61904c6c24e314b07a548890529a67528fab"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292178 8018 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" event={"ID":"af61bda0-c7b4-489d-a671-eaa5299942fe","Type":"ContainerStarted","Data":"398a6ec9ab16d8c9b51a94b166012be81bd6e66e2c357cd186d8526d7f9bb69c"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292194 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" event={"ID":"4fd2c79d-1e10-4f09-8a33-c66598abc99a","Type":"ContainerStarted","Data":"6d9a92eb2e644f956d98f7c0c8da65baf4f27d9eba13c8c64b77e173d1e323c4"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292211 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" event={"ID":"31e31afc-79d5-46f4-9835-0fd11da9465f","Type":"ContainerStarted","Data":"e6582b397c9a839f2d6d03076dc105158f9bf90ad6efb080207cea9f74d8064c"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292226 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"aeaf0db4df08b7760a41fe052eda610af95afb9286eacbb74c1384cac818c4dd"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292242 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerStarted","Data":"478ee796ae742b32516887947e3f7216f892a28bffb8fe796359a2bb89bd14cf"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292258 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" 
event={"ID":"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda","Type":"ContainerStarted","Data":"81aaf4a8e92ad8167ce2d8a4500268568ecd4d12b11466d397ae290644672b32"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292274 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" event={"ID":"2b167b7b-2280-4c82-ac78-71c57aebe503","Type":"ContainerStarted","Data":"477671fff24fa6c32a024908ab3cc22818f79df79458186eb17cd6a91eb44b4f"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292293 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"7dd053c55331a8a0d792d5a78e488f015a947989e3e1383dcd1a64fa486a01e5"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292312 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"9c473e6b1c42e4e97ed6d31b0e52ea86736af7b5464544e2ffea713e961e55df"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292327 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"cb3dbeb96630f3d5109d6c4e5a32fbf46326a5066238f4c05eb31fd67e0570ad"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292341 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"24bcd9a1fa449d31774c0b2f9747f9f7a7d21ce729de71f7dbfd671b89feec54"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292355 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"a52477200afc38c91a493a196c8111943fbf6121e870a10ff7e849d590f6609a"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292370 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" event={"ID":"e259b5a1-837b-4cde-85f7-cd5781af08bd","Type":"ContainerDied","Data":"748ddd89ff1e149998fbf333fbd90fc60ec09c72d81c0bd70bffe49c3c2956e5"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292391 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerDied","Data":"592fd1f4489b192ac6dc0d5fe3d0dffa1e8d7c60b36c2ffccbe5d580e08d861a"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292409 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" event={"ID":"553d4535-9985-47e2-83ee-8fcfb6035e7b","Type":"ContainerDied","Data":"340573b8d1d2fd7984cea5fe0c4a8980e05ea1fdc083142e4116628f70afce5b"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292428 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" event={"ID":"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9","Type":"ContainerDied","Data":"99575ed26994aa5ecd0c47b8a6bc5878c7ca9d6e22edcdacbfec6cc81ef72b03"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292449 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" event={"ID":"65d9f008-7777-48fe-85fe-9d54a7bbcea9","Type":"ContainerDied","Data":"b7412e68637ba105d252df621478eb608de8c9219211183f7a22988f3e676f09"} Feb 17 
15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292495 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" event={"ID":"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40","Type":"ContainerDied","Data":"5f1383fa29670e8399de14c8b9f6cb880364f1cbb05c5a18de5ffeee2b6f9305"} Feb 17 15:07:18.292562 master-0 kubenswrapper[8018]: I0217 15:07:18.292518 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerDied","Data":"478ee796ae742b32516887947e3f7216f892a28bffb8fe796359a2bb89bd14cf"} Feb 17 15:07:18.296588 master-0 kubenswrapper[8018]: I0217 15:07:18.293020 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 17 15:07:18.296588 master-0 kubenswrapper[8018]: I0217 15:07:18.293291 8018 scope.go:117] "RemoveContainer" containerID="340573b8d1d2fd7984cea5fe0c4a8980e05ea1fdc083142e4116628f70afce5b" Feb 17 15:07:18.296588 master-0 kubenswrapper[8018]: I0217 15:07:18.293781 8018 scope.go:117] "RemoveContainer" containerID="478ee796ae742b32516887947e3f7216f892a28bffb8fe796359a2bb89bd14cf" Feb 17 15:07:18.296588 master-0 kubenswrapper[8018]: E0217 15:07:18.294035 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=route-controller-manager pod=route-controller-manager-6978b88779-vp5tv_openshift-route-controller-manager(3db03cef-d297-4bf7-8e52-dd0b18882d07)\"" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" Feb 17 15:07:18.296588 master-0 kubenswrapper[8018]: I0217 15:07:18.296035 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podStartSLOduration=220.296018763 podStartE2EDuration="3m40.296018763s" podCreationTimestamp="2026-02-17 15:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:07:18.289813089 +0000 UTC m=+271.042156149" watchObservedRunningTime="2026-02-17 15:07:18.296018763 +0000 UTC m=+271.048361813" Feb 17 15:07:18.303708 master-0 kubenswrapper[8018]: I0217 15:07:18.303634 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 17 15:07:18.303921 master-0 kubenswrapper[8018]: I0217 15:07:18.303743 8018 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="89a27790-ed71-4d48-8415-a96f46bd746b" Feb 17 15:07:18.318695 master-0 kubenswrapper[8018]: I0217 15:07:18.316565 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 17 15:07:18.318695 master-0 kubenswrapper[8018]: W0217 15:07:18.318184 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd5655115_c223_42ed_a93d_9d609e55c901.slice/crio-3aa0fac2ee75614ddf9c33905ca49667c9eb5815d489ea328caebd435d408a71 WatchSource:0}: Error finding container 3aa0fac2ee75614ddf9c33905ca49667c9eb5815d489ea328caebd435d408a71: Status 404 returned error can't find the container with id 3aa0fac2ee75614ddf9c33905ca49667c9eb5815d489ea328caebd435d408a71 Feb 17 15:07:18.430689 master-0 kubenswrapper[8018]: I0217 15:07:18.430658 8018 scope.go:117] "RemoveContainer" containerID="d5738e21e97a228370369f51d6b435b8805640e7757385cb234f1ddd01723651" Feb 17 15:07:18.590213 master-0 kubenswrapper[8018]: I0217 15:07:18.589924 8018 scope.go:117] "RemoveContainer" containerID="b59bbfb9428af65d3b27dc7307524d7c342a46e0e7de78406b423b4b600990a9" Feb 17 15:07:18.803278 
master-0 kubenswrapper[8018]: I0217 15:07:18.774027 8018 scope.go:117] "RemoveContainer" containerID="38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521" Feb 17 15:07:18.803278 master-0 kubenswrapper[8018]: E0217 15:07:18.787810 8018 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{community-operators-662mc.189510f24e3e3a58 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-662mc,UID:6cee363d-411b-42ae-8f9f-cfaac068d992,APIVersion:v1,ResourceVersion:7118,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/community-operator-index:v4.18\" in 37.151s (37.151s including waiting). Image size: 1213306565 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:04:21.320989272 +0000 UTC m=+94.073332322,LastTimestamp:2026-02-17 15:04:21.320989272 +0000 UTC m=+94.073332322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:07:18.813877 master-0 kubenswrapper[8018]: I0217 15:07:18.813821 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/1.log" Feb 17 15:07:18.815123 master-0 kubenswrapper[8018]: I0217 15:07:18.815067 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" event={"ID":"553d4535-9985-47e2-83ee-8fcfb6035e7b","Type":"ContainerStarted","Data":"13fd27ae7e51b2ce5e96bcf2c8231506a7b48822721ae68c680d8a96bd1e5103"} Feb 17 
15:07:18.820959 master-0 kubenswrapper[8018]: I0217 15:07:18.820606 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xqt6f"]
Feb 17 15:07:18.824060 master-0 kubenswrapper[8018]: I0217 15:07:18.824002 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xqt6f"]
Feb 17 15:07:18.824671 master-0 kubenswrapper[8018]: I0217 15:07:18.824620 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-4n2ls_50c51fe2-32aa-430f-8da0-7cf3b9519131/manager/0.log"
Feb 17 15:07:18.825601 master-0 kubenswrapper[8018]: I0217 15:07:18.825547 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" event={"ID":"50c51fe2-32aa-430f-8da0-7cf3b9519131","Type":"ContainerStarted","Data":"e78076928670aead1e74a90bfe18141b9748ba5b397af907cd88d6d09ee87278"}
Feb 17 15:07:18.825670 master-0 kubenswrapper[8018]: I0217 15:07:18.825605 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:07:18.828576 master-0 kubenswrapper[8018]: I0217 15:07:18.828534 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-jdfsm_68954d1e-2147-4465-9817-a3c04cbc19b0/manager/0.log"
Feb 17 15:07:18.829279 master-0 kubenswrapper[8018]: I0217 15:07:18.829232 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" event={"ID":"68954d1e-2147-4465-9817-a3c04cbc19b0","Type":"ContainerStarted","Data":"60c37bbe21721a193105735329bdb72d13d00d18b75bdb6198c01ec145d996cc"}
Feb 17 15:07:18.830004 master-0 kubenswrapper[8018]: I0217 15:07:18.829960 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:07:18.832139 master-0 kubenswrapper[8018]: I0217 15:07:18.832092 8018 scope.go:117] "RemoveContainer" containerID="ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb"
Feb 17 15:07:18.832818 master-0 kubenswrapper[8018]: E0217 15:07:18.832781 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521\": container with ID starting with 38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521 not found: ID does not exist" containerID="38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521"
Feb 17 15:07:18.834374 master-0 kubenswrapper[8018]: I0217 15:07:18.834328 8018 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="aeaf0db4df08b7760a41fe052eda610af95afb9286eacbb74c1384cac818c4dd" exitCode=2
Feb 17 15:07:18.834452 master-0 kubenswrapper[8018]: I0217 15:07:18.834419 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"aeaf0db4df08b7760a41fe052eda610af95afb9286eacbb74c1384cac818c4dd"}
Feb 17 15:07:18.834536 master-0 kubenswrapper[8018]: I0217 15:07:18.834457 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"b58581da3fb50e131d29602d71fb1722c3f379f23a65f6b4d93e4a6e939f2826"}
Feb 17 15:07:18.836890 master-0 kubenswrapper[8018]: I0217 15:07:18.836862 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-tckph_0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/kube-storage-version-migrator-operator/1.log"
Feb 17 15:07:18.836950 master-0 kubenswrapper[8018]: I0217 15:07:18.836914 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" event={"ID":"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9","Type":"ContainerStarted","Data":"f39a2941da8acf9c022d9ee8fee7bd53fe9f2ec2201845d6f776f31736d87bf2"}
Feb 17 15:07:18.843268 master-0 kubenswrapper[8018]: I0217 15:07:18.843060 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" event={"ID":"c6d23570-21d6-4b08-83fc-8b0827c25313","Type":"ContainerStarted","Data":"2784ec26a7dc2f4e62d2f496a1d001e9cb435129496d0a04f4f22a42f1a50608"}
Feb 17 15:07:18.843778 master-0 kubenswrapper[8018]: I0217 15:07:18.843715 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:07:18.846590 master-0 kubenswrapper[8018]: I0217 15:07:18.846539 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/1.log"
Feb 17 15:07:18.846714 master-0 kubenswrapper[8018]: I0217 15:07:18.846644 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" event={"ID":"e259b5a1-837b-4cde-85f7-cd5781af08bd","Type":"ContainerStarted","Data":"c37b7a8b6b89d90619e0434b3f19d1c552551ee3029bb3ef42107c3c450c9cb1"}
Feb 17 15:07:18.850588 master-0 kubenswrapper[8018]: I0217 15:07:18.850529 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/1.log"
Feb 17 15:07:18.850739 master-0 kubenswrapper[8018]: I0217 15:07:18.850698 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" event={"ID":"65d9f008-7777-48fe-85fe-9d54a7bbcea9","Type":"ContainerStarted","Data":"29887de882fd8a3a22e87156cef67aeb00ac494c3b04550882c5426a5a9c25ec"}
Feb 17 15:07:18.853791 master-0 kubenswrapper[8018]: I0217 15:07:18.853744 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" event={"ID":"e6d0ea7a-6784-4c13-ad65-6c947dbcf136","Type":"ContainerStarted","Data":"fbf19d6eb89d3cc981a668b940fbc4bb8dd5e78643b56d6ce5b9a6d44a5d26d8"}
Feb 17 15:07:18.854089 master-0 kubenswrapper[8018]: I0217 15:07:18.854030 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"
Feb 17 15:07:18.858147 master-0 kubenswrapper[8018]: I0217 15:07:18.856190 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:07:18.866539 master-0 kubenswrapper[8018]: I0217 15:07:18.866494 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"
Feb 17 15:07:18.867384 master-0 kubenswrapper[8018]: I0217 15:07:18.867337 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/1.log"
Feb 17 15:07:18.867666 master-0 kubenswrapper[8018]: I0217 15:07:18.867619 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerStarted","Data":"2e491cb15463a078f03468285bf55e7f054cca1c528834a6f29b9effbdeb75f4"}
Feb 17 15:07:18.870191 master-0 kubenswrapper[8018]: I0217 15:07:18.870148 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/1.log"
Feb 17 15:07:18.870328 master-0 kubenswrapper[8018]: I0217 15:07:18.870285 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" event={"ID":"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40","Type":"ContainerStarted","Data":"47a0663eadceb8ac2b92b936021f5bf1e155eb2c91b070318a1766570bc56359"}
Feb 17 15:07:18.875674 master-0 kubenswrapper[8018]: I0217 15:07:18.875645 8018 scope.go:117] "RemoveContainer" containerID="e25ef4d4de66b3ffd3f590bda032ee8cda9109eed6a05975ad8ed0f50306f95e"
Feb 17 15:07:18.875984 master-0 kubenswrapper[8018]: I0217 15:07:18.875955 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/0.log"
Feb 17 15:07:18.876099 master-0 kubenswrapper[8018]: I0217 15:07:18.876054 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" event={"ID":"22a30079-d7fc-49cf-882e-1c5022cb5bf6","Type":"ContainerStarted","Data":"4f4889e4fc034bdf89049f32d3bbe8147db247c0bdabc918e6164722403d46c8"}
Feb 17 15:07:18.880102 master-0 kubenswrapper[8018]: I0217 15:07:18.880076 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/1.log"
Feb 17 15:07:18.880755 master-0 kubenswrapper[8018]: I0217 15:07:18.880725 8018 scope.go:117] "RemoveContainer" containerID="478ee796ae742b32516887947e3f7216f892a28bffb8fe796359a2bb89bd14cf"
Feb 17 15:07:18.881390 master-0 kubenswrapper[8018]: E0217 15:07:18.881354 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=route-controller-manager pod=route-controller-manager-6978b88779-vp5tv_openshift-route-controller-manager(3db03cef-d297-4bf7-8e52-dd0b18882d07)\"" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07"
Feb 17 15:07:18.887660 master-0 kubenswrapper[8018]: I0217 15:07:18.887341 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"d5655115-c223-42ed-a93d-9d609e55c901","Type":"ContainerStarted","Data":"a7a559907a49f4d8137e14ad794efe3aea73d7c66ce8d886c715988a380ea29f"}
Feb 17 15:07:18.887660 master-0 kubenswrapper[8018]: I0217 15:07:18.887642 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"d5655115-c223-42ed-a93d-9d609e55c901","Type":"ContainerStarted","Data":"3aa0fac2ee75614ddf9c33905ca49667c9eb5815d489ea328caebd435d408a71"}
Feb 17 15:07:18.898375 master-0 kubenswrapper[8018]: I0217 15:07:18.898316 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/0.log"
Feb 17 15:07:18.900210 master-0 kubenswrapper[8018]: I0217 15:07:18.900145 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerStarted","Data":"5c926a31e5a499cf89b540e143bdea5a4e85fe7e25d5738e5efec2253bdaaf8a"}
Feb 17 15:07:18.920132 master-0 kubenswrapper[8018]: I0217 15:07:18.920074 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 17 15:07:18.923776 master-0 kubenswrapper[8018]: I0217 15:07:18.923747 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 17 15:07:18.924295 master-0 kubenswrapper[8018]: I0217 15:07:18.924273 8018 scope.go:117] "RemoveContainer" containerID="66dd210cb26e47fd54a1792f8f197ef08337df2f55d0c4058d8d526e9bd894c8"
Feb 17 15:07:18.941660 master-0 kubenswrapper[8018]: I0217 15:07:18.941597 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sft6r" podStartSLOduration=168.387584846 podStartE2EDuration="3m37.941575663s" podCreationTimestamp="2026-02-17 15:03:41 +0000 UTC" firstStartedPulling="2026-02-17 15:03:44.151132759 +0000 UTC m=+56.903475799" lastFinishedPulling="2026-02-17 15:04:33.705123556 +0000 UTC m=+106.457466616" observedRunningTime="2026-02-17 15:07:18.940317962 +0000 UTC m=+271.692661012" watchObservedRunningTime="2026-02-17 15:07:18.941575663 +0000 UTC m=+271.693918713"
Feb 17 15:07:18.948895 master-0 kubenswrapper[8018]: I0217 15:07:18.948867 8018 scope.go:117] "RemoveContainer" containerID="38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521"
Feb 17 15:07:18.949452 master-0 kubenswrapper[8018]: I0217 15:07:18.949405 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521"} err="failed to get container status \"38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521\": rpc error: code = NotFound desc = could not find container \"38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521\": container with ID starting with 38f70927c9509fe80afa3ba3abff6d079688d5aa81d0d44ac7d674f04b1bd521 not found: ID does not exist"
Feb 17 15:07:18.949596 master-0 kubenswrapper[8018]: I0217 15:07:18.949579 8018 scope.go:117] "RemoveContainer" containerID="ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb"
Feb 17 15:07:18.950262 master-0 kubenswrapper[8018]: E0217 15:07:18.950198 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb\": container with ID starting with ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb not found: ID does not exist" containerID="ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb"
Feb 17 15:07:18.950329 master-0 kubenswrapper[8018]: I0217 15:07:18.950257 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb"} err="failed to get container status \"ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb\": rpc error: code = NotFound desc = could not find container \"ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb\": container with ID starting with ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb not found: ID does not exist"
Feb 17 15:07:18.950329 master-0 kubenswrapper[8018]: I0217 15:07:18.950296 8018 scope.go:117] "RemoveContainer" containerID="b59bbfb9428af65d3b27dc7307524d7c342a46e0e7de78406b423b4b600990a9"
Feb 17 15:07:18.951371 master-0 kubenswrapper[8018]: E0217 15:07:18.951316 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b59bbfb9428af65d3b27dc7307524d7c342a46e0e7de78406b423b4b600990a9\": container with ID starting with b59bbfb9428af65d3b27dc7307524d7c342a46e0e7de78406b423b4b600990a9 not found: ID does not exist" containerID="b59bbfb9428af65d3b27dc7307524d7c342a46e0e7de78406b423b4b600990a9"
Feb 17 15:07:18.951451 master-0 kubenswrapper[8018]: I0217 15:07:18.951396 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b59bbfb9428af65d3b27dc7307524d7c342a46e0e7de78406b423b4b600990a9"} err="failed to get container status \"b59bbfb9428af65d3b27dc7307524d7c342a46e0e7de78406b423b4b600990a9\": rpc error: code = NotFound desc = could not find container \"b59bbfb9428af65d3b27dc7307524d7c342a46e0e7de78406b423b4b600990a9\": container with ID starting with b59bbfb9428af65d3b27dc7307524d7c342a46e0e7de78406b423b4b600990a9 not found: ID does not exist"
Feb 17 15:07:18.951535 master-0 kubenswrapper[8018]: I0217 15:07:18.951449 8018 scope.go:117] "RemoveContainer" containerID="e25ef4d4de66b3ffd3f590bda032ee8cda9109eed6a05975ad8ed0f50306f95e"
Feb 17 15:07:18.951861 master-0 kubenswrapper[8018]: E0217 15:07:18.951816 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e25ef4d4de66b3ffd3f590bda032ee8cda9109eed6a05975ad8ed0f50306f95e\": container with ID starting with e25ef4d4de66b3ffd3f590bda032ee8cda9109eed6a05975ad8ed0f50306f95e not found: ID does not exist" containerID="e25ef4d4de66b3ffd3f590bda032ee8cda9109eed6a05975ad8ed0f50306f95e"
Feb 17 15:07:18.951861 master-0 kubenswrapper[8018]: I0217 15:07:18.951850 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e25ef4d4de66b3ffd3f590bda032ee8cda9109eed6a05975ad8ed0f50306f95e"} err="failed to get container status \"e25ef4d4de66b3ffd3f590bda032ee8cda9109eed6a05975ad8ed0f50306f95e\": rpc error: code = NotFound desc = could not find container \"e25ef4d4de66b3ffd3f590bda032ee8cda9109eed6a05975ad8ed0f50306f95e\": container with ID starting with e25ef4d4de66b3ffd3f590bda032ee8cda9109eed6a05975ad8ed0f50306f95e not found: ID does not exist"
Feb 17 15:07:18.951986 master-0 kubenswrapper[8018]: I0217 15:07:18.951871 8018 scope.go:117] "RemoveContainer" containerID="c5052ce7c74d35fd56d2b65c411cf09269d730c14bf385a0a356573ac6d4ae86"
Feb 17 15:07:18.952342 master-0 kubenswrapper[8018]: E0217 15:07:18.952314 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5052ce7c74d35fd56d2b65c411cf09269d730c14bf385a0a356573ac6d4ae86\": container with ID starting with c5052ce7c74d35fd56d2b65c411cf09269d730c14bf385a0a356573ac6d4ae86 not found: ID does not exist" containerID="c5052ce7c74d35fd56d2b65c411cf09269d730c14bf385a0a356573ac6d4ae86"
Feb 17 15:07:18.952450 master-0 kubenswrapper[8018]: I0217 15:07:18.952425 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5052ce7c74d35fd56d2b65c411cf09269d730c14bf385a0a356573ac6d4ae86"} err="failed to get container status \"c5052ce7c74d35fd56d2b65c411cf09269d730c14bf385a0a356573ac6d4ae86\": rpc error: code = NotFound desc = could not find container \"c5052ce7c74d35fd56d2b65c411cf09269d730c14bf385a0a356573ac6d4ae86\": container with ID starting with c5052ce7c74d35fd56d2b65c411cf09269d730c14bf385a0a356573ac6d4ae86 not found: ID does not exist"
Feb 17 15:07:18.952556 master-0 kubenswrapper[8018]: I0217 15:07:18.952542 8018 scope.go:117] "RemoveContainer" containerID="d5738e21e97a228370369f51d6b435b8805640e7757385cb234f1ddd01723651"
Feb 17 15:07:18.952946 master-0 kubenswrapper[8018]: E0217 15:07:18.952911 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5738e21e97a228370369f51d6b435b8805640e7757385cb234f1ddd01723651\": container with ID starting with d5738e21e97a228370369f51d6b435b8805640e7757385cb234f1ddd01723651 not found: ID does not exist" containerID="d5738e21e97a228370369f51d6b435b8805640e7757385cb234f1ddd01723651"
Feb 17 15:07:18.952997 master-0 kubenswrapper[8018]: I0217 15:07:18.952947 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5738e21e97a228370369f51d6b435b8805640e7757385cb234f1ddd01723651"} err="failed to get container status \"d5738e21e97a228370369f51d6b435b8805640e7757385cb234f1ddd01723651\": rpc error: code = NotFound desc = could not find container \"d5738e21e97a228370369f51d6b435b8805640e7757385cb234f1ddd01723651\": container with ID starting with d5738e21e97a228370369f51d6b435b8805640e7757385cb234f1ddd01723651 not found: ID does not exist"
Feb 17 15:07:18.952997 master-0 kubenswrapper[8018]: I0217 15:07:18.952965 8018 scope.go:117] "RemoveContainer" containerID="0d43de2c98bf528ec1d0c3755bf0e52b97588f5907fd26bee582cfe625d16663"
Feb 17 15:07:18.980129 master-0 kubenswrapper[8018]: I0217 15:07:18.979572 8018 scope.go:117] "RemoveContainer" containerID="ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb"
Feb 17 15:07:18.980219 master-0 kubenswrapper[8018]: I0217 15:07:18.980139 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb"} err="failed to get container status \"ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb\": rpc error: code = NotFound desc = could not find container \"ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb\": container with ID starting with ba7d43ee55e9cc79c713cc376fecfc7d081f9f7386af0056ca03cf50c66477bb not found: ID does not exist"
Feb 17 15:07:19.013653 master-0 kubenswrapper[8018]: I0217 15:07:19.013604 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"]
Feb 17 15:07:19.018556 master-0 kubenswrapper[8018]: I0217 15:07:19.018482 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-76959b6567-v49tq"]
Feb 17 15:07:19.053481 master-0 kubenswrapper[8018]: I0217 15:07:19.053377 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:07:19.359382 master-0 kubenswrapper[8018]: I0217 15:07:19.359269 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7x72v" podStartSLOduration=166.778243704 podStartE2EDuration="3m41.359244502s" podCreationTimestamp="2026-02-17 15:03:38 +0000 UTC" firstStartedPulling="2026-02-17 15:03:39.091449215 +0000 UTC m=+51.843792265" lastFinishedPulling="2026-02-17 15:04:33.672449973 +0000 UTC m=+106.424793063" observedRunningTime="2026-02-17 15:07:19.353084488 +0000 UTC m=+272.105427608" watchObservedRunningTime="2026-02-17 15:07:19.359244502 +0000 UTC m=+272.111587592"
Feb 17 15:07:19.395600 master-0 kubenswrapper[8018]: I0217 15:07:19.395502 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-662mc"]
Feb 17 15:07:19.403673 master-0 kubenswrapper[8018]: I0217 15:07:19.403621 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-662mc"]
Feb 17 15:07:19.447271 master-0 kubenswrapper[8018]: I0217 15:07:19.447199 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be2df82-c77a-4d26-9498-fa3beea54b81" path="/var/lib/kubelet/pods/4be2df82-c77a-4d26-9498-fa3beea54b81/volumes"
Feb 17 15:07:19.447743 master-0 kubenswrapper[8018]: I0217 15:07:19.447698 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cee363d-411b-42ae-8f9f-cfaac068d992" path="/var/lib/kubelet/pods/6cee363d-411b-42ae-8f9f-cfaac068d992/volumes"
Feb 17 15:07:19.448216 master-0 kubenswrapper[8018]: I0217 15:07:19.448142 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f31fcfe-33ed-4e31-a12c-cb344093dcf4" path="/var/lib/kubelet/pods/9f31fcfe-33ed-4e31-a12c-cb344093dcf4/volumes"
Feb 17 15:07:19.449054 master-0 kubenswrapper[8018]: I0217 15:07:19.449011 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa4b45c7-fcd1-483b-97ae-df90a7c06f11" path="/var/lib/kubelet/pods/fa4b45c7-fcd1-483b-97ae-df90a7c06f11/volumes"
Feb 17 15:07:19.612992 master-0 kubenswrapper[8018]: I0217 15:07:19.612898 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=206.612868734 podStartE2EDuration="3m26.612868734s" podCreationTimestamp="2026-02-17 15:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:07:19.609291774 +0000 UTC m=+272.361634814" watchObservedRunningTime="2026-02-17 15:07:19.612868734 +0000 UTC m=+272.365211814"
Feb 17 15:07:19.913487 master-0 kubenswrapper[8018]: I0217 15:07:19.913261 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/1.log"
Feb 17 15:07:19.916235 master-0 kubenswrapper[8018]: I0217 15:07:19.916171 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/1.log"
Feb 17 15:07:19.933222 master-0 kubenswrapper[8018]: I0217 15:07:19.933127 8018 scope.go:117] "RemoveContainer" containerID="478ee796ae742b32516887947e3f7216f892a28bffb8fe796359a2bb89bd14cf"
Feb 17 15:07:19.933645 master-0 kubenswrapper[8018]: E0217 15:07:19.933589 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=route-controller-manager pod=route-controller-manager-6978b88779-vp5tv_openshift-route-controller-manager(3db03cef-d297-4bf7-8e52-dd0b18882d07)\"" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07"
Feb 17 15:07:20.033935 master-0 kubenswrapper[8018]: E0217 15:07:20.033833 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:07:20.707202 master-0 kubenswrapper[8018]: E0217 15:07:20.707146 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:07:20.708021 master-0 kubenswrapper[8018]: E0217 15:07:20.707353 8018 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 17 15:07:21.133868 master-0 kubenswrapper[8018]: I0217 15:07:21.133740 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:07:22.651766 master-0 kubenswrapper[8018]: I0217 15:07:22.651652 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:07:25.099338 master-0 kubenswrapper[8018]: I0217 15:07:25.099292 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:07:25.652089 master-0 kubenswrapper[8018]: I0217 15:07:25.651926 8018 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:07:31.305841 master-0 kubenswrapper[8018]: E0217 15:07:31.305676 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Feb 17 15:07:31.421045 master-0 kubenswrapper[8018]: I0217 15:07:31.420933 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:07:34.629828 master-0 kubenswrapper[8018]: E0217 15:07:34.629668 8018 projected.go:194] Error preparing data for projected volume kube-api-access-zr2dv for pod openshift-marketplace/community-operators-t8vtc: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:07:34.631138 master-0 kubenswrapper[8018]: E0217 15:07:34.629680 8018 projected.go:194] Error preparing data for projected volume kube-api-access-7gwpz for pod openshift-marketplace/certified-operators-2lg56: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:07:34.631138 master-0 kubenswrapper[8018]: E0217 15:07:34.629845 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv podName:c33efa80-fbeb-438a-86e3-d22d7c12d3e9 nodeName:}" failed. No retries permitted until 2026-02-17 15:07:50.629802345 +0000 UTC m=+303.382145475 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-zr2dv" (UniqueName: "kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv") pod "community-operators-t8vtc" (UID: "c33efa80-fbeb-438a-86e3-d22d7c12d3e9") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:07:34.631138 master-0 kubenswrapper[8018]: E0217 15:07:34.630011 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz podName:fc216ba1-144a-4cc8-93db-85ab558a166a nodeName:}" failed. No retries permitted until 2026-02-17 15:07:50.629978069 +0000 UTC m=+303.382321159 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-7gwpz" (UniqueName: "kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz") pod "certified-operators-2lg56" (UID: "fc216ba1-144a-4cc8-93db-85ab558a166a") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:07:35.440379 master-0 kubenswrapper[8018]: I0217 15:07:35.440299 8018 scope.go:117] "RemoveContainer" containerID="478ee796ae742b32516887947e3f7216f892a28bffb8fe796359a2bb89bd14cf"
Feb 17 15:07:35.652845 master-0 kubenswrapper[8018]: I0217 15:07:35.652716 8018 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:07:36.021331 master-0 kubenswrapper[8018]: I0217 15:07:36.021275 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/1.log"
Feb 17 15:07:36.021571 master-0 kubenswrapper[8018]: I0217 15:07:36.021339 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerStarted","Data":"533491bcdd7a1e81be78b60edc3ff96d870551db82df44a567112342369f625f"}
Feb 17 15:07:36.022071 master-0 kubenswrapper[8018]: I0217 15:07:36.022028 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:07:37.022260 master-0 kubenswrapper[8018]: I0217 15:07:37.022132 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:07:37.023272 master-0 kubenswrapper[8018]: I0217 15:07:37.022274 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:07:37.035447 master-0 kubenswrapper[8018]: E0217 15:07:37.035353 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:07:38.027696 master-0 kubenswrapper[8018]: I0217 15:07:38.027576 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:07:38.028218 master-0 kubenswrapper[8018]: I0217 15:07:38.027705 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:07:40.054806 master-0 kubenswrapper[8018]: I0217 15:07:40.054645 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:07:40.054806 master-0 kubenswrapper[8018]: I0217 15:07:40.054777 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:07:40.934217 master-0 kubenswrapper[8018]: E0217 15:07:40.933905 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:07:30Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:07:30Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:07:30Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:07:30Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3e90d0a6840e7f67900c763906a0628ddf209cb666c54c2dda0f4a84964a5cec\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:c71d0b62dff668e0f4be49e4976deda87032ae569a87f53898bd9e5489d8a621\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701476551},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:14398311b101163ddd1de78c093e161c5d3c9aac51a04e3d3d842fca6317ab0f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:5a091792b99bf4dfaec25f4c8e29da579e2f452d48b924c8323a18accb7f3290\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234637517},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:ad77d0ead8abca8b884fad3be18215dbe8b4f8f098053551e4a899298cf5c918\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:b5338e2ca87e0b47fec93f55559f0ed6b39eef3ed3b7f085a4f0b205ccb86a5d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1213306565},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\"],\\\"sizeBytes\\\":913084961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae0692143
2678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a614
14cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041\\\"],\\\"sizeBytes\\\":479280723},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b\\\"],\\\"sizeBytes\\\":479006001},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09\\\"],\\\"sizeBytes\\\":463090242},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\"],\\\"sizeBytes\\\":459915626},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c\\\"],\\\"sizeBytes\\\":458531660},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a\\\"],\\\"sizeBytes\\\":452956763},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956\\\"],\\\"sizeBytes\\\":451401927},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b\\\"],\\\"sizeBytes\\\":443654349},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e\\\"],\\\"sizeBytes\\\":442871962},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0\\\"],\\\"sizeBytes\\\":438101353}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:07:43.509573 master-0 kubenswrapper[8018]: E0217 15:07:43.509451 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" is forbidden: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges)" pod="openshift-etcd/etcd-master-0" Feb 17 15:07:45.652538 master-0 kubenswrapper[8018]: I0217 15:07:45.652405 8018 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:07:45.653228 master-0 kubenswrapper[8018]: I0217 15:07:45.652555 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:07:45.653228 master-0 kubenswrapper[8018]: I0217 15:07:45.653153 8018 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"b58581da3fb50e131d29602d71fb1722c3f379f23a65f6b4d93e4a6e939f2826"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 17 15:07:45.653319 master-0 kubenswrapper[8018]: I0217 15:07:45.653224 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" containerID="cri-o://b58581da3fb50e131d29602d71fb1722c3f379f23a65f6b4d93e4a6e939f2826" gracePeriod=30 Feb 17 15:07:45.775840 master-0 kubenswrapper[8018]: E0217 15:07:45.775780 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 17 15:07:46.087880 master-0 kubenswrapper[8018]: I0217 15:07:46.087700 8018 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="b58581da3fb50e131d29602d71fb1722c3f379f23a65f6b4d93e4a6e939f2826" exitCode=2 Feb 17 15:07:46.087880 master-0 kubenswrapper[8018]: I0217 15:07:46.087796 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"b58581da3fb50e131d29602d71fb1722c3f379f23a65f6b4d93e4a6e939f2826"} Feb 17 15:07:46.088213 master-0 kubenswrapper[8018]: I0217 15:07:46.087927 8018 scope.go:117] "RemoveContainer" containerID="aeaf0db4df08b7760a41fe052eda610af95afb9286eacbb74c1384cac818c4dd" Feb 17 15:07:46.088869 master-0 kubenswrapper[8018]: I0217 15:07:46.088796 8018 scope.go:117] "RemoveContainer" containerID="b58581da3fb50e131d29602d71fb1722c3f379f23a65f6b4d93e4a6e939f2826" Feb 17 15:07:46.089338 master-0 kubenswrapper[8018]: E0217 15:07:46.089253 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager 
pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 17 15:07:48.106644 master-0 kubenswrapper[8018]: I0217 15:07:48.106598 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/1.log" Feb 17 15:07:48.108369 master-0 kubenswrapper[8018]: I0217 15:07:48.108305 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/0.log" Feb 17 15:07:48.108784 master-0 kubenswrapper[8018]: I0217 15:07:48.108408 8018 generic.go:334] "Generic (PLEG): container finished" podID="129dba1e-73df-4ea4-96c0-3eba78d568ba" containerID="5c926a31e5a499cf89b540e143bdea5a4e85fe7e25d5738e5efec2253bdaaf8a" exitCode=1 Feb 17 15:07:48.108784 master-0 kubenswrapper[8018]: I0217 15:07:48.108508 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerDied","Data":"5c926a31e5a499cf89b540e143bdea5a4e85fe7e25d5738e5efec2253bdaaf8a"} Feb 17 15:07:48.108784 master-0 kubenswrapper[8018]: I0217 15:07:48.108580 8018 scope.go:117] "RemoveContainer" containerID="99addda3858d20caa2954c52d0e4203716a8b098e6c6d5e147015e80f102e5a9" Feb 17 15:07:48.109214 master-0 kubenswrapper[8018]: I0217 15:07:48.109165 8018 scope.go:117] "RemoveContainer" containerID="5c926a31e5a499cf89b540e143bdea5a4e85fe7e25d5738e5efec2253bdaaf8a" Feb 17 15:07:48.109619 master-0 kubenswrapper[8018]: E0217 15:07:48.109564 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-q4766_openshift-cluster-storage-operator(129dba1e-73df-4ea4-96c0-3eba78d568ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" podUID="129dba1e-73df-4ea4-96c0-3eba78d568ba" Feb 17 15:07:48.147168 master-0 kubenswrapper[8018]: I0217 15:07:48.147051 8018 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" start-of-body= Feb 17 15:07:48.147410 master-0 kubenswrapper[8018]: I0217 15:07:48.147198 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" Feb 17 15:07:48.505692 master-0 kubenswrapper[8018]: I0217 15:07:48.505622 8018 scope.go:117] "RemoveContainer" containerID="b13d746fb33147c34bbdc9c278d3605b58fe9a5ed8f1e19a36f86fe284caa4b2" Feb 17 15:07:49.115432 master-0 kubenswrapper[8018]: I0217 15:07:49.115360 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/1.log" Feb 17 15:07:50.054788 master-0 kubenswrapper[8018]: I0217 15:07:50.054615 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Feb 17 15:07:50.054788 master-0 kubenswrapper[8018]: I0217 15:07:50.054784 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:07:50.630188 master-0 kubenswrapper[8018]: I0217 15:07:50.630082 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwpz\" (UniqueName: \"kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:07:50.630188 master-0 kubenswrapper[8018]: I0217 15:07:50.630190 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2dv\" (UniqueName: \"kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:07:50.935329 master-0 kubenswrapper[8018]: E0217 15:07:50.935071 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:07:52.790361 master-0 kubenswrapper[8018]: E0217 15:07:52.790171 8018 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-operators-7x72v.189510f24e44e31d 
openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-7x72v,UID:2ac9a5d3-569e-4434-839e-691eacbe13df,APIVersion:v1,ResourceVersion:6851,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\" in 42.229s (42.229s including waiting). Image size: 1701476551 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:04:21.321425693 +0000 UTC m=+94.073768783,LastTimestamp:2026-02-17 15:04:21.321425693 +0000 UTC m=+94.073768783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:07:53.032847 master-0 kubenswrapper[8018]: I0217 15:07:53.032704 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 17 15:07:53.033873 master-0 kubenswrapper[8018]: I0217 15:07:53.033811 8018 scope.go:117] "RemoveContainer" containerID="b58581da3fb50e131d29602d71fb1722c3f379f23a65f6b4d93e4a6e939f2826" Feb 17 15:07:53.034363 master-0 kubenswrapper[8018]: E0217 15:07:53.034287 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 17 15:07:53.712168 master-0 kubenswrapper[8018]: I0217 15:07:53.712048 8018 patch_prober.go:28] interesting pod/etcd-operator-67bf55ccdd-pjm6n container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": 
dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 17 15:07:53.712168 master-0 kubenswrapper[8018]: I0217 15:07:53.712140 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" podUID="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 17 15:07:54.037870 master-0 kubenswrapper[8018]: E0217 15:07:54.037645 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 17 15:07:58.145923 master-0 kubenswrapper[8018]: I0217 15:07:58.145822 8018 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" start-of-body= Feb 17 15:07:58.146882 master-0 kubenswrapper[8018]: I0217 15:07:58.145922 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" Feb 17 15:08:00.055007 master-0 kubenswrapper[8018]: I0217 15:08:00.054863 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:08:00.055007 master-0 kubenswrapper[8018]: I0217 15:08:00.055015 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:08:00.440620 master-0 kubenswrapper[8018]: I0217 15:08:00.440563 8018 scope.go:117] "RemoveContainer" containerID="5c926a31e5a499cf89b540e143bdea5a4e85fe7e25d5738e5efec2253bdaaf8a" Feb 17 15:08:00.935799 master-0 kubenswrapper[8018]: E0217 15:08:00.935746 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:08:01.195420 master-0 kubenswrapper[8018]: I0217 15:08:01.195239 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/1.log" Feb 17 15:08:01.195420 master-0 kubenswrapper[8018]: I0217 15:08:01.195319 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerStarted","Data":"39e5d190c1de962c17b93f9f892d9c95fb301c2b359b235051f10e8c679da55c"} Feb 17 15:08:01.215239 master-0 kubenswrapper[8018]: E0217 15:08:01.215163 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-7gwpz], unattached volumes=[], failed to process volumes=[]: context 
deadline exceeded" pod="openshift-marketplace/certified-operators-2lg56" podUID="fc216ba1-144a-4cc8-93db-85ab558a166a" Feb 17 15:08:01.215535 master-0 kubenswrapper[8018]: E0217 15:08:01.215490 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-zr2dv], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-marketplace/community-operators-t8vtc" podUID="c33efa80-fbeb-438a-86e3-d22d7c12d3e9" Feb 17 15:08:02.202414 master-0 kubenswrapper[8018]: I0217 15:08:02.202323 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:08:02.202414 master-0 kubenswrapper[8018]: I0217 15:08:02.202384 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:08:05.440375 master-0 kubenswrapper[8018]: I0217 15:08:05.440283 8018 scope.go:117] "RemoveContainer" containerID="b58581da3fb50e131d29602d71fb1722c3f379f23a65f6b4d93e4a6e939f2826" Feb 17 15:08:05.441444 master-0 kubenswrapper[8018]: E0217 15:08:05.440709 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 17 15:08:06.237923 master-0 kubenswrapper[8018]: I0217 15:08:06.237794 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/2.log" Feb 17 15:08:06.238927 master-0 kubenswrapper[8018]: I0217 15:08:06.238854 8018 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/1.log" Feb 17 15:08:06.239063 master-0 kubenswrapper[8018]: I0217 15:08:06.238949 8018 generic.go:334] "Generic (PLEG): container finished" podID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerID="533491bcdd7a1e81be78b60edc3ff96d870551db82df44a567112342369f625f" exitCode=255 Feb 17 15:08:06.239063 master-0 kubenswrapper[8018]: I0217 15:08:06.239005 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerDied","Data":"533491bcdd7a1e81be78b60edc3ff96d870551db82df44a567112342369f625f"} Feb 17 15:08:06.239196 master-0 kubenswrapper[8018]: I0217 15:08:06.239088 8018 scope.go:117] "RemoveContainer" containerID="478ee796ae742b32516887947e3f7216f892a28bffb8fe796359a2bb89bd14cf" Feb 17 15:08:06.240346 master-0 kubenswrapper[8018]: I0217 15:08:06.240238 8018 scope.go:117] "RemoveContainer" containerID="533491bcdd7a1e81be78b60edc3ff96d870551db82df44a567112342369f625f" Feb 17 15:08:06.241078 master-0 kubenswrapper[8018]: E0217 15:08:06.240928 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=route-controller-manager pod=route-controller-manager-6978b88779-vp5tv_openshift-route-controller-manager(3db03cef-d297-4bf7-8e52-dd0b18882d07)\"" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" Feb 17 15:08:07.247759 master-0 kubenswrapper[8018]: I0217 15:08:07.247684 8018 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/2.log" Feb 17 15:08:08.146161 master-0 kubenswrapper[8018]: I0217 15:08:08.146070 8018 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" start-of-body= Feb 17 15:08:08.146437 master-0 kubenswrapper[8018]: I0217 15:08:08.146196 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" Feb 17 15:08:08.146437 master-0 kubenswrapper[8018]: I0217 15:08:08.146348 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:08:08.147232 master-0 kubenswrapper[8018]: I0217 15:08:08.147172 8018 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"2e491cb15463a078f03468285bf55e7f054cca1c528834a6f29b9effbdeb75f4"} pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Feb 17 15:08:08.147547 master-0 kubenswrapper[8018]: I0217 15:08:08.147246 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" 
containerID="cri-o://2e491cb15463a078f03468285bf55e7f054cca1c528834a6f29b9effbdeb75f4" gracePeriod=30
Feb 17 15:08:09.054500 master-0 kubenswrapper[8018]: I0217 15:08:09.054399 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:08:09.055443 master-0 kubenswrapper[8018]: I0217 15:08:09.055390 8018 scope.go:117] "RemoveContainer" containerID="533491bcdd7a1e81be78b60edc3ff96d870551db82df44a567112342369f625f"
Feb 17 15:08:09.055875 master-0 kubenswrapper[8018]: E0217 15:08:09.055818 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=route-controller-manager pod=route-controller-manager-6978b88779-vp5tv_openshift-route-controller-manager(3db03cef-d297-4bf7-8e52-dd0b18882d07)\"" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07"
Feb 17 15:08:09.263434 master-0 kubenswrapper[8018]: I0217 15:08:09.263348 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/2.log"
Feb 17 15:08:09.264040 master-0 kubenswrapper[8018]: I0217 15:08:09.263989 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/1.log"
Feb 17 15:08:09.264147 master-0 kubenswrapper[8018]: I0217 15:08:09.264051 8018 generic.go:334] "Generic (PLEG): container finished" podID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerID="2e491cb15463a078f03468285bf55e7f054cca1c528834a6f29b9effbdeb75f4" exitCode=255
Feb 17 15:08:09.264147 master-0 kubenswrapper[8018]: I0217 15:08:09.264094 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerDied","Data":"2e491cb15463a078f03468285bf55e7f054cca1c528834a6f29b9effbdeb75f4"}
Feb 17 15:08:09.264147 master-0 kubenswrapper[8018]: I0217 15:08:09.264134 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerStarted","Data":"e6c4e604cd376c77d1ad67bda0d96a444c6b00840760cb0d36d61ad455656dd0"}
Feb 17 15:08:09.264335 master-0 kubenswrapper[8018]: I0217 15:08:09.264166 8018 scope.go:117] "RemoveContainer" containerID="592fd1f4489b192ac6dc0d5fe3d0dffa1e8d7c60b36c2ffccbe5d580e08d861a"
Feb 17 15:08:10.273978 master-0 kubenswrapper[8018]: I0217 15:08:10.273894 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/2.log"
Feb 17 15:08:10.936513 master-0 kubenswrapper[8018]: E0217 15:08:10.936361 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:08:11.039738 master-0 kubenswrapper[8018]: E0217 15:08:11.039636 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:08:20.440977 master-0 kubenswrapper[8018]: I0217 15:08:20.440888 8018 scope.go:117] "RemoveContainer" containerID="b58581da3fb50e131d29602d71fb1722c3f379f23a65f6b4d93e4a6e939f2826"
Feb 17 15:08:20.442289 master-0 kubenswrapper[8018]: E0217 15:08:20.442223 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 17 15:08:20.937619 master-0 kubenswrapper[8018]: E0217 15:08:20.937507 8018 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:08:20.937619 master-0 kubenswrapper[8018]: E0217 15:08:20.937559 8018 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 17 15:08:22.440254 master-0 kubenswrapper[8018]: I0217 15:08:22.440031 8018 scope.go:117] "RemoveContainer" containerID="533491bcdd7a1e81be78b60edc3ff96d870551db82df44a567112342369f625f"
Feb 17 15:08:22.441149 master-0 kubenswrapper[8018]: E0217 15:08:22.440509 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=route-controller-manager pod=route-controller-manager-6978b88779-vp5tv_openshift-route-controller-manager(3db03cef-d297-4bf7-8e52-dd0b18882d07)\"" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07"
Feb 17 15:08:24.634985 master-0 kubenswrapper[8018]: E0217 15:08:24.634887 8018 projected.go:194] Error preparing data for projected volume kube-api-access-7gwpz for pod openshift-marketplace/certified-operators-2lg56: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:08:24.634985 master-0 kubenswrapper[8018]: E0217 15:08:24.634949 8018 projected.go:194] Error preparing data for projected volume kube-api-access-zr2dv for pod openshift-marketplace/community-operators-t8vtc: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:08:24.636164 master-0 kubenswrapper[8018]: E0217 15:08:24.635015 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz podName:fc216ba1-144a-4cc8-93db-85ab558a166a nodeName:}" failed. No retries permitted until 2026-02-17 15:08:56.63498289 +0000 UTC m=+369.387325980 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-7gwpz" (UniqueName: "kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz") pod "certified-operators-2lg56" (UID: "fc216ba1-144a-4cc8-93db-85ab558a166a") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:08:24.636164 master-0 kubenswrapper[8018]: E0217 15:08:24.635047 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv podName:c33efa80-fbeb-438a-86e3-d22d7c12d3e9 nodeName:}" failed. No retries permitted until 2026-02-17 15:08:56.635032711 +0000 UTC m=+369.387375801 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-zr2dv" (UniqueName: "kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv") pod "community-operators-t8vtc" (UID: "c33efa80-fbeb-438a-86e3-d22d7c12d3e9") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Feb 17 15:08:26.794552 master-0 kubenswrapper[8018]: E0217 15:08:26.794295 8018 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189510f253213ca6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:401699cb53e7098157e808a83125b0e4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:04:21.402975398 +0000 UTC m=+94.155318458,LastTimestamp:2026-02-17 15:04:21.402975398 +0000 UTC m=+94.155318458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:08:28.041836 master-0 kubenswrapper[8018]: E0217 15:08:28.041614 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:08:31.400728 master-0 kubenswrapper[8018]: I0217 15:08:31.400620 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/2.log"
Feb 17 15:08:31.401638 master-0 kubenswrapper[8018]: I0217 15:08:31.401541 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/1.log"
Feb 17 15:08:31.401638 master-0 kubenswrapper[8018]: I0217 15:08:31.401616 8018 generic.go:334] "Generic (PLEG): container finished" podID="129dba1e-73df-4ea4-96c0-3eba78d568ba" containerID="39e5d190c1de962c17b93f9f892d9c95fb301c2b359b235051f10e8c679da55c" exitCode=1
Feb 17 15:08:31.401870 master-0 kubenswrapper[8018]: I0217 15:08:31.401662 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerDied","Data":"39e5d190c1de962c17b93f9f892d9c95fb301c2b359b235051f10e8c679da55c"}
Feb 17 15:08:31.401870 master-0 kubenswrapper[8018]: I0217 15:08:31.401779 8018 scope.go:117] "RemoveContainer" containerID="5c926a31e5a499cf89b540e143bdea5a4e85fe7e25d5738e5efec2253bdaaf8a"
Feb 17 15:08:31.402581 master-0 kubenswrapper[8018]: I0217 15:08:31.402534 8018 scope.go:117] "RemoveContainer" containerID="39e5d190c1de962c17b93f9f892d9c95fb301c2b359b235051f10e8c679da55c"
Feb 17 15:08:31.402939 master-0 kubenswrapper[8018]: E0217 15:08:31.402882 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-q4766_openshift-cluster-storage-operator(129dba1e-73df-4ea4-96c0-3eba78d568ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" podUID="129dba1e-73df-4ea4-96c0-3eba78d568ba"
Feb 17 15:08:32.409944 master-0 kubenswrapper[8018]: I0217 15:08:32.409835 8018 generic.go:334] "Generic (PLEG): container finished" podID="801742a6-3735-4883-9676-e852dc4173d2" containerID="acb11f90f31b36431471e58a5606b8c3af358cc8197512729e33f3481e310e60" exitCode=0
Feb 17 15:08:32.409944 master-0 kubenswrapper[8018]: I0217 15:08:32.409920 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj" event={"ID":"801742a6-3735-4883-9676-e852dc4173d2","Type":"ContainerDied","Data":"acb11f90f31b36431471e58a5606b8c3af358cc8197512729e33f3481e310e60"}
Feb 17 15:08:32.410948 master-0 kubenswrapper[8018]: I0217 15:08:32.410406 8018 scope.go:117] "RemoveContainer" containerID="acb11f90f31b36431471e58a5606b8c3af358cc8197512729e33f3481e310e60"
Feb 17 15:08:32.413305 master-0 kubenswrapper[8018]: I0217 15:08:32.412606 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/2.log"
Feb 17 15:08:33.422514 master-0 kubenswrapper[8018]: I0217 15:08:33.422415 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj" event={"ID":"801742a6-3735-4883-9676-e852dc4173d2","Type":"ContainerStarted","Data":"397fbf5ccf990e80c088873d4e4e76e21d50aac3d21cada9a0e4b497c3afd20e"}
Feb 17 15:08:33.441005 master-0 kubenswrapper[8018]: I0217 15:08:33.440926 8018 scope.go:117] "RemoveContainer" containerID="b58581da3fb50e131d29602d71fb1722c3f379f23a65f6b4d93e4a6e939f2826"
Feb 17 15:08:33.441350 master-0 kubenswrapper[8018]: E0217 15:08:33.441252 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 17 15:08:33.792360 master-0 kubenswrapper[8018]: I0217 15:08:33.792207 8018 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 17 15:08:33.794572 master-0 kubenswrapper[8018]: I0217 15:08:33.794525 8018 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 17 15:08:33.795073 master-0 kubenswrapper[8018]: E0217 15:08:33.795046 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de71cc1-08c3-4295-ac86-745c9d4fbb46" containerName="installer"
Feb 17 15:08:33.795224 master-0 kubenswrapper[8018]: I0217 15:08:33.795204 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de71cc1-08c3-4295-ac86-745c9d4fbb46" containerName="installer"
Feb 17 15:08:33.795355 master-0 kubenswrapper[8018]: E0217 15:08:33.795335 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f31fcfe-33ed-4e31-a12c-cb344093dcf4" containerName="installer"
Feb 17 15:08:33.795510 master-0 kubenswrapper[8018]: I0217 15:08:33.795452 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f31fcfe-33ed-4e31-a12c-cb344093dcf4" containerName="installer"
Feb 17 15:08:33.795672 master-0 kubenswrapper[8018]: E0217 15:08:33.795650 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.795791 master-0 kubenswrapper[8018]: I0217 15:08:33.795772 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.795909 master-0 kubenswrapper[8018]: E0217 15:08:33.795890 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.796043 master-0 kubenswrapper[8018]: I0217 15:08:33.796023 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.796160 master-0 kubenswrapper[8018]: E0217 15:08:33.796141 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be2df82-c77a-4d26-9498-fa3beea54b81" containerName="cluster-version-operator"
Feb 17 15:08:33.796274 master-0 kubenswrapper[8018]: I0217 15:08:33.796256 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be2df82-c77a-4d26-9498-fa3beea54b81" containerName="cluster-version-operator"
Feb 17 15:08:33.796399 master-0 kubenswrapper[8018]: E0217 15:08:33.796378 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.796570 master-0 kubenswrapper[8018]: I0217 15:08:33.796546 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.796711 master-0 kubenswrapper[8018]: E0217 15:08:33.796691 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa4b45c7-fcd1-483b-97ae-df90a7c06f11" containerName="extract-content"
Feb 17 15:08:33.796825 master-0 kubenswrapper[8018]: I0217 15:08:33.796806 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4b45c7-fcd1-483b-97ae-df90a7c06f11" containerName="extract-content"
Feb 17 15:08:33.796940 master-0 kubenswrapper[8018]: E0217 15:08:33.796920 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03da22e3-956d-4c8a-bfd6-c1778e5d627c" containerName="installer"
Feb 17 15:08:33.797053 master-0 kubenswrapper[8018]: I0217 15:08:33.797035 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="03da22e3-956d-4c8a-bfd6-c1778e5d627c" containerName="installer"
Feb 17 15:08:33.797181 master-0 kubenswrapper[8018]: E0217 15:08:33.797162 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.797295 master-0 kubenswrapper[8018]: I0217 15:08:33.797277 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.797412 master-0 kubenswrapper[8018]: E0217 15:08:33.797393 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580b240a-a806-454d-ab19-8f193a8d9ca2" containerName="installer"
Feb 17 15:08:33.797594 master-0 kubenswrapper[8018]: I0217 15:08:33.797570 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="580b240a-a806-454d-ab19-8f193a8d9ca2" containerName="installer"
Feb 17 15:08:33.797749 master-0 kubenswrapper[8018]: E0217 15:08:33.797729 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cee363d-411b-42ae-8f9f-cfaac068d992" containerName="extract-content"
Feb 17 15:08:33.797882 master-0 kubenswrapper[8018]: I0217 15:08:33.797857 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cee363d-411b-42ae-8f9f-cfaac068d992" containerName="extract-content"
Feb 17 15:08:33.798027 master-0 kubenswrapper[8018]: E0217 15:08:33.798004 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa4b45c7-fcd1-483b-97ae-df90a7c06f11" containerName="extract-utilities"
Feb 17 15:08:33.798182 master-0 kubenswrapper[8018]: I0217 15:08:33.798160 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4b45c7-fcd1-483b-97ae-df90a7c06f11" containerName="extract-utilities"
Feb 17 15:08:33.798316 master-0 kubenswrapper[8018]: E0217 15:08:33.798297 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller"
Feb 17 15:08:33.798431 master-0 kubenswrapper[8018]: I0217 15:08:33.798412 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller"
Feb 17 15:08:33.798622 master-0 kubenswrapper[8018]: E0217 15:08:33.798598 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cee363d-411b-42ae-8f9f-cfaac068d992" containerName="extract-utilities"
Feb 17 15:08:33.798750 master-0 kubenswrapper[8018]: I0217 15:08:33.798729 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cee363d-411b-42ae-8f9f-cfaac068d992" containerName="extract-utilities"
Feb 17 15:08:33.799060 master-0 kubenswrapper[8018]: I0217 15:08:33.799026 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.799217 master-0 kubenswrapper[8018]: I0217 15:08:33.799196 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller"
Feb 17 15:08:33.799333 master-0 kubenswrapper[8018]: I0217 15:08:33.799314 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.799506 master-0 kubenswrapper[8018]: I0217 15:08:33.799438 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f31fcfe-33ed-4e31-a12c-cb344093dcf4" containerName="installer"
Feb 17 15:08:33.799697 master-0 kubenswrapper[8018]: I0217 15:08:33.799671 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.799810 master-0 kubenswrapper[8018]: I0217 15:08:33.799791 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.799932 master-0 kubenswrapper[8018]: I0217 15:08:33.799909 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.800051 master-0 kubenswrapper[8018]: I0217 15:08:33.800032 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="580b240a-a806-454d-ab19-8f193a8d9ca2" containerName="installer"
Feb 17 15:08:33.800160 master-0 kubenswrapper[8018]: I0217 15:08:33.800142 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de71cc1-08c3-4295-ac86-745c9d4fbb46" containerName="installer"
Feb 17 15:08:33.800275 master-0 kubenswrapper[8018]: I0217 15:08:33.800256 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be2df82-c77a-4d26-9498-fa3beea54b81" containerName="cluster-version-operator"
Feb 17 15:08:33.800392 master-0 kubenswrapper[8018]: I0217 15:08:33.800374 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa4b45c7-fcd1-483b-97ae-df90a7c06f11" containerName="extract-content"
Feb 17 15:08:33.800543 master-0 kubenswrapper[8018]: I0217 15:08:33.800523 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cee363d-411b-42ae-8f9f-cfaac068d992" containerName="extract-content"
Feb 17 15:08:33.800656 master-0 kubenswrapper[8018]: I0217 15:08:33.800637 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="03da22e3-956d-4c8a-bfd6-c1778e5d627c" containerName="installer"
Feb 17 15:08:33.800926 master-0 kubenswrapper[8018]: E0217 15:08:33.800903 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.801048 master-0 kubenswrapper[8018]: I0217 15:08:33.801029 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.801366 master-0 kubenswrapper[8018]: I0217 15:08:33.801343 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.801656 master-0 kubenswrapper[8018]: E0217 15:08:33.801632 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.801797 master-0 kubenswrapper[8018]: I0217 15:08:33.801777 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager"
Feb 17 15:08:33.802995 master-0 kubenswrapper[8018]: I0217 15:08:33.802963 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:33.918726 master-0 kubenswrapper[8018]: I0217 15:08:33.918569 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"27fd92ef556705625a2e4f1011322252\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:33.918726 master-0 kubenswrapper[8018]: I0217 15:08:33.918683 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"27fd92ef556705625a2e4f1011322252\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:34.020527 master-0 kubenswrapper[8018]: I0217 15:08:34.020404 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"27fd92ef556705625a2e4f1011322252\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:34.020527 master-0 kubenswrapper[8018]: I0217 15:08:34.020538 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"27fd92ef556705625a2e4f1011322252\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:34.020930 master-0 kubenswrapper[8018]: I0217 15:08:34.020755 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"27fd92ef556705625a2e4f1011322252\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:34.021223 master-0 kubenswrapper[8018]: I0217 15:08:34.021159 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"27fd92ef556705625a2e4f1011322252\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:34.449044 master-0 kubenswrapper[8018]: I0217 15:08:34.448930 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" containerID="cri-o://8e4f485693ac9a91f7bc7a84cdde902f639454acfd53f8608408575f632d2ecf" gracePeriod=30
Feb 17 15:08:35.440375 master-0 kubenswrapper[8018]: I0217 15:08:35.440248 8018 scope.go:117] "RemoveContainer" containerID="533491bcdd7a1e81be78b60edc3ff96d870551db82df44a567112342369f625f"
Feb 17 15:08:36.174398 master-0 kubenswrapper[8018]: I0217 15:08:36.174261 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:36.179051 master-0 kubenswrapper[8018]: I0217 15:08:36.178984 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 17 15:08:36.193190 master-0 kubenswrapper[8018]: W0217 15:08:36.192991 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27fd92ef556705625a2e4f1011322252.slice/crio-bdb8ad9bd5f944be0c16716ab7cf723ba4fecb8874a24d8035e247bed4275d02 WatchSource:0}: Error finding container bdb8ad9bd5f944be0c16716ab7cf723ba4fecb8874a24d8035e247bed4275d02: Status 404 returned error can't find the container with id bdb8ad9bd5f944be0c16716ab7cf723ba4fecb8874a24d8035e247bed4275d02
Feb 17 15:08:36.465993 master-0 kubenswrapper[8018]: I0217 15:08:36.465887 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"bdb8ad9bd5f944be0c16716ab7cf723ba4fecb8874a24d8035e247bed4275d02"}
Feb 17 15:08:36.468730 master-0 kubenswrapper[8018]: I0217 15:08:36.468668 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/2.log"
Feb 17 15:08:36.468814 master-0 kubenswrapper[8018]: I0217 15:08:36.468780 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerStarted","Data":"8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4"}
Feb 17 15:08:36.469193 master-0 kubenswrapper[8018]: I0217 15:08:36.469145 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:08:37.469957 master-0 kubenswrapper[8018]: I0217 15:08:37.469833 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:08:37.469957 master-0 kubenswrapper[8018]: I0217 15:08:37.469941 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:08:37.492350 master-0 kubenswrapper[8018]: I0217 15:08:37.492235 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"a93de2c6661a7a022268979fd5a510b5d956da3fa477eae77c55cc327249aabd"}
Feb 17 15:08:37.492634 master-0 kubenswrapper[8018]: I0217 15:08:37.492368 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"35fe638f6458381f305a5bf70c5f72c08dfe6647c1374e528fdd2425345b92ec"}
Feb 17 15:08:37.492634 master-0 kubenswrapper[8018]: I0217 15:08:37.492407 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"586cd7bd6a1810c0723f91d86622f61df00ac6288e65656c44c07b725975aa6c"}
Feb 17 15:08:37.492634 master-0 kubenswrapper[8018]: I0217 15:08:37.492439 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0"}
Feb 17 15:08:38.492969 master-0 kubenswrapper[8018]: I0217 15:08:38.492921 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:08:38.493652 master-0 kubenswrapper[8018]: I0217 15:08:38.493607 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:08:38.501593 master-0 kubenswrapper[8018]: I0217 15:08:38.501500 8018 generic.go:334] "Generic (PLEG): container finished" podID="b0f95c87-6a4a-44f2-b6d4-18f167ea430f" containerID="0782c7f0d5ddfa48d6cd6d3f38b88b85eb9375711ddb12c97f5638b11c8924d5" exitCode=0
Feb 17 15:08:38.501753 master-0 kubenswrapper[8018]: I0217 15:08:38.501518 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" event={"ID":"b0f95c87-6a4a-44f2-b6d4-18f167ea430f","Type":"ContainerDied","Data":"0782c7f0d5ddfa48d6cd6d3f38b88b85eb9375711ddb12c97f5638b11c8924d5"}
Feb 17 15:08:38.502794 master-0 kubenswrapper[8018]: I0217 15:08:38.502726 8018 scope.go:117] "RemoveContainer" containerID="0782c7f0d5ddfa48d6cd6d3f38b88b85eb9375711ddb12c97f5638b11c8924d5"
Feb 17 15:08:38.527416 master-0 kubenswrapper[8018]: I0217 15:08:38.527314 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.527275899 podStartE2EDuration="2.527275899s" podCreationTimestamp="2026-02-17 15:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:08:37.531136177 +0000 UTC m=+350.283479267" watchObservedRunningTime="2026-02-17 15:08:38.527275899 +0000 UTC m=+351.279618989"
Feb 17 15:08:39.145678 master-0 kubenswrapper[8018]: I0217 15:08:39.145619 8018 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:08:39.146014 master-0 kubenswrapper[8018]: I0217 15:08:39.145982 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:08:39.510239 master-0 kubenswrapper[8018]: I0217 15:08:39.510119 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" event={"ID":"b0f95c87-6a4a-44f2-b6d4-18f167ea430f","Type":"ContainerStarted","Data":"61b2318958d23ebdf6e3bca6a8a2b1ccba3a4aa509b4a359e7fb8a050a5801c3"}
Feb 17 15:08:40.055330 master-0 kubenswrapper[8018]: I0217 15:08:40.055252 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:08:40.055709 master-0 kubenswrapper[8018]: I0217 15:08:40.055356 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:08:43.441158 master-0 kubenswrapper[8018]: I0217 15:08:43.441041 8018 scope.go:117] "RemoveContainer" containerID="39e5d190c1de962c17b93f9f892d9c95fb301c2b359b235051f10e8c679da55c"
Feb 17 15:08:43.442725 master-0 kubenswrapper[8018]: E0217 15:08:43.441368 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-q4766_openshift-cluster-storage-operator(129dba1e-73df-4ea4-96c0-3eba78d568ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" podUID="129dba1e-73df-4ea4-96c0-3eba78d568ba"
Feb 17 15:08:45.043543 master-0 kubenswrapper[8018]: E0217 15:08:45.043348 8018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:08:45.553616 master-0 kubenswrapper[8018]: I0217 15:08:45.553525 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-mzk89_6c734c89-515e-4ff0-82d1-831ddaf0b99e/cluster-olm-operator/0.log"
Feb 17 15:08:45.554877 master-0 kubenswrapper[8018]: I0217 15:08:45.554818 8018 generic.go:334] "Generic (PLEG): container finished" podID="6c734c89-515e-4ff0-82d1-831ddaf0b99e" containerID="db0dcecfe2a042268864f0d7f4d56cbdc089e71bde33d4f68886ce775e3eeb52" exitCode=0
Feb 17 15:08:45.554999 master-0 kubenswrapper[8018]: I0217 15:08:45.554880 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" event={"ID":"6c734c89-515e-4ff0-82d1-831ddaf0b99e","Type":"ContainerDied","Data":"db0dcecfe2a042268864f0d7f4d56cbdc089e71bde33d4f68886ce775e3eeb52"}
Feb 17 15:08:45.554999 master-0 kubenswrapper[8018]: I0217 15:08:45.554974 8018 scope.go:117] "RemoveContainer" containerID="ab1f920a647980800ae08efae1274805a32af351c37c8743a9d7313eb1fca48b"
Feb 17 15:08:45.555768 master-0 kubenswrapper[8018]: I0217 15:08:45.555706 8018 scope.go:117] "RemoveContainer" containerID="db0dcecfe2a042268864f0d7f4d56cbdc089e71bde33d4f68886ce775e3eeb52"
Feb 17 15:08:45.556111 master-0 kubenswrapper[8018]: E0217 15:08:45.556042 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-olm-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-olm-operator pod=cluster-olm-operator-55b69c6c48-mzk89_openshift-cluster-olm-operator(6c734c89-515e-4ff0-82d1-831ddaf0b99e)\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" podUID="6c734c89-515e-4ff0-82d1-831ddaf0b99e"
Feb 17 15:08:46.175118 master-0 kubenswrapper[8018]: I0217 15:08:46.174992 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:46.175118 master-0 kubenswrapper[8018]: I0217 15:08:46.175095 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:46.175118 master-0 kubenswrapper[8018]: I0217 15:08:46.175116 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:46.175118 master-0 kubenswrapper[8018]: I0217 15:08:46.175136 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:46.183652 master-0 kubenswrapper[8018]: I0217 15:08:46.183573 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:46.572701 master-0 kubenswrapper[8018]: I0217 15:08:46.572432 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:48.584193 master-0 kubenswrapper[8018]: I0217 15:08:48.584090 8018 generic.go:334] "Generic (PLEG): container finished" podID="d5655115-c223-42ed-a93d-9d609e55c901" containerID="a7a559907a49f4d8137e14ad794efe3aea73d7c66ce8d886c715988a380ea29f" exitCode=0
Feb 17 15:08:48.584976 master-0 kubenswrapper[8018]: I0217 15:08:48.584216 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"d5655115-c223-42ed-a93d-9d609e55c901","Type":"ContainerDied","Data":"a7a559907a49f4d8137e14ad794efe3aea73d7c66ce8d886c715988a380ea29f"}
Feb 17
15:08:48.587001 master-0 kubenswrapper[8018]: I0217 15:08:48.586944 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/2.log" Feb 17 15:08:48.587822 master-0 kubenswrapper[8018]: I0217 15:08:48.587763 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/1.log" Feb 17 15:08:48.587822 master-0 kubenswrapper[8018]: I0217 15:08:48.587812 8018 generic.go:334] "Generic (PLEG): container finished" podID="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" containerID="81aaf4a8e92ad8167ce2d8a4500268568ecd4d12b11466d397ae290644672b32" exitCode=255 Feb 17 15:08:48.588092 master-0 kubenswrapper[8018]: I0217 15:08:48.587854 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" event={"ID":"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda","Type":"ContainerDied","Data":"81aaf4a8e92ad8167ce2d8a4500268568ecd4d12b11466d397ae290644672b32"} Feb 17 15:08:48.588092 master-0 kubenswrapper[8018]: I0217 15:08:48.587928 8018 scope.go:117] "RemoveContainer" containerID="304679e66f000484b85f89bc09bd351cba1f664073d85860e51117843af4fd58" Feb 17 15:08:48.588539 master-0 kubenswrapper[8018]: I0217 15:08:48.588448 8018 scope.go:117] "RemoveContainer" containerID="81aaf4a8e92ad8167ce2d8a4500268568ecd4d12b11466d397ae290644672b32" Feb 17 15:08:48.588697 master-0 kubenswrapper[8018]: E0217 15:08:48.588663 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-controller-manager-operator 
pod=openshift-controller-manager-operator-5f5f84757d-dsfkk_openshift-controller-manager-operator(c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" podUID="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" Feb 17 15:08:48.590898 master-0 kubenswrapper[8018]: I0217 15:08:48.589820 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-wcpf8_2b167b7b-2280-4c82-ac78-71c57aebe503/kube-scheduler-operator-container/1.log" Feb 17 15:08:48.591525 master-0 kubenswrapper[8018]: I0217 15:08:48.591436 8018 generic.go:334] "Generic (PLEG): container finished" podID="2b167b7b-2280-4c82-ac78-71c57aebe503" containerID="477671fff24fa6c32a024908ab3cc22818f79df79458186eb17cd6a91eb44b4f" exitCode=255 Feb 17 15:08:48.591710 master-0 kubenswrapper[8018]: I0217 15:08:48.591528 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" event={"ID":"2b167b7b-2280-4c82-ac78-71c57aebe503","Type":"ContainerDied","Data":"477671fff24fa6c32a024908ab3cc22818f79df79458186eb17cd6a91eb44b4f"} Feb 17 15:08:48.592075 master-0 kubenswrapper[8018]: I0217 15:08:48.592026 8018 scope.go:117] "RemoveContainer" containerID="477671fff24fa6c32a024908ab3cc22818f79df79458186eb17cd6a91eb44b4f" Feb 17 15:08:48.592276 master-0 kubenswrapper[8018]: E0217 15:08:48.592228 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler-operator-container pod=openshift-kube-scheduler-operator-7485d55966-wcpf8_openshift-kube-scheduler-operator(2b167b7b-2280-4c82-ac78-71c57aebe503)\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" podUID="2b167b7b-2280-4c82-ac78-71c57aebe503" Feb 17 
15:08:48.629067 master-0 kubenswrapper[8018]: I0217 15:08:48.629027 8018 scope.go:117] "RemoveContainer" containerID="4c453c258107dc05c66b4fe7dfb751fa16a6ada9afb337ed9bd51bf0bf1e157f" Feb 17 15:08:49.145668 master-0 kubenswrapper[8018]: I0217 15:08:49.145475 8018 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:08:49.145668 master-0 kubenswrapper[8018]: I0217 15:08:49.145588 8018 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:08:49.175851 master-0 kubenswrapper[8018]: I0217 15:08:49.175633 8018 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:08:49.176020 master-0 kubenswrapper[8018]: I0217 15:08:49.175847 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 
15:08:49.601271 master-0 kubenswrapper[8018]: I0217 15:08:49.601086 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-wcpf8_2b167b7b-2280-4c82-ac78-71c57aebe503/kube-scheduler-operator-container/1.log" Feb 17 15:08:49.603780 master-0 kubenswrapper[8018]: I0217 15:08:49.603727 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/1.log" Feb 17 15:08:49.604490 master-0 kubenswrapper[8018]: I0217 15:08:49.604385 8018 generic.go:334] "Generic (PLEG): container finished" podID="af61bda0-c7b4-489d-a671-eaa5299942fe" containerID="398a6ec9ab16d8c9b51a94b166012be81bd6e66e2c357cd186d8526d7f9bb69c" exitCode=255 Feb 17 15:08:49.604647 master-0 kubenswrapper[8018]: I0217 15:08:49.604529 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" event={"ID":"af61bda0-c7b4-489d-a671-eaa5299942fe","Type":"ContainerDied","Data":"398a6ec9ab16d8c9b51a94b166012be81bd6e66e2c357cd186d8526d7f9bb69c"} Feb 17 15:08:49.604759 master-0 kubenswrapper[8018]: I0217 15:08:49.604645 8018 scope.go:117] "RemoveContainer" containerID="bf1c4446a3533f26fa5487fb18cd78bb806fca2fbee2a1ee4a787dfdef4578a7" Feb 17 15:08:49.605066 master-0 kubenswrapper[8018]: I0217 15:08:49.605021 8018 scope.go:117] "RemoveContainer" containerID="398a6ec9ab16d8c9b51a94b166012be81bd6e66e2c357cd186d8526d7f9bb69c" Feb 17 15:08:49.605232 master-0 kubenswrapper[8018]: E0217 15:08:49.605204 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver-operator 
pod=openshift-apiserver-operator-6d4655d9cf-5f5g9_openshift-apiserver-operator(af61bda0-c7b4-489d-a671-eaa5299942fe)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" podUID="af61bda0-c7b4-489d-a671-eaa5299942fe" Feb 17 15:08:49.607561 master-0 kubenswrapper[8018]: I0217 15:08:49.606866 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/2.log" Feb 17 15:08:49.608285 master-0 kubenswrapper[8018]: I0217 15:08:49.608228 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/1.log" Feb 17 15:08:49.608375 master-0 kubenswrapper[8018]: I0217 15:08:49.608313 8018 generic.go:334] "Generic (PLEG): container finished" podID="553d4535-9985-47e2-83ee-8fcfb6035e7b" containerID="13fd27ae7e51b2ce5e96bcf2c8231506a7b48822721ae68c680d8a96bd1e5103" exitCode=255 Feb 17 15:08:49.608503 master-0 kubenswrapper[8018]: I0217 15:08:49.608423 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" event={"ID":"553d4535-9985-47e2-83ee-8fcfb6035e7b","Type":"ContainerDied","Data":"13fd27ae7e51b2ce5e96bcf2c8231506a7b48822721ae68c680d8a96bd1e5103"} Feb 17 15:08:49.609222 master-0 kubenswrapper[8018]: I0217 15:08:49.609169 8018 scope.go:117] "RemoveContainer" containerID="13fd27ae7e51b2ce5e96bcf2c8231506a7b48822721ae68c680d8a96bd1e5103" Feb 17 15:08:49.609605 master-0 kubenswrapper[8018]: E0217 15:08:49.609552 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=kube-controller-manager-operator pod=kube-controller-manager-operator-78ff47c7c5-xvzq9_openshift-kube-controller-manager-operator(553d4535-9985-47e2-83ee-8fcfb6035e7b)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" podUID="553d4535-9985-47e2-83ee-8fcfb6035e7b" Feb 17 15:08:49.611769 master-0 kubenswrapper[8018]: I0217 15:08:49.611725 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/2.log" Feb 17 15:08:49.612602 master-0 kubenswrapper[8018]: I0217 15:08:49.612548 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/1.log" Feb 17 15:08:49.612770 master-0 kubenswrapper[8018]: I0217 15:08:49.612634 8018 generic.go:334] "Generic (PLEG): container finished" podID="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" containerID="47a0663eadceb8ac2b92b936021f5bf1e155eb2c91b070318a1766570bc56359" exitCode=255 Feb 17 15:08:49.612770 master-0 kubenswrapper[8018]: I0217 15:08:49.612741 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" event={"ID":"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40","Type":"ContainerDied","Data":"47a0663eadceb8ac2b92b936021f5bf1e155eb2c91b070318a1766570bc56359"} Feb 17 15:08:49.613393 master-0 kubenswrapper[8018]: I0217 15:08:49.613331 8018 scope.go:117] "RemoveContainer" containerID="47a0663eadceb8ac2b92b936021f5bf1e155eb2c91b070318a1766570bc56359" Feb 17 15:08:49.613699 master-0 kubenswrapper[8018]: E0217 15:08:49.613639 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=etcd-operator pod=etcd-operator-67bf55ccdd-pjm6n_openshift-etcd-operator(f2546ffc-8d0a-4010-a3bd-9e69b6dbea40)\"" 
pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" podUID="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" Feb 17 15:08:49.614617 master-0 kubenswrapper[8018]: I0217 15:08:49.614566 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/2.log" Feb 17 15:08:49.615302 master-0 kubenswrapper[8018]: I0217 15:08:49.615245 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/1.log" Feb 17 15:08:49.615414 master-0 kubenswrapper[8018]: I0217 15:08:49.615310 8018 generic.go:334] "Generic (PLEG): container finished" podID="e259b5a1-837b-4cde-85f7-cd5781af08bd" containerID="c37b7a8b6b89d90619e0434b3f19d1c552551ee3029bb3ef42107c3c450c9cb1" exitCode=255 Feb 17 15:08:49.615414 master-0 kubenswrapper[8018]: I0217 15:08:49.615362 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" event={"ID":"e259b5a1-837b-4cde-85f7-cd5781af08bd","Type":"ContainerDied","Data":"c37b7a8b6b89d90619e0434b3f19d1c552551ee3029bb3ef42107c3c450c9cb1"} Feb 17 15:08:49.616202 master-0 kubenswrapper[8018]: I0217 15:08:49.616134 8018 scope.go:117] "RemoveContainer" containerID="c37b7a8b6b89d90619e0434b3f19d1c552551ee3029bb3ef42107c3c450c9cb1" Feb 17 15:08:49.616609 master-0 kubenswrapper[8018]: E0217 15:08:49.616532 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-54984b6678-p5mdv_openshift-kube-apiserver-operator(e259b5a1-837b-4cde-85f7-cd5781af08bd)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" 
podUID="e259b5a1-837b-4cde-85f7-cd5781af08bd" Feb 17 15:08:49.617157 master-0 kubenswrapper[8018]: I0217 15:08:49.617098 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/2.log" Feb 17 15:08:49.619085 master-0 kubenswrapper[8018]: I0217 15:08:49.619016 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-l24cg_4fd2c79d-1e10-4f09-8a33-c66598abc99a/network-operator/1.log" Feb 17 15:08:49.620078 master-0 kubenswrapper[8018]: I0217 15:08:49.620021 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-l24cg_4fd2c79d-1e10-4f09-8a33-c66598abc99a/network-operator/0.log" Feb 17 15:08:49.620206 master-0 kubenswrapper[8018]: I0217 15:08:49.620114 8018 generic.go:334] "Generic (PLEG): container finished" podID="4fd2c79d-1e10-4f09-8a33-c66598abc99a" containerID="6d9a92eb2e644f956d98f7c0c8da65baf4f27d9eba13c8c64b77e173d1e323c4" exitCode=255 Feb 17 15:08:49.620281 master-0 kubenswrapper[8018]: I0217 15:08:49.620241 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" event={"ID":"4fd2c79d-1e10-4f09-8a33-c66598abc99a","Type":"ContainerDied","Data":"6d9a92eb2e644f956d98f7c0c8da65baf4f27d9eba13c8c64b77e173d1e323c4"} Feb 17 15:08:49.620983 master-0 kubenswrapper[8018]: I0217 15:08:49.620912 8018 scope.go:117] "RemoveContainer" containerID="6d9a92eb2e644f956d98f7c0c8da65baf4f27d9eba13c8c64b77e173d1e323c4" Feb 17 15:08:49.621388 master-0 kubenswrapper[8018]: E0217 15:08:49.621318 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=network-operator 
pod=network-operator-6fcf4c966-l24cg_openshift-network-operator(4fd2c79d-1e10-4f09-8a33-c66598abc99a)\"" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" podUID="4fd2c79d-1e10-4f09-8a33-c66598abc99a" Feb 17 15:08:49.622211 master-0 kubenswrapper[8018]: I0217 15:08:49.622153 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-tckph_0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/kube-storage-version-migrator-operator/2.log" Feb 17 15:08:49.622797 master-0 kubenswrapper[8018]: I0217 15:08:49.622742 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-tckph_0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/kube-storage-version-migrator-operator/1.log" Feb 17 15:08:49.622902 master-0 kubenswrapper[8018]: I0217 15:08:49.622817 8018 generic.go:334] "Generic (PLEG): container finished" podID="0c58265d-32fb-4cf0-97d8-6c9a5d37fad9" containerID="f39a2941da8acf9c022d9ee8fee7bd53fe9f2ec2201845d6f776f31736d87bf2" exitCode=255 Feb 17 15:08:49.622902 master-0 kubenswrapper[8018]: I0217 15:08:49.622867 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" event={"ID":"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9","Type":"ContainerDied","Data":"f39a2941da8acf9c022d9ee8fee7bd53fe9f2ec2201845d6f776f31736d87bf2"} Feb 17 15:08:49.623609 master-0 kubenswrapper[8018]: I0217 15:08:49.623516 8018 scope.go:117] "RemoveContainer" containerID="f39a2941da8acf9c022d9ee8fee7bd53fe9f2ec2201845d6f776f31736d87bf2" Feb 17 15:08:49.623962 master-0 kubenswrapper[8018]: E0217 15:08:49.623890 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-cd5474998-tckph_openshift-kube-storage-version-migrator-operator(0c58265d-32fb-4cf0-97d8-6c9a5d37fad9)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" podUID="0c58265d-32fb-4cf0-97d8-6c9a5d37fad9" Feb 17 15:08:49.625203 master-0 kubenswrapper[8018]: I0217 15:08:49.625154 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/2.log" Feb 17 15:08:49.625874 master-0 kubenswrapper[8018]: I0217 15:08:49.625824 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/1.log" Feb 17 15:08:49.625943 master-0 kubenswrapper[8018]: I0217 15:08:49.625919 8018 generic.go:334] "Generic (PLEG): container finished" podID="65d9f008-7777-48fe-85fe-9d54a7bbcea9" containerID="29887de882fd8a3a22e87156cef67aeb00ac494c3b04550882c5426a5a9c25ec" exitCode=255 Feb 17 15:08:49.626084 master-0 kubenswrapper[8018]: I0217 15:08:49.626039 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" event={"ID":"65d9f008-7777-48fe-85fe-9d54a7bbcea9","Type":"ContainerDied","Data":"29887de882fd8a3a22e87156cef67aeb00ac494c3b04550882c5426a5a9c25ec"} Feb 17 15:08:49.626871 master-0 kubenswrapper[8018]: I0217 15:08:49.626838 8018 scope.go:117] "RemoveContainer" containerID="29887de882fd8a3a22e87156cef67aeb00ac494c3b04550882c5426a5a9c25ec" Feb 17 15:08:49.627173 master-0 kubenswrapper[8018]: E0217 15:08:49.627139 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-operator 
pod=service-ca-operator-5dc4688546-sg75p_openshift-service-ca-operator(65d9f008-7777-48fe-85fe-9d54a7bbcea9)\"" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" podUID="65d9f008-7777-48fe-85fe-9d54a7bbcea9" Feb 17 15:08:49.648026 master-0 kubenswrapper[8018]: I0217 15:08:49.647925 8018 scope.go:117] "RemoveContainer" containerID="340573b8d1d2fd7984cea5fe0c4a8980e05ea1fdc083142e4116628f70afce5b" Feb 17 15:08:49.698534 master-0 kubenswrapper[8018]: I0217 15:08:49.698452 8018 scope.go:117] "RemoveContainer" containerID="5f1383fa29670e8399de14c8b9f6cb880364f1cbb05c5a18de5ffeee2b6f9305" Feb 17 15:08:49.732284 master-0 kubenswrapper[8018]: I0217 15:08:49.732203 8018 scope.go:117] "RemoveContainer" containerID="748ddd89ff1e149998fbf333fbd90fc60ec09c72d81c0bd70bffe49c3c2956e5" Feb 17 15:08:49.771531 master-0 kubenswrapper[8018]: I0217 15:08:49.769711 8018 scope.go:117] "RemoveContainer" containerID="10d84ccff2961ae0ad3f02bc199d5d344c04cfb73f881e75241a2774459f1897" Feb 17 15:08:49.875343 master-0 kubenswrapper[8018]: I0217 15:08:49.875233 8018 scope.go:117] "RemoveContainer" containerID="99575ed26994aa5ecd0c47b8a6bc5878c7ca9d6e22edcdacbfec6cc81ef72b03" Feb 17 15:08:49.917270 master-0 kubenswrapper[8018]: I0217 15:08:49.917221 8018 scope.go:117] "RemoveContainer" containerID="b7412e68637ba105d252df621478eb608de8c9219211183f7a22988f3e676f09" Feb 17 15:08:49.958008 master-0 kubenswrapper[8018]: I0217 15:08:49.957951 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 17 15:08:50.054344 master-0 kubenswrapper[8018]: I0217 15:08:50.054203 8018 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:08:50.054344 master-0 kubenswrapper[8018]: I0217 15:08:50.054332 8018 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:08:50.077008 master-0 kubenswrapper[8018]: I0217 15:08:50.076956 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5655115-c223-42ed-a93d-9d609e55c901-var-lock\") pod \"d5655115-c223-42ed-a93d-9d609e55c901\" (UID: \"d5655115-c223-42ed-a93d-9d609e55c901\") " Feb 17 15:08:50.077117 master-0 kubenswrapper[8018]: I0217 15:08:50.077066 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5655115-c223-42ed-a93d-9d609e55c901-kubelet-dir\") pod \"d5655115-c223-42ed-a93d-9d609e55c901\" (UID: \"d5655115-c223-42ed-a93d-9d609e55c901\") " Feb 17 15:08:50.077155 master-0 kubenswrapper[8018]: I0217 15:08:50.077135 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5655115-c223-42ed-a93d-9d609e55c901-kube-api-access\") pod 
\"d5655115-c223-42ed-a93d-9d609e55c901\" (UID: \"d5655115-c223-42ed-a93d-9d609e55c901\") " Feb 17 15:08:50.077299 master-0 kubenswrapper[8018]: I0217 15:08:50.077251 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5655115-c223-42ed-a93d-9d609e55c901-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d5655115-c223-42ed-a93d-9d609e55c901" (UID: "d5655115-c223-42ed-a93d-9d609e55c901"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:08:50.077341 master-0 kubenswrapper[8018]: I0217 15:08:50.077251 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5655115-c223-42ed-a93d-9d609e55c901-var-lock" (OuterVolumeSpecName: "var-lock") pod "d5655115-c223-42ed-a93d-9d609e55c901" (UID: "d5655115-c223-42ed-a93d-9d609e55c901"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:08:50.077521 master-0 kubenswrapper[8018]: I0217 15:08:50.077499 8018 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5655115-c223-42ed-a93d-9d609e55c901-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:08:50.077587 master-0 kubenswrapper[8018]: I0217 15:08:50.077520 8018 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5655115-c223-42ed-a93d-9d609e55c901-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:08:50.080374 master-0 kubenswrapper[8018]: I0217 15:08:50.080321 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5655115-c223-42ed-a93d-9d609e55c901-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d5655115-c223-42ed-a93d-9d609e55c901" (UID: "d5655115-c223-42ed-a93d-9d609e55c901"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:08:50.178649 master-0 kubenswrapper[8018]: I0217 15:08:50.178566 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5655115-c223-42ed-a93d-9d609e55c901-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 17 15:08:50.633448 master-0 kubenswrapper[8018]: I0217 15:08:50.633389 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/2.log"
Feb 17 15:08:50.635638 master-0 kubenswrapper[8018]: I0217 15:08:50.635583 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 17 15:08:50.636098 master-0 kubenswrapper[8018]: I0217 15:08:50.635692 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"d5655115-c223-42ed-a93d-9d609e55c901","Type":"ContainerDied","Data":"3aa0fac2ee75614ddf9c33905ca49667c9eb5815d489ea328caebd435d408a71"}
Feb 17 15:08:50.636098 master-0 kubenswrapper[8018]: I0217 15:08:50.636096 8018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aa0fac2ee75614ddf9c33905ca49667c9eb5815d489ea328caebd435d408a71"
Feb 17 15:08:50.637351 master-0 kubenswrapper[8018]: I0217 15:08:50.637238 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/2.log"
Feb 17 15:08:50.639163 master-0 kubenswrapper[8018]: I0217 15:08:50.639128 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-l24cg_4fd2c79d-1e10-4f09-8a33-c66598abc99a/network-operator/1.log"
Feb 17 15:08:50.640518 master-0 kubenswrapper[8018]: I0217 15:08:50.640482 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/2.log"
Feb 17 15:08:50.642087 master-0 kubenswrapper[8018]: I0217 15:08:50.642051 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/2.log"
Feb 17 15:08:50.644170 master-0 kubenswrapper[8018]: I0217 15:08:50.644131 8018 generic.go:334] "Generic (PLEG): container finished" podID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerID="b67b9db47d025278eedfe7f04574ddab8f98126aef0c22b6f402dd2396b510a8" exitCode=0
Feb 17 15:08:50.644264 master-0 kubenswrapper[8018]: I0217 15:08:50.644163 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerDied","Data":"b67b9db47d025278eedfe7f04574ddab8f98126aef0c22b6f402dd2396b510a8"}
Feb 17 15:08:50.644264 master-0 kubenswrapper[8018]: I0217 15:08:50.644206 8018 scope.go:117] "RemoveContainer" containerID="49fb045b32e2f71ec7c2565d556ca4beff6373bd7b27c95db6da3102666e0048"
Feb 17 15:08:50.644739 master-0 kubenswrapper[8018]: I0217 15:08:50.644696 8018 scope.go:117] "RemoveContainer" containerID="b67b9db47d025278eedfe7f04574ddab8f98126aef0c22b6f402dd2396b510a8"
Feb 17 15:08:50.645063 master-0 kubenswrapper[8018]: E0217 15:08:50.645003 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-fcnqs_openshift-config-operator(61d90bf3-02df-48c8-b2ec-09a1653b0800)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800"
Feb 17 15:08:50.645809 master-0 kubenswrapper[8018]: I0217 15:08:50.645664 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/1.log"
Feb 17 15:08:50.647711 master-0 kubenswrapper[8018]: I0217 15:08:50.647663 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-tckph_0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/kube-storage-version-migrator-operator/2.log"
Feb 17 15:08:50.797266 master-0 kubenswrapper[8018]: I0217 15:08:50.797158 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:08:50.797266 master-0 kubenswrapper[8018]: I0217 15:08:50.797273 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:08:51.656685 master-0 kubenswrapper[8018]: I0217 15:08:51.656636 8018 scope.go:117] "RemoveContainer" containerID="b67b9db47d025278eedfe7f04574ddab8f98126aef0c22b6f402dd2396b510a8"
Feb 17 15:08:51.657508 master-0 kubenswrapper[8018]: E0217 15:08:51.656836 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-fcnqs_openshift-config-operator(61d90bf3-02df-48c8-b2ec-09a1653b0800)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800"
Feb 17 15:08:53.347532 master-0 kubenswrapper[8018]: I0217 15:08:53.347432 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7x72v"]
Feb 17 15:08:53.348397 master-0 kubenswrapper[8018]: I0217 15:08:53.347680 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7x72v" podUID="2ac9a5d3-569e-4434-839e-691eacbe13df" containerName="registry-server" containerID="cri-o://7397d4596fe2a2dae9588ce30d943b39077360c93f90cf8337de17c411fc2457" gracePeriod=2
Feb 17 15:08:53.542180 master-0 kubenswrapper[8018]: I0217 15:08:53.542105 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sft6r"]
Feb 17 15:08:53.542893 master-0 kubenswrapper[8018]: I0217 15:08:53.542439 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sft6r" podUID="e2994de0-1535-423a-90ce-019043cd4b9d" containerName="registry-server" containerID="cri-o://a465be9d7b9d972d86a858d1d9e92c970c7841161e35514de3f4d8707e158519" gracePeriod=2
Feb 17 15:08:53.670873 master-0 kubenswrapper[8018]: I0217 15:08:53.670717 8018 generic.go:334] "Generic (PLEG): container finished" podID="2ac9a5d3-569e-4434-839e-691eacbe13df" containerID="7397d4596fe2a2dae9588ce30d943b39077360c93f90cf8337de17c411fc2457" exitCode=0
Feb 17 15:08:53.670873 master-0 kubenswrapper[8018]: I0217 15:08:53.670789 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7x72v" event={"ID":"2ac9a5d3-569e-4434-839e-691eacbe13df","Type":"ContainerDied","Data":"7397d4596fe2a2dae9588ce30d943b39077360c93f90cf8337de17c411fc2457"}
Feb 17 15:08:53.710943 master-0 kubenswrapper[8018]: I0217 15:08:53.710876 8018 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:08:53.711519 master-0 kubenswrapper[8018]: I0217 15:08:53.711443 8018 scope.go:117] "RemoveContainer" containerID="47a0663eadceb8ac2b92b936021f5bf1e155eb2c91b070318a1766570bc56359"
Feb 17 15:08:53.711811 master-0 kubenswrapper[8018]: E0217 15:08:53.711760 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=etcd-operator pod=etcd-operator-67bf55ccdd-pjm6n_openshift-etcd-operator(f2546ffc-8d0a-4010-a3bd-9e69b6dbea40)\"" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" podUID="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40"
Feb 17 15:08:53.760695 master-0 kubenswrapper[8018]: I0217 15:08:53.760636 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7x72v"
Feb 17 15:08:53.832856 master-0 kubenswrapper[8018]: I0217 15:08:53.832747 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzzwn\" (UniqueName: \"kubernetes.io/projected/2ac9a5d3-569e-4434-839e-691eacbe13df-kube-api-access-nzzwn\") pod \"2ac9a5d3-569e-4434-839e-691eacbe13df\" (UID: \"2ac9a5d3-569e-4434-839e-691eacbe13df\") "
Feb 17 15:08:53.832856 master-0 kubenswrapper[8018]: I0217 15:08:53.832843 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac9a5d3-569e-4434-839e-691eacbe13df-utilities\") pod \"2ac9a5d3-569e-4434-839e-691eacbe13df\" (UID: \"2ac9a5d3-569e-4434-839e-691eacbe13df\") "
Feb 17 15:08:53.832856 master-0 kubenswrapper[8018]: I0217 15:08:53.832875 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac9a5d3-569e-4434-839e-691eacbe13df-catalog-content\") pod \"2ac9a5d3-569e-4434-839e-691eacbe13df\" (UID: \"2ac9a5d3-569e-4434-839e-691eacbe13df\") "
Feb 17 15:08:53.834042 master-0 kubenswrapper[8018]: I0217 15:08:53.833891 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac9a5d3-569e-4434-839e-691eacbe13df-utilities" (OuterVolumeSpecName: "utilities") pod "2ac9a5d3-569e-4434-839e-691eacbe13df" (UID: "2ac9a5d3-569e-4434-839e-691eacbe13df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:08:53.836699 master-0 kubenswrapper[8018]: I0217 15:08:53.836652 8018 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac9a5d3-569e-4434-839e-691eacbe13df-utilities\") on node \"master-0\" DevicePath \"\""
Feb 17 15:08:53.837280 master-0 kubenswrapper[8018]: I0217 15:08:53.837222 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac9a5d3-569e-4434-839e-691eacbe13df-kube-api-access-nzzwn" (OuterVolumeSpecName: "kube-api-access-nzzwn") pod "2ac9a5d3-569e-4434-839e-691eacbe13df" (UID: "2ac9a5d3-569e-4434-839e-691eacbe13df"). InnerVolumeSpecName "kube-api-access-nzzwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:08:53.937648 master-0 kubenswrapper[8018]: I0217 15:08:53.937491 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzzwn\" (UniqueName: \"kubernetes.io/projected/2ac9a5d3-569e-4434-839e-691eacbe13df-kube-api-access-nzzwn\") on node \"master-0\" DevicePath \"\""
Feb 17 15:08:53.965510 master-0 kubenswrapper[8018]: I0217 15:08:53.965437 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sft6r"
Feb 17 15:08:54.029583 master-0 kubenswrapper[8018]: I0217 15:08:54.029509 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac9a5d3-569e-4434-839e-691eacbe13df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ac9a5d3-569e-4434-839e-691eacbe13df" (UID: "2ac9a5d3-569e-4434-839e-691eacbe13df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:08:54.038625 master-0 kubenswrapper[8018]: I0217 15:08:54.038586 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xz5w\" (UniqueName: \"kubernetes.io/projected/e2994de0-1535-423a-90ce-019043cd4b9d-kube-api-access-4xz5w\") pod \"e2994de0-1535-423a-90ce-019043cd4b9d\" (UID: \"e2994de0-1535-423a-90ce-019043cd4b9d\") "
Feb 17 15:08:54.038693 master-0 kubenswrapper[8018]: I0217 15:08:54.038678 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2994de0-1535-423a-90ce-019043cd4b9d-utilities\") pod \"e2994de0-1535-423a-90ce-019043cd4b9d\" (UID: \"e2994de0-1535-423a-90ce-019043cd4b9d\") "
Feb 17 15:08:54.038798 master-0 kubenswrapper[8018]: I0217 15:08:54.038780 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2994de0-1535-423a-90ce-019043cd4b9d-catalog-content\") pod \"e2994de0-1535-423a-90ce-019043cd4b9d\" (UID: \"e2994de0-1535-423a-90ce-019043cd4b9d\") "
Feb 17 15:08:54.039049 master-0 kubenswrapper[8018]: I0217 15:08:54.039028 8018 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac9a5d3-569e-4434-839e-691eacbe13df-catalog-content\") on node \"master-0\" DevicePath \"\""
Feb 17 15:08:54.039791 master-0 kubenswrapper[8018]: I0217 15:08:54.039732 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2994de0-1535-423a-90ce-019043cd4b9d-utilities" (OuterVolumeSpecName: "utilities") pod "e2994de0-1535-423a-90ce-019043cd4b9d" (UID: "e2994de0-1535-423a-90ce-019043cd4b9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:08:54.042483 master-0 kubenswrapper[8018]: I0217 15:08:54.042429 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2994de0-1535-423a-90ce-019043cd4b9d-kube-api-access-4xz5w" (OuterVolumeSpecName: "kube-api-access-4xz5w") pod "e2994de0-1535-423a-90ce-019043cd4b9d" (UID: "e2994de0-1535-423a-90ce-019043cd4b9d"). InnerVolumeSpecName "kube-api-access-4xz5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:08:54.072151 master-0 kubenswrapper[8018]: I0217 15:08:54.072055 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2994de0-1535-423a-90ce-019043cd4b9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2994de0-1535-423a-90ce-019043cd4b9d" (UID: "e2994de0-1535-423a-90ce-019043cd4b9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:08:54.140068 master-0 kubenswrapper[8018]: I0217 15:08:54.139960 8018 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2994de0-1535-423a-90ce-019043cd4b9d-catalog-content\") on node \"master-0\" DevicePath \"\""
Feb 17 15:08:54.140068 master-0 kubenswrapper[8018]: I0217 15:08:54.140048 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xz5w\" (UniqueName: \"kubernetes.io/projected/e2994de0-1535-423a-90ce-019043cd4b9d-kube-api-access-4xz5w\") on node \"master-0\" DevicePath \"\""
Feb 17 15:08:54.140068 master-0 kubenswrapper[8018]: I0217 15:08:54.140080 8018 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2994de0-1535-423a-90ce-019043cd4b9d-utilities\") on node \"master-0\" DevicePath \"\""
Feb 17 15:08:54.678017 master-0 kubenswrapper[8018]: I0217 15:08:54.677504 8018 generic.go:334] "Generic (PLEG): container finished" podID="e2994de0-1535-423a-90ce-019043cd4b9d" containerID="a465be9d7b9d972d86a858d1d9e92c970c7841161e35514de3f4d8707e158519" exitCode=0
Feb 17 15:08:54.678017 master-0 kubenswrapper[8018]: I0217 15:08:54.677592 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sft6r"
Feb 17 15:08:54.678017 master-0 kubenswrapper[8018]: I0217 15:08:54.677597 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sft6r" event={"ID":"e2994de0-1535-423a-90ce-019043cd4b9d","Type":"ContainerDied","Data":"a465be9d7b9d972d86a858d1d9e92c970c7841161e35514de3f4d8707e158519"}
Feb 17 15:08:54.678017 master-0 kubenswrapper[8018]: I0217 15:08:54.677706 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sft6r" event={"ID":"e2994de0-1535-423a-90ce-019043cd4b9d","Type":"ContainerDied","Data":"b1c523b9713fa7186f27a3debf3937c0f49ce44756f46b9804b47f1c69239b70"}
Feb 17 15:08:54.678017 master-0 kubenswrapper[8018]: I0217 15:08:54.677740 8018 scope.go:117] "RemoveContainer" containerID="a465be9d7b9d972d86a858d1d9e92c970c7841161e35514de3f4d8707e158519"
Feb 17 15:08:54.679390 master-0 kubenswrapper[8018]: I0217 15:08:54.679362 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-t7n5b_33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/package-server-manager/0.log"
Feb 17 15:08:54.679963 master-0 kubenswrapper[8018]: I0217 15:08:54.679887 8018 generic.go:334] "Generic (PLEG): container finished" podID="33e819b0-5a3f-4c2d-9dc7-8b0231804cdb" containerID="76d6fd0b45765a0b596669cf9b7b85cd807449a57c73b14e34163f91a2995908" exitCode=1
Feb 17 15:08:54.680000 master-0 kubenswrapper[8018]: I0217 15:08:54.679959 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" event={"ID":"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb","Type":"ContainerDied","Data":"76d6fd0b45765a0b596669cf9b7b85cd807449a57c73b14e34163f91a2995908"}
Feb 17 15:08:54.680603 master-0 kubenswrapper[8018]: I0217 15:08:54.680579 8018 scope.go:117] "RemoveContainer" containerID="76d6fd0b45765a0b596669cf9b7b85cd807449a57c73b14e34163f91a2995908"
Feb 17 15:08:54.682870 master-0 kubenswrapper[8018]: I0217 15:08:54.682419 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7x72v" event={"ID":"2ac9a5d3-569e-4434-839e-691eacbe13df","Type":"ContainerDied","Data":"ea57ef236d3ee5f1de956103af094e831cfbfe52180fca3d3c025be0d3754a52"}
Feb 17 15:08:54.682870 master-0 kubenswrapper[8018]: I0217 15:08:54.682529 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7x72v"
Feb 17 15:08:54.694910 master-0 kubenswrapper[8018]: I0217 15:08:54.693425 8018 generic.go:334] "Generic (PLEG): container finished" podID="187af679-a062-4f41-81f2-33545f76febf" containerID="8058b275e263538c079da0d8c430b578e1243d25628fc693b056f6c40e1434b1" exitCode=0
Feb 17 15:08:54.694910 master-0 kubenswrapper[8018]: I0217 15:08:54.693535 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" event={"ID":"187af679-a062-4f41-81f2-33545f76febf","Type":"ContainerDied","Data":"8058b275e263538c079da0d8c430b578e1243d25628fc693b056f6c40e1434b1"}
Feb 17 15:08:54.694910 master-0 kubenswrapper[8018]: I0217 15:08:54.694005 8018 scope.go:117] "RemoveContainer" containerID="8058b275e263538c079da0d8c430b578e1243d25628fc693b056f6c40e1434b1"
Feb 17 15:08:54.702663 master-0 kubenswrapper[8018]: I0217 15:08:54.700324 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-ff6c9b66-k8xp8_071566ae-a9ae-4aa9-9dc3-38602363be72/cluster-node-tuning-operator/0.log"
Feb 17 15:08:54.702663 master-0 kubenswrapper[8018]: I0217 15:08:54.700371 8018 generic.go:334] "Generic (PLEG): container finished" podID="071566ae-a9ae-4aa9-9dc3-38602363be72" containerID="8a4a98b1318c509e5f82636085aeb117a7034201fd28d56b542c5883530a6144" exitCode=1
Feb 17 15:08:54.702663 master-0 kubenswrapper[8018]: I0217 15:08:54.700423 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" event={"ID":"071566ae-a9ae-4aa9-9dc3-38602363be72","Type":"ContainerDied","Data":"8a4a98b1318c509e5f82636085aeb117a7034201fd28d56b542c5883530a6144"}
Feb 17 15:08:54.702663 master-0 kubenswrapper[8018]: I0217 15:08:54.700778 8018 scope.go:117] "RemoveContainer" containerID="8a4a98b1318c509e5f82636085aeb117a7034201fd28d56b542c5883530a6144"
Feb 17 15:08:54.721612 master-0 kubenswrapper[8018]: I0217 15:08:54.721553 8018 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="8e4f485693ac9a91f7bc7a84cdde902f639454acfd53f8608408575f632d2ecf" exitCode=0
Feb 17 15:08:54.731209 master-0 kubenswrapper[8018]: I0217 15:08:54.731173 8018 scope.go:117] "RemoveContainer" containerID="10ec8802ea23c3bcc50abbeb018409267bdbe7623d5d55b117ab06c938fbf897"
Feb 17 15:08:54.747640 master-0 kubenswrapper[8018]: I0217 15:08:54.747606 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7x72v"]
Feb 17 15:08:54.752754 master-0 kubenswrapper[8018]: I0217 15:08:54.752725 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7x72v"]
Feb 17 15:08:54.783337 master-0 kubenswrapper[8018]: I0217 15:08:54.783294 8018 scope.go:117] "RemoveContainer" containerID="e22c9e4ff50fff7f30ce4313d0cf122ce03bb31165ed1b55aded6fbdead13ac7"
Feb 17 15:08:54.804009 master-0 kubenswrapper[8018]: I0217 15:08:54.803948 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sft6r"]
Feb 17 15:08:54.804009 master-0 kubenswrapper[8018]: I0217 15:08:54.804009 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sft6r"]
Feb 17 15:08:54.819328 master-0 kubenswrapper[8018]: I0217 15:08:54.819291 8018 scope.go:117] "RemoveContainer" containerID="a465be9d7b9d972d86a858d1d9e92c970c7841161e35514de3f4d8707e158519"
Feb 17 15:08:54.819823 master-0 kubenswrapper[8018]: E0217 15:08:54.819782 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a465be9d7b9d972d86a858d1d9e92c970c7841161e35514de3f4d8707e158519\": container with ID starting with a465be9d7b9d972d86a858d1d9e92c970c7841161e35514de3f4d8707e158519 not found: ID does not exist" containerID="a465be9d7b9d972d86a858d1d9e92c970c7841161e35514de3f4d8707e158519"
Feb 17 15:08:54.819902 master-0 kubenswrapper[8018]: I0217 15:08:54.819818 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a465be9d7b9d972d86a858d1d9e92c970c7841161e35514de3f4d8707e158519"} err="failed to get container status \"a465be9d7b9d972d86a858d1d9e92c970c7841161e35514de3f4d8707e158519\": rpc error: code = NotFound desc = could not find container \"a465be9d7b9d972d86a858d1d9e92c970c7841161e35514de3f4d8707e158519\": container with ID starting with a465be9d7b9d972d86a858d1d9e92c970c7841161e35514de3f4d8707e158519 not found: ID does not exist"
Feb 17 15:08:54.819902 master-0 kubenswrapper[8018]: I0217 15:08:54.819840 8018 scope.go:117] "RemoveContainer" containerID="10ec8802ea23c3bcc50abbeb018409267bdbe7623d5d55b117ab06c938fbf897"
Feb 17 15:08:54.820212 master-0 kubenswrapper[8018]: E0217 15:08:54.820144 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10ec8802ea23c3bcc50abbeb018409267bdbe7623d5d55b117ab06c938fbf897\": container with ID starting with 10ec8802ea23c3bcc50abbeb018409267bdbe7623d5d55b117ab06c938fbf897 not found: ID does not exist" containerID="10ec8802ea23c3bcc50abbeb018409267bdbe7623d5d55b117ab06c938fbf897"
Feb 17 15:08:54.820212 master-0 kubenswrapper[8018]: I0217 15:08:54.820167 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10ec8802ea23c3bcc50abbeb018409267bdbe7623d5d55b117ab06c938fbf897"} err="failed to get container status \"10ec8802ea23c3bcc50abbeb018409267bdbe7623d5d55b117ab06c938fbf897\": rpc error: code = NotFound desc = could not find container \"10ec8802ea23c3bcc50abbeb018409267bdbe7623d5d55b117ab06c938fbf897\": container with ID starting with 10ec8802ea23c3bcc50abbeb018409267bdbe7623d5d55b117ab06c938fbf897 not found: ID does not exist"
Feb 17 15:08:54.820212 master-0 kubenswrapper[8018]: I0217 15:08:54.820182 8018 scope.go:117] "RemoveContainer" containerID="e22c9e4ff50fff7f30ce4313d0cf122ce03bb31165ed1b55aded6fbdead13ac7"
Feb 17 15:08:54.820839 master-0 kubenswrapper[8018]: E0217 15:08:54.820786 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e22c9e4ff50fff7f30ce4313d0cf122ce03bb31165ed1b55aded6fbdead13ac7\": container with ID starting with e22c9e4ff50fff7f30ce4313d0cf122ce03bb31165ed1b55aded6fbdead13ac7 not found: ID does not exist" containerID="e22c9e4ff50fff7f30ce4313d0cf122ce03bb31165ed1b55aded6fbdead13ac7"
Feb 17 15:08:54.820925 master-0 kubenswrapper[8018]: I0217 15:08:54.820841 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e22c9e4ff50fff7f30ce4313d0cf122ce03bb31165ed1b55aded6fbdead13ac7"} err="failed to get container status \"e22c9e4ff50fff7f30ce4313d0cf122ce03bb31165ed1b55aded6fbdead13ac7\": rpc error: code = NotFound desc = could not find container \"e22c9e4ff50fff7f30ce4313d0cf122ce03bb31165ed1b55aded6fbdead13ac7\": container with ID starting with e22c9e4ff50fff7f30ce4313d0cf122ce03bb31165ed1b55aded6fbdead13ac7 not found: ID does not exist"
Feb 17 15:08:54.820925 master-0 kubenswrapper[8018]: I0217 15:08:54.820876 8018 scope.go:117] "RemoveContainer" containerID="7397d4596fe2a2dae9588ce30d943b39077360c93f90cf8337de17c411fc2457"
Feb 17 15:08:54.847907 master-0 kubenswrapper[8018]: I0217 15:08:54.847877 8018 scope.go:117] "RemoveContainer" containerID="cd6bbd0ec3b9fb226773bb0d8576d75c0a13a8da287e310034b230507b5f7653"
Feb 17 15:08:54.870543 master-0 kubenswrapper[8018]: I0217 15:08:54.870506 8018 scope.go:117] "RemoveContainer" containerID="2b94573c328e435e16466b38efd1dd63232f75cf11bf6043b00285328ed96b63"
Feb 17 15:08:54.987665 master-0 kubenswrapper[8018]: I0217 15:08:54.987636 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:08:55.050692 master-0 kubenswrapper[8018]: I0217 15:08:55.050645 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") "
Feb 17 15:08:55.050875 master-0 kubenswrapper[8018]: I0217 15:08:55.050712 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") "
Feb 17 15:08:55.050875 master-0 kubenswrapper[8018]: I0217 15:08:55.050740 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") "
Feb 17 15:08:55.050875 master-0 kubenswrapper[8018]: I0217 15:08:55.050791 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") "
Feb 17 15:08:55.050875 master-0 kubenswrapper[8018]: I0217 15:08:55.050835 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") "
Feb 17 15:08:55.050875 master-0 kubenswrapper[8018]: I0217 15:08:55.050830 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs" (OuterVolumeSpecName: "logs") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:08:55.051022 master-0 kubenswrapper[8018]: I0217 15:08:55.050904 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets" (OuterVolumeSpecName: "secrets") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:08:55.051022 master-0 kubenswrapper[8018]: I0217 15:08:55.050934 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:08:55.051022 master-0 kubenswrapper[8018]: I0217 15:08:55.050959 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config" (OuterVolumeSpecName: "config") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:08:55.051022 master-0 kubenswrapper[8018]: I0217 15:08:55.051011 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:08:55.051330 master-0 kubenswrapper[8018]: I0217 15:08:55.051303 8018 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:08:55.051375 master-0 kubenswrapper[8018]: I0217 15:08:55.051330 8018 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Feb 17 15:08:55.051375 master-0 kubenswrapper[8018]: I0217 15:08:55.051344 8018 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") on node \"master-0\" DevicePath \"\""
Feb 17 15:08:55.051375 master-0 kubenswrapper[8018]: I0217 15:08:55.051355 8018 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:08:55.051375 master-0 kubenswrapper[8018]: I0217 15:08:55.051368 8018 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Feb 17 15:08:55.448583 master-0 kubenswrapper[8018]: I0217 15:08:55.448502 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ac9a5d3-569e-4434-839e-691eacbe13df" path="/var/lib/kubelet/pods/2ac9a5d3-569e-4434-839e-691eacbe13df/volumes"
Feb 17 15:08:55.449623 master-0 kubenswrapper[8018]: I0217 15:08:55.449574 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" path="/var/lib/kubelet/pods/80420f2e7c3cdda71f7d0d6ccbe6f9f3/volumes"
Feb 17 15:08:55.450755 master-0 kubenswrapper[8018]: I0217 15:08:55.450708 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2994de0-1535-423a-90ce-019043cd4b9d" path="/var/lib/kubelet/pods/e2994de0-1535-423a-90ce-019043cd4b9d/volumes"
Feb 17 15:08:55.452971 master-0 kubenswrapper[8018]: I0217 15:08:55.452931 8018 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Feb 17 15:08:55.470249 master-0 kubenswrapper[8018]: I0217 15:08:55.470144 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 17 15:08:55.470249 master-0 kubenswrapper[8018]: I0217 15:08:55.470192 8018 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="b8796d4d-cc50-41f1-b84d-9e4a6c257509"
Feb 17 15:08:55.475134 master-0 kubenswrapper[8018]: I0217 15:08:55.475085 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 17 15:08:55.475329 master-0 kubenswrapper[8018]: I0217 15:08:55.475134 8018 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="b8796d4d-cc50-41f1-b84d-9e4a6c257509"
Feb 17 15:08:55.730353 master-0 kubenswrapper[8018]: I0217 15:08:55.730122 8018 scope.go:117] "RemoveContainer" containerID="b58581da3fb50e131d29602d71fb1722c3f379f23a65f6b4d93e4a6e939f2826"
Feb 17 15:08:55.730353 master-0 kubenswrapper[8018]: I0217 15:08:55.730181 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 17 15:08:55.736741 master-0 kubenswrapper[8018]: I0217 15:08:55.736667 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-t7n5b_33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/package-server-manager/0.log"
Feb 17 15:08:55.740425 master-0 kubenswrapper[8018]: I0217 15:08:55.740327 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" event={"ID":"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb","Type":"ContainerStarted","Data":"b86a492f597b80e76da870edbd5aa60b116fd208f8fcff47303644a8e0039f9b"}
Feb 17 15:08:55.741019 master-0 kubenswrapper[8018]: I0217 15:08:55.740906 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:08:55.747538 master-0 kubenswrapper[8018]: I0217 15:08:55.746894 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" event={"ID":"187af679-a062-4f41-81f2-33545f76febf","Type":"ContainerStarted","Data":"bfa4241e9cbb9bb3dc9c0b9ecf26410125b91a6e764bdf4080c3457126bf7fdc"}
Feb 17 15:08:55.751871 master-0 kubenswrapper[8018]: I0217 15:08:55.751819 8018 scope.go:117] "RemoveContainer" containerID="8e4f485693ac9a91f7bc7a84cdde902f639454acfd53f8608408575f632d2ecf"
Feb 17 15:08:55.752692 master-0 kubenswrapper[8018]: I0217 15:08:55.752641 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-ff6c9b66-k8xp8_071566ae-a9ae-4aa9-9dc3-38602363be72/cluster-node-tuning-operator/0.log"
Feb 17 15:08:55.752803 master-0 kubenswrapper[8018]: I0217 15:08:55.752749 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" event={"ID":"071566ae-a9ae-4aa9-9dc3-38602363be72","Type":"ContainerStarted","Data":"4c47c374b75591c1874c057cb8609aad6e1b60685643b76979aadb8e2ca53712"}
Feb 17 15:08:56.181993 master-0 kubenswrapper[8018]: I0217 15:08:56.181897 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:56.191286 master-0 kubenswrapper[8018]: I0217 15:08:56.191191 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:08:56.672362 master-0 kubenswrapper[8018]: I0217 15:08:56.672277 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwpz\" (UniqueName: \"kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:08:56.672609 master-0 kubenswrapper[8018]: I0217 15:08:56.672422 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2dv\" (UniqueName: \"kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:08:56.706422 master-0 kubenswrapper[8018]: I0217 15:08:56.706358 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr2dv\" (UniqueName: \"kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:08:56.712032 master-0 kubenswrapper[8018]: I0217 15:08:56.712001 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gwpz\" (UniqueName: \"kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:08:56.809582 master-0 kubenswrapper[8018]: I0217 15:08:56.809529 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-dz667"
Feb 17 15:08:56.810501 master-0 kubenswrapper[8018]: I0217 15:08:56.810409 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-c8lzf"
Feb 17 15:08:56.814272 master-0 kubenswrapper[8018]: I0217 15:08:56.814242 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:08:56.814740 master-0 kubenswrapper[8018]: I0217 15:08:56.814661 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:08:57.260067 master-0 kubenswrapper[8018]: I0217 15:08:57.259993 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t8vtc"]
Feb 17 15:08:57.265180 master-0 kubenswrapper[8018]: W0217 15:08:57.265095 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc33efa80_fbeb_438a_86e3_d22d7c12d3e9.slice/crio-46b63befb37c207e59dcc8df42c0e9e3530c0f2f24f79765bda06ad35b9b950d WatchSource:0}: Error finding container 46b63befb37c207e59dcc8df42c0e9e3530c0f2f24f79765bda06ad35b9b950d: Status 404 returned error can't find the container with id 46b63befb37c207e59dcc8df42c0e9e3530c0f2f24f79765bda06ad35b9b950d
Feb 17 15:08:57.319938 master-0 kubenswrapper[8018]: I0217 15:08:57.319871 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2lg56"]
Feb 17 15:08:57.338950 master-0 kubenswrapper[8018]: W0217 15:08:57.338893 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc216ba1_144a_4cc8_93db_85ab558a166a.slice/crio-31de4b8284b14c5b1bbb2ee4e5ce05c9d7231167ee625f5a71f3b94980671845 WatchSource:0}: Error finding container 31de4b8284b14c5b1bbb2ee4e5ce05c9d7231167ee625f5a71f3b94980671845: Status 404 returned error can't find the container with id 31de4b8284b14c5b1bbb2ee4e5ce05c9d7231167ee625f5a71f3b94980671845
Feb 17 15:08:57.446768 master-0 kubenswrapper[8018]: I0217 15:08:57.446705 8018 scope.go:117] "RemoveContainer" containerID="db0dcecfe2a042268864f0d7f4d56cbdc089e71bde33d4f68886ce775e3eeb52"
Feb 17 15:08:57.770023 master-0 kubenswrapper[8018]: I0217 15:08:57.769958 8018 generic.go:334] "Generic (PLEG): container finished" podID="fc216ba1-144a-4cc8-93db-85ab558a166a" containerID="20e73b882d712a2eff1c90da1b92bbca3203e89b488f6982191f5a6e45f5694f"
exitCode=0 Feb 17 15:08:57.770023 master-0 kubenswrapper[8018]: I0217 15:08:57.770032 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lg56" event={"ID":"fc216ba1-144a-4cc8-93db-85ab558a166a","Type":"ContainerDied","Data":"20e73b882d712a2eff1c90da1b92bbca3203e89b488f6982191f5a6e45f5694f"} Feb 17 15:08:57.770023 master-0 kubenswrapper[8018]: I0217 15:08:57.770059 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lg56" event={"ID":"fc216ba1-144a-4cc8-93db-85ab558a166a","Type":"ContainerStarted","Data":"31de4b8284b14c5b1bbb2ee4e5ce05c9d7231167ee625f5a71f3b94980671845"} Feb 17 15:08:57.772067 master-0 kubenswrapper[8018]: I0217 15:08:57.771930 8018 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 15:08:57.772373 master-0 kubenswrapper[8018]: I0217 15:08:57.772317 8018 generic.go:334] "Generic (PLEG): container finished" podID="c33efa80-fbeb-438a-86e3-d22d7c12d3e9" containerID="699c72ab46ee0eb32b4612336334e94bd1b80ff4aefacb6b8eb9094947e725a5" exitCode=0 Feb 17 15:08:57.772373 master-0 kubenswrapper[8018]: I0217 15:08:57.772374 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8vtc" event={"ID":"c33efa80-fbeb-438a-86e3-d22d7c12d3e9","Type":"ContainerDied","Data":"699c72ab46ee0eb32b4612336334e94bd1b80ff4aefacb6b8eb9094947e725a5"} Feb 17 15:08:57.772665 master-0 kubenswrapper[8018]: I0217 15:08:57.772395 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8vtc" event={"ID":"c33efa80-fbeb-438a-86e3-d22d7c12d3e9","Type":"ContainerStarted","Data":"46b63befb37c207e59dcc8df42c0e9e3530c0f2f24f79765bda06ad35b9b950d"} Feb 17 15:08:57.777225 master-0 kubenswrapper[8018]: I0217 15:08:57.777196 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" 
event={"ID":"6c734c89-515e-4ff0-82d1-831ddaf0b99e","Type":"ContainerStarted","Data":"590e8fe24ffb416ddbf90918b458930e7fec94c62687bb9e8c21a6053d7a588b"} Feb 17 15:08:58.441409 master-0 kubenswrapper[8018]: I0217 15:08:58.439774 8018 scope.go:117] "RemoveContainer" containerID="39e5d190c1de962c17b93f9f892d9c95fb301c2b359b235051f10e8c679da55c" Feb 17 15:08:58.790003 master-0 kubenswrapper[8018]: I0217 15:08:58.789808 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lg56" event={"ID":"fc216ba1-144a-4cc8-93db-85ab558a166a","Type":"ContainerStarted","Data":"dff43540c3d3c78b976c453950a947c70e5ecf684af153fa53013b3b0706b588"} Feb 17 15:08:58.792791 master-0 kubenswrapper[8018]: I0217 15:08:58.792553 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8vtc" event={"ID":"c33efa80-fbeb-438a-86e3-d22d7c12d3e9","Type":"ContainerStarted","Data":"b4983b136a273fbed3a16f2bc55aeaf26026f904d63f46d8bea39f01aefc2517"} Feb 17 15:08:58.795484 master-0 kubenswrapper[8018]: I0217 15:08:58.795299 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/2.log" Feb 17 15:08:58.795484 master-0 kubenswrapper[8018]: I0217 15:08:58.795365 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerStarted","Data":"ef80e89f464f2fddabc8382f1aaea540a66323e02f01f8d399ba62bafcf783cc"} Feb 17 15:08:59.061027 master-0 kubenswrapper[8018]: I0217 15:08:59.060862 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" Feb 17 15:08:59.802351 master-0 kubenswrapper[8018]: I0217 15:08:59.802288 8018 generic.go:334] "Generic 
(PLEG): container finished" podID="fc216ba1-144a-4cc8-93db-85ab558a166a" containerID="dff43540c3d3c78b976c453950a947c70e5ecf684af153fa53013b3b0706b588" exitCode=0 Feb 17 15:08:59.802351 master-0 kubenswrapper[8018]: I0217 15:08:59.802335 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lg56" event={"ID":"fc216ba1-144a-4cc8-93db-85ab558a166a","Type":"ContainerDied","Data":"dff43540c3d3c78b976c453950a947c70e5ecf684af153fa53013b3b0706b588"} Feb 17 15:08:59.804826 master-0 kubenswrapper[8018]: I0217 15:08:59.804793 8018 generic.go:334] "Generic (PLEG): container finished" podID="c33efa80-fbeb-438a-86e3-d22d7c12d3e9" containerID="b4983b136a273fbed3a16f2bc55aeaf26026f904d63f46d8bea39f01aefc2517" exitCode=0 Feb 17 15:08:59.804953 master-0 kubenswrapper[8018]: I0217 15:08:59.804910 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8vtc" event={"ID":"c33efa80-fbeb-438a-86e3-d22d7c12d3e9","Type":"ContainerDied","Data":"b4983b136a273fbed3a16f2bc55aeaf26026f904d63f46d8bea39f01aefc2517"} Feb 17 15:09:00.439658 master-0 kubenswrapper[8018]: I0217 15:09:00.439548 8018 scope.go:117] "RemoveContainer" containerID="c37b7a8b6b89d90619e0434b3f19d1c552551ee3029bb3ef42107c3c450c9cb1" Feb 17 15:09:00.439940 master-0 kubenswrapper[8018]: E0217 15:09:00.439869 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-54984b6678-p5mdv_openshift-kube-apiserver-operator(e259b5a1-837b-4cde-85f7-cd5781af08bd)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" podUID="e259b5a1-837b-4cde-85f7-cd5781af08bd" Feb 17 15:09:00.797933 master-0 kubenswrapper[8018]: E0217 15:09:00.797736 8018 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete 
within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-marketplace-sft6r.189510f289268823 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-sft6r,UID:e2994de0-1535-423a-90ce-019043cd4b9d,APIVersion:v1,ResourceVersion:7219,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 38.158s (38.158s including waiting). Image size: 1201887930 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:04:22.309292067 +0000 UTC m=+95.061635157,LastTimestamp:2026-02-17 15:04:22.309292067 +0000 UTC m=+95.061635157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:09:00.814734 master-0 kubenswrapper[8018]: I0217 15:09:00.814681 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lg56" event={"ID":"fc216ba1-144a-4cc8-93db-85ab558a166a","Type":"ContainerStarted","Data":"04c831ee22eaf6173fb39cdecba525a13c79fa94b18b42ecdcadc55a1f02569a"} Feb 17 15:09:00.818786 master-0 kubenswrapper[8018]: I0217 15:09:00.818729 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8vtc" event={"ID":"c33efa80-fbeb-438a-86e3-d22d7c12d3e9","Type":"ContainerStarted","Data":"ff92864b65ac7c643232141d0d3b5031b5534c63fd34043fa5aec015f1836925"} Feb 17 15:09:00.847822 master-0 kubenswrapper[8018]: I0217 15:09:00.847715 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2lg56" podStartSLOduration=304.426401399 podStartE2EDuration="5m6.847686806s" podCreationTimestamp="2026-02-17 15:03:54 +0000 UTC" firstStartedPulling="2026-02-17 15:08:57.771878319 
+0000 UTC m=+370.524221389" lastFinishedPulling="2026-02-17 15:09:00.193163736 +0000 UTC m=+372.945506796" observedRunningTime="2026-02-17 15:09:00.842910869 +0000 UTC m=+373.595253969" watchObservedRunningTime="2026-02-17 15:09:00.847686806 +0000 UTC m=+373.600029896" Feb 17 15:09:00.880630 master-0 kubenswrapper[8018]: I0217 15:09:00.880500 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t8vtc" podStartSLOduration=304.434353803 podStartE2EDuration="5m6.880441899s" podCreationTimestamp="2026-02-17 15:03:54 +0000 UTC" firstStartedPulling="2026-02-17 15:08:57.773737215 +0000 UTC m=+370.526080275" lastFinishedPulling="2026-02-17 15:09:00.219825301 +0000 UTC m=+372.972168371" observedRunningTime="2026-02-17 15:09:00.875100149 +0000 UTC m=+373.627443249" watchObservedRunningTime="2026-02-17 15:09:00.880441899 +0000 UTC m=+373.632784989" Feb 17 15:09:01.440617 master-0 kubenswrapper[8018]: I0217 15:09:01.440530 8018 scope.go:117] "RemoveContainer" containerID="81aaf4a8e92ad8167ce2d8a4500268568ecd4d12b11466d397ae290644672b32" Feb 17 15:09:01.440943 master-0 kubenswrapper[8018]: I0217 15:09:01.440703 8018 scope.go:117] "RemoveContainer" containerID="477671fff24fa6c32a024908ab3cc22818f79df79458186eb17cd6a91eb44b4f" Feb 17 15:09:01.440943 master-0 kubenswrapper[8018]: E0217 15:09:01.440858 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-5f5f84757d-dsfkk_openshift-controller-manager-operator(c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" podUID="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" Feb 17 15:09:01.825777 master-0 kubenswrapper[8018]: I0217 15:09:01.825701 8018 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-wcpf8_2b167b7b-2280-4c82-ac78-71c57aebe503/kube-scheduler-operator-container/1.log" Feb 17 15:09:01.826395 master-0 kubenswrapper[8018]: I0217 15:09:01.825816 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" event={"ID":"2b167b7b-2280-4c82-ac78-71c57aebe503","Type":"ContainerStarted","Data":"dfe6ffb450b0904261ab46cf367ace40b648e6342b7e1df240b49e249ecafeaa"} Feb 17 15:09:02.440608 master-0 kubenswrapper[8018]: I0217 15:09:02.440543 8018 scope.go:117] "RemoveContainer" containerID="6d9a92eb2e644f956d98f7c0c8da65baf4f27d9eba13c8c64b77e173d1e323c4" Feb 17 15:09:02.842040 master-0 kubenswrapper[8018]: I0217 15:09:02.841978 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-l24cg_4fd2c79d-1e10-4f09-8a33-c66598abc99a/network-operator/1.log" Feb 17 15:09:02.842878 master-0 kubenswrapper[8018]: I0217 15:09:02.842070 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" event={"ID":"4fd2c79d-1e10-4f09-8a33-c66598abc99a","Type":"ContainerStarted","Data":"3d42744bc55ffdd0ef5a58be1827ed2cd005681379705cfa9b05d7d0639649ee"} Feb 17 15:09:03.441549 master-0 kubenswrapper[8018]: I0217 15:09:03.441505 8018 scope.go:117] "RemoveContainer" containerID="398a6ec9ab16d8c9b51a94b166012be81bd6e66e2c357cd186d8526d7f9bb69c" Feb 17 15:09:03.443063 master-0 kubenswrapper[8018]: I0217 15:09:03.442421 8018 scope.go:117] "RemoveContainer" containerID="f39a2941da8acf9c022d9ee8fee7bd53fe9f2ec2201845d6f776f31736d87bf2" Feb 17 15:09:03.443430 master-0 kubenswrapper[8018]: E0217 15:09:03.443360 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: 
\"back-off 20s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-cd5474998-tckph_openshift-kube-storage-version-migrator-operator(0c58265d-32fb-4cf0-97d8-6c9a5d37fad9)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" podUID="0c58265d-32fb-4cf0-97d8-6c9a5d37fad9" Feb 17 15:09:03.848645 master-0 kubenswrapper[8018]: I0217 15:09:03.848519 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/1.log" Feb 17 15:09:03.848645 master-0 kubenswrapper[8018]: I0217 15:09:03.848595 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" event={"ID":"af61bda0-c7b4-489d-a671-eaa5299942fe","Type":"ContainerStarted","Data":"1cfd0ad488c82b15998a7888c979dda06fa4a01761beb9e5d6d35b295908c57a"} Feb 17 15:09:04.440926 master-0 kubenswrapper[8018]: I0217 15:09:04.440669 8018 scope.go:117] "RemoveContainer" containerID="13fd27ae7e51b2ce5e96bcf2c8231506a7b48822721ae68c680d8a96bd1e5103" Feb 17 15:09:04.441318 master-0 kubenswrapper[8018]: E0217 15:09:04.441264 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-78ff47c7c5-xvzq9_openshift-kube-controller-manager-operator(553d4535-9985-47e2-83ee-8fcfb6035e7b)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" podUID="553d4535-9985-47e2-83ee-8fcfb6035e7b" Feb 17 15:09:05.439605 master-0 kubenswrapper[8018]: I0217 15:09:05.439563 8018 scope.go:117] "RemoveContainer" 
containerID="29887de882fd8a3a22e87156cef67aeb00ac494c3b04550882c5426a5a9c25ec" Feb 17 15:09:05.440430 master-0 kubenswrapper[8018]: E0217 15:09:05.439813 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-operator pod=service-ca-operator-5dc4688546-sg75p_openshift-service-ca-operator(65d9f008-7777-48fe-85fe-9d54a7bbcea9)\"" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" podUID="65d9f008-7777-48fe-85fe-9d54a7bbcea9" Feb 17 15:09:06.814920 master-0 kubenswrapper[8018]: I0217 15:09:06.814850 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:09:06.815631 master-0 kubenswrapper[8018]: I0217 15:09:06.815183 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:09:06.815631 master-0 kubenswrapper[8018]: I0217 15:09:06.815211 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:09:06.816407 master-0 kubenswrapper[8018]: I0217 15:09:06.816293 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:09:06.862907 master-0 kubenswrapper[8018]: I0217 15:09:06.862844 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:09:06.880660 master-0 kubenswrapper[8018]: I0217 15:09:06.880522 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:09:06.934416 master-0 kubenswrapper[8018]: I0217 15:09:06.934123 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:09:07.444385 master-0 kubenswrapper[8018]: I0217 15:09:07.444310 8018 scope.go:117] "RemoveContainer" containerID="b67b9db47d025278eedfe7f04574ddab8f98126aef0c22b6f402dd2396b510a8" Feb 17 15:09:07.872852 master-0 kubenswrapper[8018]: I0217 15:09:07.872767 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerStarted","Data":"532e13d86043cf03e79537b7223ceabdbcdf6100bfe944f35eb6876ce0a808a2"} Feb 17 15:09:07.915262 master-0 kubenswrapper[8018]: I0217 15:09:07.915215 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:09:08.439668 master-0 kubenswrapper[8018]: I0217 15:09:08.439602 8018 scope.go:117] "RemoveContainer" containerID="47a0663eadceb8ac2b92b936021f5bf1e155eb2c91b070318a1766570bc56359" Feb 17 15:09:08.439920 master-0 kubenswrapper[8018]: E0217 15:09:08.439827 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=etcd-operator pod=etcd-operator-67bf55ccdd-pjm6n_openshift-etcd-operator(f2546ffc-8d0a-4010-a3bd-9e69b6dbea40)\"" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" podUID="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" Feb 17 15:09:08.798036 master-0 kubenswrapper[8018]: I0217 15:09:08.797888 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:09:09.751010 master-0 kubenswrapper[8018]: I0217 15:09:09.750924 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 17 15:09:11.803371 master-0 kubenswrapper[8018]: I0217 15:09:11.803301 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:09:11.838388 master-0 kubenswrapper[8018]: I0217 15:09:11.838291 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=2.838267424 podStartE2EDuration="2.838267424s" podCreationTimestamp="2026-02-17 15:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:09:11.83404522 +0000 UTC m=+384.586388280" watchObservedRunningTime="2026-02-17 15:09:11.838267424 +0000 UTC m=+384.590610484" Feb 17 15:09:12.097295 master-0 kubenswrapper[8018]: I0217 15:09:12.097106 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7dzgz"] Feb 17 15:09:12.097570 master-0 kubenswrapper[8018]: E0217 15:09:12.097485 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac9a5d3-569e-4434-839e-691eacbe13df" containerName="extract-utilities" Feb 17 15:09:12.097570 master-0 kubenswrapper[8018]: I0217 15:09:12.097509 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac9a5d3-569e-4434-839e-691eacbe13df" containerName="extract-utilities" Feb 17 15:09:12.097570 master-0 kubenswrapper[8018]: E0217 15:09:12.097537 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5655115-c223-42ed-a93d-9d609e55c901" containerName="installer" Feb 17 15:09:12.097570 master-0 kubenswrapper[8018]: I0217 15:09:12.097551 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5655115-c223-42ed-a93d-9d609e55c901" containerName="installer" Feb 17 15:09:12.097967 master-0 kubenswrapper[8018]: E0217 15:09:12.097575 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2994de0-1535-423a-90ce-019043cd4b9d" containerName="extract-content" Feb 17 15:09:12.097967 master-0 kubenswrapper[8018]: I0217 15:09:12.097592 8018 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="e2994de0-1535-423a-90ce-019043cd4b9d" containerName="extract-content" Feb 17 15:09:12.097967 master-0 kubenswrapper[8018]: E0217 15:09:12.097609 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2994de0-1535-423a-90ce-019043cd4b9d" containerName="registry-server" Feb 17 15:09:12.097967 master-0 kubenswrapper[8018]: I0217 15:09:12.097622 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2994de0-1535-423a-90ce-019043cd4b9d" containerName="registry-server" Feb 17 15:09:12.097967 master-0 kubenswrapper[8018]: E0217 15:09:12.097642 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2994de0-1535-423a-90ce-019043cd4b9d" containerName="extract-utilities" Feb 17 15:09:12.097967 master-0 kubenswrapper[8018]: I0217 15:09:12.097656 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2994de0-1535-423a-90ce-019043cd4b9d" containerName="extract-utilities" Feb 17 15:09:12.097967 master-0 kubenswrapper[8018]: E0217 15:09:12.097679 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac9a5d3-569e-4434-839e-691eacbe13df" containerName="extract-content" Feb 17 15:09:12.097967 master-0 kubenswrapper[8018]: I0217 15:09:12.097693 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac9a5d3-569e-4434-839e-691eacbe13df" containerName="extract-content" Feb 17 15:09:12.097967 master-0 kubenswrapper[8018]: E0217 15:09:12.097716 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac9a5d3-569e-4434-839e-691eacbe13df" containerName="registry-server" Feb 17 15:09:12.097967 master-0 kubenswrapper[8018]: I0217 15:09:12.097729 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac9a5d3-569e-4434-839e-691eacbe13df" containerName="registry-server" Feb 17 15:09:12.097967 master-0 kubenswrapper[8018]: I0217 15:09:12.097873 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2994de0-1535-423a-90ce-019043cd4b9d" containerName="registry-server" 
Feb 17 15:09:12.097967 master-0 kubenswrapper[8018]: I0217 15:09:12.097900 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5655115-c223-42ed-a93d-9d609e55c901" containerName="installer" Feb 17 15:09:12.097967 master-0 kubenswrapper[8018]: I0217 15:09:12.097927 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac9a5d3-569e-4434-839e-691eacbe13df" containerName="registry-server" Feb 17 15:09:12.099256 master-0 kubenswrapper[8018]: I0217 15:09:12.099189 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wzsv7"] Feb 17 15:09:12.099406 master-0 kubenswrapper[8018]: I0217 15:09:12.099362 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7dzgz" Feb 17 15:09:12.100715 master-0 kubenswrapper[8018]: I0217 15:09:12.100650 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wzsv7" Feb 17 15:09:12.101852 master-0 kubenswrapper[8018]: I0217 15:09:12.101783 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7f2w9" Feb 17 15:09:12.102742 master-0 kubenswrapper[8018]: I0217 15:09:12.102692 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-t5n74" Feb 17 15:09:12.118083 master-0 kubenswrapper[8018]: I0217 15:09:12.118035 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7dzgz"] Feb 17 15:09:12.123153 master-0 kubenswrapper[8018]: I0217 15:09:12.123095 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wzsv7"] Feb 17 15:09:12.173781 master-0 kubenswrapper[8018]: I0217 15:09:12.173687 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-catalog-content\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz" Feb 17 15:09:12.173781 master-0 kubenswrapper[8018]: I0217 15:09:12.173750 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-utilities\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz" Feb 17 15:09:12.173781 master-0 kubenswrapper[8018]: I0217 15:09:12.173784 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpmdw\" (UniqueName: \"kubernetes.io/projected/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-kube-api-access-cpmdw\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz" Feb 17 15:09:12.174118 master-0 kubenswrapper[8018]: I0217 15:09:12.173835 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833c8661-28ca-463a-ac61-6edb961056e3-utilities\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " pod="openshift-marketplace/redhat-operators-wzsv7" Feb 17 15:09:12.174118 master-0 kubenswrapper[8018]: I0217 15:09:12.173917 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghlk\" (UniqueName: \"kubernetes.io/projected/833c8661-28ca-463a-ac61-6edb961056e3-kube-api-access-2ghlk\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " pod="openshift-marketplace/redhat-operators-wzsv7" Feb 17 15:09:12.174118 master-0 kubenswrapper[8018]: I0217 15:09:12.174036 8018 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833c8661-28ca-463a-ac61-6edb961056e3-catalog-content\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:09:12.275546 master-0 kubenswrapper[8018]: I0217 15:09:12.275438 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ghlk\" (UniqueName: \"kubernetes.io/projected/833c8661-28ca-463a-ac61-6edb961056e3-kube-api-access-2ghlk\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:09:12.275754 master-0 kubenswrapper[8018]: I0217 15:09:12.275577 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833c8661-28ca-463a-ac61-6edb961056e3-catalog-content\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:09:12.275754 master-0 kubenswrapper[8018]: I0217 15:09:12.275668 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-utilities\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:09:12.275906 master-0 kubenswrapper[8018]: I0217 15:09:12.275838 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-catalog-content\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:09:12.276009 master-0 kubenswrapper[8018]: I0217 15:09:12.275969 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpmdw\" (UniqueName: \"kubernetes.io/projected/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-kube-api-access-cpmdw\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:09:12.276078 master-0 kubenswrapper[8018]: I0217 15:09:12.276051 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833c8661-28ca-463a-ac61-6edb961056e3-utilities\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:09:12.276383 master-0 kubenswrapper[8018]: I0217 15:09:12.276344 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833c8661-28ca-463a-ac61-6edb961056e3-catalog-content\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:09:12.276594 master-0 kubenswrapper[8018]: I0217 15:09:12.276557 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-catalog-content\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:09:12.276869 master-0 kubenswrapper[8018]: I0217 15:09:12.276814 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833c8661-28ca-463a-ac61-6edb961056e3-utilities\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:09:12.277067 master-0 kubenswrapper[8018]: I0217 15:09:12.277028 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-utilities\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:09:12.305884 master-0 kubenswrapper[8018]: I0217 15:09:12.305798 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpmdw\" (UniqueName: \"kubernetes.io/projected/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-kube-api-access-cpmdw\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:09:12.306319 master-0 kubenswrapper[8018]: I0217 15:09:12.306265 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ghlk\" (UniqueName: \"kubernetes.io/projected/833c8661-28ca-463a-ac61-6edb961056e3-kube-api-access-2ghlk\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:09:12.431357 master-0 kubenswrapper[8018]: I0217 15:09:12.431277 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:09:12.452526 master-0 kubenswrapper[8018]: I0217 15:09:12.452398 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:09:12.859413 master-0 kubenswrapper[8018]: I0217 15:09:12.859355 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7dzgz"]
Feb 17 15:09:12.865558 master-0 kubenswrapper[8018]: W0217 15:09:12.865499 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94f5fac8_582e_44a3_8dd5_c4e6e80829ef.slice/crio-f3cfbf80866e1ffdd35b49c1ad868e8dd39bef071d0be58efd7099ec81a6c339 WatchSource:0}: Error finding container f3cfbf80866e1ffdd35b49c1ad868e8dd39bef071d0be58efd7099ec81a6c339: Status 404 returned error can't find the container with id f3cfbf80866e1ffdd35b49c1ad868e8dd39bef071d0be58efd7099ec81a6c339
Feb 17 15:09:12.911602 master-0 kubenswrapper[8018]: I0217 15:09:12.910657 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7dzgz" event={"ID":"94f5fac8-582e-44a3-8dd5-c4e6e80829ef","Type":"ContainerStarted","Data":"f3cfbf80866e1ffdd35b49c1ad868e8dd39bef071d0be58efd7099ec81a6c339"}
Feb 17 15:09:12.913879 master-0 kubenswrapper[8018]: I0217 15:09:12.913835 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wzsv7"]
Feb 17 15:09:12.925085 master-0 kubenswrapper[8018]: W0217 15:09:12.925033 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod833c8661_28ca_463a_ac61_6edb961056e3.slice/crio-f565a312b6fdba1e4420f7c51d0c06303db46761e8bdf7c0064ba897805dc24a WatchSource:0}: Error finding container f565a312b6fdba1e4420f7c51d0c06303db46761e8bdf7c0064ba897805dc24a: Status 404 returned error can't find the container with id f565a312b6fdba1e4420f7c51d0c06303db46761e8bdf7c0064ba897805dc24a
Feb 17 15:09:13.440757 master-0 kubenswrapper[8018]: I0217 15:09:13.440701 8018 scope.go:117] "RemoveContainer" containerID="c37b7a8b6b89d90619e0434b3f19d1c552551ee3029bb3ef42107c3c450c9cb1"
Feb 17 15:09:13.939644 master-0 kubenswrapper[8018]: I0217 15:09:13.939556 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/2.log"
Feb 17 15:09:13.940536 master-0 kubenswrapper[8018]: I0217 15:09:13.939762 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" event={"ID":"e259b5a1-837b-4cde-85f7-cd5781af08bd","Type":"ContainerStarted","Data":"0b8262975cf51c409ae05462f6db811ce0d8908ad2a83500403ab60076ef6470"}
Feb 17 15:09:13.942168 master-0 kubenswrapper[8018]: I0217 15:09:13.942003 8018 generic.go:334] "Generic (PLEG): container finished" podID="833c8661-28ca-463a-ac61-6edb961056e3" containerID="e6161530d918faa82eec69639876fbb5e67758f6bda51a345c33a6aeb147dce2" exitCode=0
Feb 17 15:09:13.942279 master-0 kubenswrapper[8018]: I0217 15:09:13.942099 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsv7" event={"ID":"833c8661-28ca-463a-ac61-6edb961056e3","Type":"ContainerDied","Data":"e6161530d918faa82eec69639876fbb5e67758f6bda51a345c33a6aeb147dce2"}
Feb 17 15:09:13.942363 master-0 kubenswrapper[8018]: I0217 15:09:13.942315 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsv7" event={"ID":"833c8661-28ca-463a-ac61-6edb961056e3","Type":"ContainerStarted","Data":"f565a312b6fdba1e4420f7c51d0c06303db46761e8bdf7c0064ba897805dc24a"}
Feb 17 15:09:13.945164 master-0 kubenswrapper[8018]: I0217 15:09:13.945104 8018 generic.go:334] "Generic (PLEG): container finished" podID="94f5fac8-582e-44a3-8dd5-c4e6e80829ef" containerID="ce87d71e88525ce7001016bad4c33c6d78f8709a4b105679be6b276fa78e4ee0" exitCode=0
Feb 17 15:09:13.945242 master-0 kubenswrapper[8018]: I0217 15:09:13.945167 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7dzgz" event={"ID":"94f5fac8-582e-44a3-8dd5-c4e6e80829ef","Type":"ContainerDied","Data":"ce87d71e88525ce7001016bad4c33c6d78f8709a4b105679be6b276fa78e4ee0"}
Feb 17 15:09:14.440238 master-0 kubenswrapper[8018]: I0217 15:09:14.440141 8018 scope.go:117] "RemoveContainer" containerID="f39a2941da8acf9c022d9ee8fee7bd53fe9f2ec2201845d6f776f31736d87bf2"
Feb 17 15:09:14.953680 master-0 kubenswrapper[8018]: I0217 15:09:14.953641 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-tckph_0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/kube-storage-version-migrator-operator/2.log"
Feb 17 15:09:14.954234 master-0 kubenswrapper[8018]: I0217 15:09:14.953706 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" event={"ID":"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9","Type":"ContainerStarted","Data":"8c3de091b26b63488ddbcb0fd31c122edf5d7a587d35c169e265f4e9d06987b5"}
Feb 17 15:09:16.440401 master-0 kubenswrapper[8018]: I0217 15:09:16.440335 8018 scope.go:117] "RemoveContainer" containerID="81aaf4a8e92ad8167ce2d8a4500268568ecd4d12b11466d397ae290644672b32"
Feb 17 15:09:16.441239 master-0 kubenswrapper[8018]: I0217 15:09:16.440880 8018 scope.go:117] "RemoveContainer" containerID="29887de882fd8a3a22e87156cef67aeb00ac494c3b04550882c5426a5a9c25ec"
Feb 17 15:09:16.966416 master-0 kubenswrapper[8018]: I0217 15:09:16.966229 8018 generic.go:334] "Generic (PLEG): container finished" podID="94f5fac8-582e-44a3-8dd5-c4e6e80829ef" containerID="27d6533353fb312399276ec154189748ef75e2ff2e683e4077e0613293d79e27" exitCode=0
Feb 17 15:09:16.966416 master-0 kubenswrapper[8018]: I0217 15:09:16.966273 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7dzgz" event={"ID":"94f5fac8-582e-44a3-8dd5-c4e6e80829ef","Type":"ContainerDied","Data":"27d6533353fb312399276ec154189748ef75e2ff2e683e4077e0613293d79e27"}
Feb 17 15:09:16.969923 master-0 kubenswrapper[8018]: I0217 15:09:16.968849 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/2.log"
Feb 17 15:09:16.969923 master-0 kubenswrapper[8018]: I0217 15:09:16.968920 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" event={"ID":"65d9f008-7777-48fe-85fe-9d54a7bbcea9","Type":"ContainerStarted","Data":"50d813c00eb4ee20e7e4a0770f94362bd89a3e9a431dc0d899c42e55cc8f993e"}
Feb 17 15:09:16.973307 master-0 kubenswrapper[8018]: I0217 15:09:16.973074 8018 generic.go:334] "Generic (PLEG): container finished" podID="833c8661-28ca-463a-ac61-6edb961056e3" containerID="366ce4a350e8c8c3fa7539745bb67d208d67dd372e70a046a0ec8b361945197b" exitCode=0
Feb 17 15:09:16.973307 master-0 kubenswrapper[8018]: I0217 15:09:16.973141 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsv7" event={"ID":"833c8661-28ca-463a-ac61-6edb961056e3","Type":"ContainerDied","Data":"366ce4a350e8c8c3fa7539745bb67d208d67dd372e70a046a0ec8b361945197b"}
Feb 17 15:09:16.978352 master-0 kubenswrapper[8018]: I0217 15:09:16.978302 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/2.log"
Feb 17 15:09:16.978352 master-0 kubenswrapper[8018]: I0217 15:09:16.978381 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" event={"ID":"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda","Type":"ContainerStarted","Data":"afb6acf2a5178774fc88b9857020ac3a9778d76f3535d0f37b9711d4fea47c48"}
Feb 17 15:09:17.444846 master-0 kubenswrapper[8018]: I0217 15:09:17.444791 8018 scope.go:117] "RemoveContainer" containerID="13fd27ae7e51b2ce5e96bcf2c8231506a7b48822721ae68c680d8a96bd1e5103"
Feb 17 15:09:17.711540 master-0 kubenswrapper[8018]: I0217 15:09:17.711502 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"]
Feb 17 15:09:17.712348 master-0 kubenswrapper[8018]: I0217 15:09:17.712328 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.722416 master-0 kubenswrapper[8018]: I0217 15:09:17.722392 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 17 15:09:17.722706 master-0 kubenswrapper[8018]: I0217 15:09:17.722661 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-fg558"
Feb 17 15:09:17.723061 master-0 kubenswrapper[8018]: I0217 15:09:17.723045 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 17 15:09:17.732425 master-0 kubenswrapper[8018]: I0217 15:09:17.727674 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 17 15:09:17.762924 master-0 kubenswrapper[8018]: I0217 15:09:17.762700 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/626c4f7a-59ee-45da-9198-05dd2c42ac42-serving-cert\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.762924 master-0 kubenswrapper[8018]: I0217 15:09:17.762747 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/626c4f7a-59ee-45da-9198-05dd2c42ac42-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.762924 master-0 kubenswrapper[8018]: I0217 15:09:17.762788 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/626c4f7a-59ee-45da-9198-05dd2c42ac42-kube-api-access\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.762924 master-0 kubenswrapper[8018]: I0217 15:09:17.762807 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/626c4f7a-59ee-45da-9198-05dd2c42ac42-service-ca\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.762924 master-0 kubenswrapper[8018]: I0217 15:09:17.762826 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/626c4f7a-59ee-45da-9198-05dd2c42ac42-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.864139 master-0 kubenswrapper[8018]: I0217 15:09:17.864107 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/626c4f7a-59ee-45da-9198-05dd2c42ac42-kube-api-access\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.864350 master-0 kubenswrapper[8018]: I0217 15:09:17.864333 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/626c4f7a-59ee-45da-9198-05dd2c42ac42-service-ca\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.864503 master-0 kubenswrapper[8018]: I0217 15:09:17.864483 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/626c4f7a-59ee-45da-9198-05dd2c42ac42-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.864670 master-0 kubenswrapper[8018]: I0217 15:09:17.864614 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/626c4f7a-59ee-45da-9198-05dd2c42ac42-serving-cert\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.864670 master-0 kubenswrapper[8018]: I0217 15:09:17.864558 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/626c4f7a-59ee-45da-9198-05dd2c42ac42-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.865271 master-0 kubenswrapper[8018]: I0217 15:09:17.864823 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/626c4f7a-59ee-45da-9198-05dd2c42ac42-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.865271 master-0 kubenswrapper[8018]: I0217 15:09:17.864879 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/626c4f7a-59ee-45da-9198-05dd2c42ac42-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.865357 master-0 kubenswrapper[8018]: I0217 15:09:17.865310 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/626c4f7a-59ee-45da-9198-05dd2c42ac42-service-ca\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.867830 master-0 kubenswrapper[8018]: I0217 15:09:17.867815 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/626c4f7a-59ee-45da-9198-05dd2c42ac42-serving-cert\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.884308 master-0 kubenswrapper[8018]: I0217 15:09:17.884268 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/626c4f7a-59ee-45da-9198-05dd2c42ac42-kube-api-access\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:17.985622 master-0 kubenswrapper[8018]: I0217 15:09:17.985536 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/2.log"
Feb 17 15:09:17.985879 master-0 kubenswrapper[8018]: I0217 15:09:17.985856 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" event={"ID":"553d4535-9985-47e2-83ee-8fcfb6035e7b","Type":"ContainerStarted","Data":"e5a73638e40c519ad84123382ac658619b9dc2d362942e0bd81784b6f5c9f036"}
Feb 17 15:09:17.988146 master-0 kubenswrapper[8018]: I0217 15:09:17.987669 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7dzgz" event={"ID":"94f5fac8-582e-44a3-8dd5-c4e6e80829ef","Type":"ContainerStarted","Data":"c7264c0105e7b29734b470c08ed45767132a7f5ff93a133a47ac2396a36486ba"}
Feb 17 15:09:17.989753 master-0 kubenswrapper[8018]: I0217 15:09:17.989712 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsv7" event={"ID":"833c8661-28ca-463a-ac61-6edb961056e3","Type":"ContainerStarted","Data":"e4d1967a79b0f3e5ffa1b69a992d773d5e64e5a9bb7298989f0bae8a3903d2e8"}
Feb 17 15:09:18.025210 master-0 kubenswrapper[8018]: I0217 15:09:18.024475 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wzsv7" podStartSLOduration=21.236023582 podStartE2EDuration="25.024446975s" podCreationTimestamp="2026-02-17 15:08:53 +0000 UTC" firstStartedPulling="2026-02-17 15:09:13.944889924 +0000 UTC m=+386.697233014" lastFinishedPulling="2026-02-17 15:09:17.733313357 +0000 UTC m=+390.485656407" observedRunningTime="2026-02-17 15:09:18.022652741 +0000 UTC m=+390.774995791" watchObservedRunningTime="2026-02-17 15:09:18.024446975 +0000 UTC m=+390.776790025"
Feb 17 15:09:18.041314 master-0 kubenswrapper[8018]: I0217 15:09:18.041246 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7dzgz" podStartSLOduration=21.291820653 podStartE2EDuration="25.041225297s" podCreationTimestamp="2026-02-17 15:08:53 +0000 UTC" firstStartedPulling="2026-02-17 15:09:13.94796748 +0000 UTC m=+386.700310570" lastFinishedPulling="2026-02-17 15:09:17.697372154 +0000 UTC m=+390.449715214" observedRunningTime="2026-02-17 15:09:18.03930052 +0000 UTC m=+390.791643570" watchObservedRunningTime="2026-02-17 15:09:18.041225297 +0000 UTC m=+390.793568347"
Feb 17 15:09:18.052302 master-0 kubenswrapper[8018]: I0217 15:09:18.052263 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:09:18.071956 master-0 kubenswrapper[8018]: W0217 15:09:18.071851 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod626c4f7a_59ee_45da_9198_05dd2c42ac42.slice/crio-c9858df9f585446eefac53619f522937c2be744d976350b3d2fae4ea17d7449e WatchSource:0}: Error finding container c9858df9f585446eefac53619f522937c2be744d976350b3d2fae4ea17d7449e: Status 404 returned error can't find the container with id c9858df9f585446eefac53619f522937c2be744d976350b3d2fae4ea17d7449e
Feb 17 15:09:18.997107 master-0 kubenswrapper[8018]: I0217 15:09:18.997011 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7" event={"ID":"626c4f7a-59ee-45da-9198-05dd2c42ac42","Type":"ContainerStarted","Data":"98474fa2fe73c4db5804824208857baff7e2d6a53dfa4d32d3b7d0f00e99e897"}
Feb 17 15:09:18.997107 master-0 kubenswrapper[8018]: I0217 15:09:18.997093 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7" event={"ID":"626c4f7a-59ee-45da-9198-05dd2c42ac42","Type":"ContainerStarted","Data":"c9858df9f585446eefac53619f522937c2be744d976350b3d2fae4ea17d7449e"}
Feb 17 15:09:19.019965 master-0 kubenswrapper[8018]: I0217 15:09:19.019815 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7" podStartSLOduration=2.019790903 podStartE2EDuration="2.019790903s" podCreationTimestamp="2026-02-17 15:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:09:19.017636139 +0000 UTC m=+391.769979209" watchObservedRunningTime="2026-02-17 15:09:19.019790903 +0000 UTC m=+391.772133993"
Feb 17 15:09:19.440771 master-0 kubenswrapper[8018]: I0217 15:09:19.440707 8018 scope.go:117] "RemoveContainer" containerID="47a0663eadceb8ac2b92b936021f5bf1e155eb2c91b070318a1766570bc56359"
Feb 17 15:09:20.003849 master-0 kubenswrapper[8018]: I0217 15:09:20.003703 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/2.log"
Feb 17 15:09:20.003849 master-0 kubenswrapper[8018]: I0217 15:09:20.003760 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" event={"ID":"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40","Type":"ContainerStarted","Data":"1cf423e31a88736056f1999dcd941a944e9de281f289a68cb4692796b704d37a"}
Feb 17 15:09:22.431960 master-0 kubenswrapper[8018]: I0217 15:09:22.431877 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:09:22.433012 master-0 kubenswrapper[8018]: I0217 15:09:22.432977 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:09:22.453550 master-0 kubenswrapper[8018]: I0217 15:09:22.453509 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:09:22.453793 master-0 kubenswrapper[8018]: I0217 15:09:22.453769 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:09:22.511979 master-0 kubenswrapper[8018]: I0217 15:09:22.511903 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:09:23.095371 master-0 kubenswrapper[8018]: I0217 15:09:23.095314 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:09:23.515232 master-0 kubenswrapper[8018]: I0217 15:09:23.515166 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wzsv7" podUID="833c8661-28ca-463a-ac61-6edb961056e3" containerName="registry-server" probeResult="failure" output=<
Feb 17 15:09:23.515232 master-0 kubenswrapper[8018]: timeout: failed to connect service ":50051" within 1s
Feb 17 15:09:23.515232 master-0 kubenswrapper[8018]: >
Feb 17 15:09:29.848835 master-0 kubenswrapper[8018]: I0217 15:09:29.848728 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7"]
Feb 17 15:09:29.849907 master-0 kubenswrapper[8018]: I0217 15:09:29.849774 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7"
Feb 17 15:09:29.852362 master-0 kubenswrapper[8018]: I0217 15:09:29.852302 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-bw92c"
Feb 17 15:09:29.852627 master-0 kubenswrapper[8018]: I0217 15:09:29.852565 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 17 15:09:29.852875 master-0 kubenswrapper[8018]: I0217 15:09:29.852838 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 17 15:09:29.853135 master-0 kubenswrapper[8018]: I0217 15:09:29.853087 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 17 15:09:29.909523 master-0 kubenswrapper[8018]: I0217 15:09:29.909181 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7"]
Feb 17 15:09:30.037965 master-0 kubenswrapper[8018]: I0217 15:09:30.037901 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27gfx\" (UniqueName: \"kubernetes.io/projected/b4422676-9a70-4973-8299-7b40a66e9c96-kube-api-access-27gfx\") pod \"control-plane-machine-set-operator-d8bf84b88-hmpc7\" (UID: \"b4422676-9a70-4973-8299-7b40a66e9c96\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7"
Feb 17 15:09:30.038197 master-0 kubenswrapper[8018]: I0217 15:09:30.038033 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4422676-9a70-4973-8299-7b40a66e9c96-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-hmpc7\" (UID: \"b4422676-9a70-4973-8299-7b40a66e9c96\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7"
Feb 17 15:09:30.138960 master-0 kubenswrapper[8018]: I0217 15:09:30.138854 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4422676-9a70-4973-8299-7b40a66e9c96-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-hmpc7\" (UID: \"b4422676-9a70-4973-8299-7b40a66e9c96\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7"
Feb 17 15:09:30.139235 master-0 kubenswrapper[8018]: I0217 15:09:30.138987 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27gfx\" (UniqueName: \"kubernetes.io/projected/b4422676-9a70-4973-8299-7b40a66e9c96-kube-api-access-27gfx\") pod \"control-plane-machine-set-operator-d8bf84b88-hmpc7\" (UID: \"b4422676-9a70-4973-8299-7b40a66e9c96\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7"
Feb 17 15:09:30.145011 master-0 kubenswrapper[8018]: I0217 15:09:30.144899 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4422676-9a70-4973-8299-7b40a66e9c96-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-hmpc7\" (UID: \"b4422676-9a70-4973-8299-7b40a66e9c96\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7"
Feb 17 15:09:30.164692 master-0 kubenswrapper[8018]: I0217 15:09:30.164595 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27gfx\" (UniqueName: \"kubernetes.io/projected/b4422676-9a70-4973-8299-7b40a66e9c96-kube-api-access-27gfx\") pod \"control-plane-machine-set-operator-d8bf84b88-hmpc7\" (UID: \"b4422676-9a70-4973-8299-7b40a66e9c96\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7"
Feb 17 15:09:30.179085 master-0 kubenswrapper[8018]: I0217 15:09:30.178989 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7"
Feb 17 15:09:30.503349 master-0 kubenswrapper[8018]: I0217 15:09:30.503189 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:09:30.631887 master-0 kubenswrapper[8018]: I0217 15:09:30.631808 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7"]
Feb 17 15:09:30.638516 master-0 kubenswrapper[8018]: W0217 15:09:30.638393 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4422676_9a70_4973_8299_7b40a66e9c96.slice/crio-1698c2cc5bd5ca4b021102d13c99be9074c3ec259c76c5314910f3a09569a96d WatchSource:0}: Error finding container 1698c2cc5bd5ca4b021102d13c99be9074c3ec259c76c5314910f3a09569a96d: Status 404 returned error can't find the container with id 1698c2cc5bd5ca4b021102d13c99be9074c3ec259c76c5314910f3a09569a96d
Feb 17 15:09:31.073983 master-0 kubenswrapper[8018]: I0217 15:09:31.073867 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7" event={"ID":"b4422676-9a70-4973-8299-7b40a66e9c96","Type":"ContainerStarted","Data":"1698c2cc5bd5ca4b021102d13c99be9074c3ec259c76c5314910f3a09569a96d"}
Feb 17 15:09:32.502022 master-0 kubenswrapper[8018]: I0217 15:09:32.501978 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:09:32.542413 master-0 kubenswrapper[8018]: I0217 15:09:32.542362 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:09:33.086199 master-0 kubenswrapper[8018]: I0217 15:09:33.086105 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7" event={"ID":"b4422676-9a70-4973-8299-7b40a66e9c96","Type":"ContainerStarted","Data":"b1199a6a02a6f0066cde070bc688012a60c6dbb64c28d3d555d30add6fcebc27"}
Feb 17 15:09:33.108675 master-0 kubenswrapper[8018]: I0217 15:09:33.108556 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7" podStartSLOduration=2.26404709 podStartE2EDuration="4.108526514s" podCreationTimestamp="2026-02-17 15:09:29 +0000 UTC" firstStartedPulling="2026-02-17 15:09:30.641066185 +0000 UTC m=+403.393409245" lastFinishedPulling="2026-02-17 15:09:32.485545619 +0000 UTC m=+405.237888669" observedRunningTime="2026-02-17 15:09:33.108154046 +0000 UTC m=+405.860497126" watchObservedRunningTime="2026-02-17 15:09:33.108526514 +0000 UTC m=+405.860869604"
Feb 17 15:09:34.172855 master-0 kubenswrapper[8018]: I0217 15:09:34.172772 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx"]
Feb 17 15:09:34.173852 master-0 kubenswrapper[8018]: I0217 15:09:34.173815 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx"
Feb 17 15:09:34.176673 master-0 kubenswrapper[8018]: I0217 15:09:34.176606 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 17 15:09:34.177058 master-0 kubenswrapper[8018]: I0217 15:09:34.177014 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-kjdkm"
Feb 17 15:09:34.178242 master-0 kubenswrapper[8018]: I0217 15:09:34.178148 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 17 15:09:34.178643 master-0 kubenswrapper[8018]: I0217 15:09:34.178601 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 17 15:09:34.178726 master-0 kubenswrapper[8018]: I0217 15:09:34.178678 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 17 15:09:34.182946 master-0 kubenswrapper[8018]: I0217 15:09:34.182897 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 17 15:09:34.295686 master-0 kubenswrapper[8018]: I0217 15:09:34.295178 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0c5ca70-1706-4858-adcb-b421ba1e422b-auth-proxy-config\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx"
Feb 17 15:09:34.295686 master-0 kubenswrapper[8018]: I0217 15:09:34.295328 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hk8s\" (UniqueName: \"kubernetes.io/projected/f0c5ca70-1706-4858-adcb-b421ba1e422b-kube-api-access-9hk8s\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx"
Feb 17 15:09:34.295686 master-0 kubenswrapper[8018]: I0217 15:09:34.295528 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c5ca70-1706-4858-adcb-b421ba1e422b-config\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx"
Feb 17 15:09:34.295686 master-0 kubenswrapper[8018]: I0217 15:09:34.295626 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx"
Feb 17 15:09:34.397837 master-0 kubenswrapper[8018]: I0217 15:09:34.397515 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0c5ca70-1706-4858-adcb-b421ba1e422b-auth-proxy-config\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx"
Feb 17 15:09:34.397837 master-0 kubenswrapper[8018]: I0217 15:09:34.397638 8018 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"kube-api-access-9hk8s\" (UniqueName: \"kubernetes.io/projected/f0c5ca70-1706-4858-adcb-b421ba1e422b-kube-api-access-9hk8s\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx" Feb 17 15:09:34.397837 master-0 kubenswrapper[8018]: I0217 15:09:34.397703 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c5ca70-1706-4858-adcb-b421ba1e422b-config\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx" Feb 17 15:09:34.397837 master-0 kubenswrapper[8018]: I0217 15:09:34.397740 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx" Feb 17 15:09:34.398491 master-0 kubenswrapper[8018]: E0217 15:09:34.398038 8018 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 17 15:09:34.398491 master-0 kubenswrapper[8018]: E0217 15:09:34.398157 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls podName:f0c5ca70-1706-4858-adcb-b421ba1e422b nodeName:}" failed. No retries permitted until 2026-02-17 15:09:34.898126176 +0000 UTC m=+407.650469226 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls") pod "machine-approver-6c46d95f74-nsmfx" (UID: "f0c5ca70-1706-4858-adcb-b421ba1e422b") : secret "machine-approver-tls" not found Feb 17 15:09:34.398855 master-0 kubenswrapper[8018]: I0217 15:09:34.398776 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c5ca70-1706-4858-adcb-b421ba1e422b-config\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx" Feb 17 15:09:34.399332 master-0 kubenswrapper[8018]: I0217 15:09:34.399279 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0c5ca70-1706-4858-adcb-b421ba1e422b-auth-proxy-config\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx" Feb 17 15:09:34.439004 master-0 kubenswrapper[8018]: I0217 15:09:34.438122 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hk8s\" (UniqueName: \"kubernetes.io/projected/f0c5ca70-1706-4858-adcb-b421ba1e422b-kube-api-access-9hk8s\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx" Feb 17 15:09:34.903483 master-0 kubenswrapper[8018]: I0217 15:09:34.903391 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " 
pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx" Feb 17 15:09:34.903821 master-0 kubenswrapper[8018]: E0217 15:09:34.903736 8018 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 17 15:09:34.903900 master-0 kubenswrapper[8018]: E0217 15:09:34.903866 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls podName:f0c5ca70-1706-4858-adcb-b421ba1e422b nodeName:}" failed. No retries permitted until 2026-02-17 15:09:35.903833942 +0000 UTC m=+408.656177022 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls") pod "machine-approver-6c46d95f74-nsmfx" (UID: "f0c5ca70-1706-4858-adcb-b421ba1e422b") : secret "machine-approver-tls" not found Feb 17 15:09:35.916070 master-0 kubenswrapper[8018]: I0217 15:09:35.915956 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx" Feb 17 15:09:35.917138 master-0 kubenswrapper[8018]: E0217 15:09:35.916183 8018 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 17 15:09:35.917138 master-0 kubenswrapper[8018]: E0217 15:09:35.916279 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls podName:f0c5ca70-1706-4858-adcb-b421ba1e422b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:09:37.916254809 +0000 UTC m=+410.668597889 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls") pod "machine-approver-6c46d95f74-nsmfx" (UID: "f0c5ca70-1706-4858-adcb-b421ba1e422b") : secret "machine-approver-tls" not found Feb 17 15:09:36.387721 master-0 kubenswrapper[8018]: I0217 15:09:36.387672 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc"] Feb 17 15:09:36.390255 master-0 kubenswrapper[8018]: I0217 15:09:36.390234 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:09:36.393337 master-0 kubenswrapper[8018]: I0217 15:09:36.392608 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 17 15:09:36.393337 master-0 kubenswrapper[8018]: I0217 15:09:36.392660 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 17 15:09:36.398773 master-0 kubenswrapper[8018]: I0217 15:09:36.396555 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-kcv7p" Feb 17 15:09:36.398773 master-0 kubenswrapper[8018]: I0217 15:09:36.397049 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 17 15:09:36.398773 master-0 kubenswrapper[8018]: I0217 15:09:36.397163 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 17 15:09:36.398773 master-0 kubenswrapper[8018]: I0217 15:09:36.398651 8018 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc"] Feb 17 15:09:36.422702 master-0 kubenswrapper[8018]: I0217 15:09:36.422641 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c97d328c-95b6-4511-aa90-531ab42b9653-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:09:36.422702 master-0 kubenswrapper[8018]: I0217 15:09:36.422699 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzrph\" (UniqueName: \"kubernetes.io/projected/c97d328c-95b6-4511-aa90-531ab42b9653-kube-api-access-qzrph\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:09:36.422948 master-0 kubenswrapper[8018]: I0217 15:09:36.422736 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:09:36.524231 master-0 kubenswrapper[8018]: I0217 15:09:36.524142 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c97d328c-95b6-4511-aa90-531ab42b9653-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " 
pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:09:36.524231 master-0 kubenswrapper[8018]: I0217 15:09:36.524214 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzrph\" (UniqueName: \"kubernetes.io/projected/c97d328c-95b6-4511-aa90-531ab42b9653-kube-api-access-qzrph\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:09:36.524838 master-0 kubenswrapper[8018]: I0217 15:09:36.524794 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:09:36.525047 master-0 kubenswrapper[8018]: E0217 15:09:36.525012 8018 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 17 15:09:36.525110 master-0 kubenswrapper[8018]: E0217 15:09:36.525092 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert podName:c97d328c-95b6-4511-aa90-531ab42b9653 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:37.025067996 +0000 UTC m=+409.777411086 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-p8hbc" (UID: "c97d328c-95b6-4511-aa90-531ab42b9653") : secret "cloud-credential-operator-serving-cert" not found Feb 17 15:09:36.525368 master-0 kubenswrapper[8018]: I0217 15:09:36.525304 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c97d328c-95b6-4511-aa90-531ab42b9653-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:09:36.543589 master-0 kubenswrapper[8018]: I0217 15:09:36.543564 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzrph\" (UniqueName: \"kubernetes.io/projected/c97d328c-95b6-4511-aa90-531ab42b9653-kube-api-access-qzrph\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:09:37.030561 master-0 kubenswrapper[8018]: I0217 15:09:37.030416 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:09:37.031406 master-0 kubenswrapper[8018]: E0217 15:09:37.030711 8018 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not 
found Feb 17 15:09:37.031406 master-0 kubenswrapper[8018]: E0217 15:09:37.030861 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert podName:c97d328c-95b6-4511-aa90-531ab42b9653 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:38.030826294 +0000 UTC m=+410.783169384 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-p8hbc" (UID: "c97d328c-95b6-4511-aa90-531ab42b9653") : secret "cloud-credential-operator-serving-cert" not found Feb 17 15:09:37.694500 master-0 kubenswrapper[8018]: I0217 15:09:37.693295 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4"] Feb 17 15:09:37.694500 master-0 kubenswrapper[8018]: I0217 15:09:37.694377 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:09:37.697333 master-0 kubenswrapper[8018]: I0217 15:09:37.697208 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 15:09:37.697442 master-0 kubenswrapper[8018]: I0217 15:09:37.697340 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 15:09:37.698425 master-0 kubenswrapper[8018]: I0217 15:09:37.697780 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 15:09:37.699801 master-0 kubenswrapper[8018]: I0217 15:09:37.699749 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-dtqvr" Feb 17 15:09:37.720945 master-0 kubenswrapper[8018]: I0217 15:09:37.719913 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4"] Feb 17 15:09:37.740035 master-0 kubenswrapper[8018]: I0217 15:09:37.739971 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:09:37.740261 master-0 kubenswrapper[8018]: I0217 15:09:37.740080 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr7lv\" (UniqueName: \"kubernetes.io/projected/6b7d1adb-b23b-4702-be7d-27e818e8fd63-kube-api-access-cr7lv\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: 
\"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:09:37.841675 master-0 kubenswrapper[8018]: I0217 15:09:37.841603 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr7lv\" (UniqueName: \"kubernetes.io/projected/6b7d1adb-b23b-4702-be7d-27e818e8fd63-kube-api-access-cr7lv\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:09:37.842012 master-0 kubenswrapper[8018]: I0217 15:09:37.841965 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:09:37.842204 master-0 kubenswrapper[8018]: E0217 15:09:37.842160 8018 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 17 15:09:37.842290 master-0 kubenswrapper[8018]: E0217 15:09:37.842251 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls podName:6b7d1adb-b23b-4702-be7d-27e818e8fd63 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:38.342227784 +0000 UTC m=+411.094570874 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-hr9g4" (UID: "6b7d1adb-b23b-4702-be7d-27e818e8fd63") : secret "samples-operator-tls" not found Feb 17 15:09:37.870092 master-0 kubenswrapper[8018]: I0217 15:09:37.870041 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr7lv\" (UniqueName: \"kubernetes.io/projected/6b7d1adb-b23b-4702-be7d-27e818e8fd63-kube-api-access-cr7lv\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:09:37.943609 master-0 kubenswrapper[8018]: I0217 15:09:37.943423 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx" Feb 17 15:09:37.943793 master-0 kubenswrapper[8018]: E0217 15:09:37.943710 8018 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 17 15:09:37.943880 master-0 kubenswrapper[8018]: E0217 15:09:37.943843 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls podName:f0c5ca70-1706-4858-adcb-b421ba1e422b nodeName:}" failed. No retries permitted until 2026-02-17 15:09:41.943813519 +0000 UTC m=+414.696156599 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls") pod "machine-approver-6c46d95f74-nsmfx" (UID: "f0c5ca70-1706-4858-adcb-b421ba1e422b") : secret "machine-approver-tls" not found Feb 17 15:09:38.045638 master-0 kubenswrapper[8018]: I0217 15:09:38.045573 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:09:38.046358 master-0 kubenswrapper[8018]: E0217 15:09:38.045874 8018 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 17 15:09:38.046580 master-0 kubenswrapper[8018]: E0217 15:09:38.046556 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert podName:c97d328c-95b6-4511-aa90-531ab42b9653 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:40.04652078 +0000 UTC m=+412.798863840 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-p8hbc" (UID: "c97d328c-95b6-4511-aa90-531ab42b9653") : secret "cloud-credential-operator-serving-cert" not found Feb 17 15:09:38.350677 master-0 kubenswrapper[8018]: I0217 15:09:38.350518 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:09:38.350908 master-0 kubenswrapper[8018]: E0217 15:09:38.350749 8018 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 17 15:09:38.350908 master-0 kubenswrapper[8018]: E0217 15:09:38.350838 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls podName:6b7d1adb-b23b-4702-be7d-27e818e8fd63 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:39.350815521 +0000 UTC m=+412.103158581 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-hr9g4" (UID: "6b7d1adb-b23b-4702-be7d-27e818e8fd63") : secret "samples-operator-tls" not found Feb 17 15:09:38.656216 master-0 kubenswrapper[8018]: I0217 15:09:38.656116 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"] Feb 17 15:09:38.658152 master-0 kubenswrapper[8018]: I0217 15:09:38.658099 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" Feb 17 15:09:38.660925 master-0 kubenswrapper[8018]: I0217 15:09:38.660864 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 17 15:09:38.662037 master-0 kubenswrapper[8018]: I0217 15:09:38.661998 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 17 15:09:38.662226 master-0 kubenswrapper[8018]: I0217 15:09:38.662186 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 17 15:09:38.662311 master-0 kubenswrapper[8018]: I0217 15:09:38.662211 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-dxkwv" Feb 17 15:09:38.662311 master-0 kubenswrapper[8018]: I0217 15:09:38.662247 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 17 15:09:38.673032 master-0 kubenswrapper[8018]: I0217 15:09:38.672966 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"] Feb 17 15:09:38.756728 master-0 kubenswrapper[8018]: I0217 
15:09:38.756637 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-config\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:38.756969 master-0 kubenswrapper[8018]: I0217 15:09:38.756815 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-images\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:38.756969 master-0 kubenswrapper[8018]: I0217 15:09:38.756894 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:38.757081 master-0 kubenswrapper[8018]: I0217 15:09:38.757025 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:38.757137 master-0 kubenswrapper[8018]: I0217 15:09:38.757083 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzrmf\" (UniqueName: \"kubernetes.io/projected/7307f70e-ee5b-4f81-8155-718a02c9efe7-kube-api-access-dzrmf\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:38.858417 master-0 kubenswrapper[8018]: I0217 15:09:38.858335 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-config\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:38.858632 master-0 kubenswrapper[8018]: I0217 15:09:38.858545 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-images\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:38.858872 master-0 kubenswrapper[8018]: I0217 15:09:38.858830 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:38.858999 master-0 kubenswrapper[8018]: I0217 15:09:38.858968 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:38.859264 master-0 kubenswrapper[8018]: I0217 15:09:38.859206 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzrmf\" (UniqueName: \"kubernetes.io/projected/7307f70e-ee5b-4f81-8155-718a02c9efe7-kube-api-access-dzrmf\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:38.859575 master-0 kubenswrapper[8018]: I0217 15:09:38.859537 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-images\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:38.859747 master-0 kubenswrapper[8018]: I0217 15:09:38.859680 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-config\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:38.862860 master-0 kubenswrapper[8018]: I0217 15:09:38.862766 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:38.863630 master-0 kubenswrapper[8018]: I0217 15:09:38.863604 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:38.876733 master-0 kubenswrapper[8018]: I0217 15:09:38.876710 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzrmf\" (UniqueName: \"kubernetes.io/projected/7307f70e-ee5b-4f81-8155-718a02c9efe7-kube-api-access-dzrmf\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:39.003905 master-0 kubenswrapper[8018]: I0217 15:09:39.003802 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:09:39.190683 master-0 kubenswrapper[8018]: I0217 15:09:39.190623 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"]
Feb 17 15:09:39.191657 master-0 kubenswrapper[8018]: I0217 15:09:39.191624 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:09:39.194121 master-0 kubenswrapper[8018]: I0217 15:09:39.193855 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 17 15:09:39.194121 master-0 kubenswrapper[8018]: I0217 15:09:39.193891 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-4h7qp"
Feb 17 15:09:39.206684 master-0 kubenswrapper[8018]: I0217 15:09:39.206621 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"]
Feb 17 15:09:39.209012 master-0 kubenswrapper[8018]: I0217 15:09:39.208956 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 17 15:09:39.265258 master-0 kubenswrapper[8018]: I0217 15:09:39.265104 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:09:39.265258 master-0 kubenswrapper[8018]: I0217 15:09:39.265185 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8646e5c-c2ce-48e6-b757-58044769f479-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:09:39.265258 master-0 kubenswrapper[8018]: I0217 15:09:39.265248 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9wh2\" (UniqueName: \"kubernetes.io/projected/c8646e5c-c2ce-48e6-b757-58044769f479-kube-api-access-t9wh2\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:09:39.366435 master-0 kubenswrapper[8018]: I0217 15:09:39.366335 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:09:39.366435 master-0 kubenswrapper[8018]: I0217 15:09:39.366423 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8646e5c-c2ce-48e6-b757-58044769f479-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:09:39.366736 master-0 kubenswrapper[8018]: E0217 15:09:39.366647 8018 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Feb 17 15:09:39.366736 master-0 kubenswrapper[8018]: I0217 15:09:39.366683 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9wh2\" (UniqueName: \"kubernetes.io/projected/c8646e5c-c2ce-48e6-b757-58044769f479-kube-api-access-t9wh2\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:09:39.366832 master-0 kubenswrapper[8018]: E0217 15:09:39.366752 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert podName:c8646e5c-c2ce-48e6-b757-58044769f479 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:39.866722723 +0000 UTC m=+412.619065813 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert") pod "cluster-autoscaler-operator-67fd9768b5-6dzpr" (UID: "c8646e5c-c2ce-48e6-b757-58044769f479") : secret "cluster-autoscaler-operator-cert" not found
Feb 17 15:09:39.366832 master-0 kubenswrapper[8018]: I0217 15:09:39.366798 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4"
Feb 17 15:09:39.367076 master-0 kubenswrapper[8018]: E0217 15:09:39.367028 8018 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Feb 17 15:09:39.367133 master-0 kubenswrapper[8018]: E0217 15:09:39.367105 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls podName:6b7d1adb-b23b-4702-be7d-27e818e8fd63 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:41.367083722 +0000 UTC m=+414.119426772 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-hr9g4" (UID: "6b7d1adb-b23b-4702-be7d-27e818e8fd63") : secret "samples-operator-tls" not found
Feb 17 15:09:39.368598 master-0 kubenswrapper[8018]: I0217 15:09:39.368221 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8646e5c-c2ce-48e6-b757-58044769f479-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:09:39.395728 master-0 kubenswrapper[8018]: I0217 15:09:39.395652 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9wh2\" (UniqueName: \"kubernetes.io/projected/c8646e5c-c2ce-48e6-b757-58044769f479-kube-api-access-t9wh2\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:09:39.479354 master-0 kubenswrapper[8018]: I0217 15:09:39.479286 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"]
Feb 17 15:09:39.486008 master-0 kubenswrapper[8018]: W0217 15:09:39.485945 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7307f70e_ee5b_4f81_8155_718a02c9efe7.slice/crio-cba6e963b84ef59c8499695b7e9c3fc6bfc32f8754ee29ed5aa61fc3c50b955c WatchSource:0}: Error finding container cba6e963b84ef59c8499695b7e9c3fc6bfc32f8754ee29ed5aa61fc3c50b955c: Status 404 returned error can't find the container with id cba6e963b84ef59c8499695b7e9c3fc6bfc32f8754ee29ed5aa61fc3c50b955c
Feb 17 15:09:39.874361 master-0 kubenswrapper[8018]: I0217 15:09:39.874259 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:09:39.874660 master-0 kubenswrapper[8018]: E0217 15:09:39.874540 8018 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Feb 17 15:09:39.874660 master-0 kubenswrapper[8018]: E0217 15:09:39.874637 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert podName:c8646e5c-c2ce-48e6-b757-58044769f479 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:40.874613063 +0000 UTC m=+413.626956143 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert") pod "cluster-autoscaler-operator-67fd9768b5-6dzpr" (UID: "c8646e5c-c2ce-48e6-b757-58044769f479") : secret "cluster-autoscaler-operator-cert" not found
Feb 17 15:09:40.077749 master-0 kubenswrapper[8018]: I0217 15:09:40.077651 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc"
Feb 17 15:09:40.078026 master-0 kubenswrapper[8018]: E0217 15:09:40.077799 8018 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found
Feb 17 15:09:40.078026 master-0 kubenswrapper[8018]: E0217 15:09:40.077859 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert podName:c97d328c-95b6-4511-aa90-531ab42b9653 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:44.077844783 +0000 UTC m=+416.830187833 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-p8hbc" (UID: "c97d328c-95b6-4511-aa90-531ab42b9653") : secret "cloud-credential-operator-serving-cert" not found
Feb 17 15:09:40.131067 master-0 kubenswrapper[8018]: I0217 15:09:40.130896 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" event={"ID":"7307f70e-ee5b-4f81-8155-718a02c9efe7","Type":"ContainerStarted","Data":"cba6e963b84ef59c8499695b7e9c3fc6bfc32f8754ee29ed5aa61fc3c50b955c"}
Feb 17 15:09:40.304172 master-0 kubenswrapper[8018]: I0217 15:09:40.304071 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"]
Feb 17 15:09:40.311494 master-0 kubenswrapper[8018]: I0217 15:09:40.308292 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:09:40.311494 master-0 kubenswrapper[8018]: I0217 15:09:40.309941 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-7hvks"
Feb 17 15:09:40.313972 master-0 kubenswrapper[8018]: I0217 15:09:40.313868 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 17 15:09:40.319276 master-0 kubenswrapper[8018]: I0217 15:09:40.318963 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5"]
Feb 17 15:09:40.320353 master-0 kubenswrapper[8018]: I0217 15:09:40.319763 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5"
Feb 17 15:09:40.323056 master-0 kubenswrapper[8018]: I0217 15:09:40.323022 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 17 15:09:40.323652 master-0 kubenswrapper[8018]: I0217 15:09:40.323611 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 17 15:09:40.323763 master-0 kubenswrapper[8018]: I0217 15:09:40.323742 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 17 15:09:40.323904 master-0 kubenswrapper[8018]: I0217 15:09:40.323887 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-4cctd"
Feb 17 15:09:40.324037 master-0 kubenswrapper[8018]: I0217 15:09:40.324012 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 17 15:09:40.324115 master-0 kubenswrapper[8018]: I0217 15:09:40.324021 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 17 15:09:40.326439 master-0 kubenswrapper[8018]: I0217 15:09:40.325411 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-cb4f7b4cf-cmbjq"]
Feb 17 15:09:40.327118 master-0 kubenswrapper[8018]: I0217 15:09:40.327077 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.331903 master-0 kubenswrapper[8018]: I0217 15:09:40.331848 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"]
Feb 17 15:09:40.331903 master-0 kubenswrapper[8018]: I0217 15:09:40.331893 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-cb4f7b4cf-cmbjq"]
Feb 17 15:09:40.332344 master-0 kubenswrapper[8018]: I0217 15:09:40.332301 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-lgxgp"
Feb 17 15:09:40.332566 master-0 kubenswrapper[8018]: I0217 15:09:40.332543 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 17 15:09:40.332812 master-0 kubenswrapper[8018]: I0217 15:09:40.332785 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 17 15:09:40.332870 master-0 kubenswrapper[8018]: I0217 15:09:40.332848 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 17 15:09:40.333356 master-0 kubenswrapper[8018]: I0217 15:09:40.333312 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 17 15:09:40.338280 master-0 kubenswrapper[8018]: I0217 15:09:40.338243 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 17 15:09:40.344922 master-0 kubenswrapper[8018]: I0217 15:09:40.344728 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5"]
Feb 17 15:09:40.382555 master-0 kubenswrapper[8018]: I0217 15:09:40.382450 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8385a176-0e12-47ef-862e-8331e6734b9c-serving-cert\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.382555 master-0 kubenswrapper[8018]: I0217 15:09:40.382530 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnnxm\" (UniqueName: \"kubernetes.io/projected/8385a176-0e12-47ef-862e-8331e6734b9c-kube-api-access-lnnxm\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.382555 master-0 kubenswrapper[8018]: I0217 15:09:40.382557 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/da06cfcb-7c78-4022-96b1-d858853f5adc-proxy-tls\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:09:40.382765 master-0 kubenswrapper[8018]: I0217 15:09:40.382580 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-images\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:09:40.382765 master-0 kubenswrapper[8018]: I0217 15:09:40.382603 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-qbmw5\" (UID: \"ad81b5bd-2f97-4e7e-a12b-746998fa59f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5"
Feb 17 15:09:40.382765 master-0 kubenswrapper[8018]: I0217 15:09:40.382637 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t5jv\" (UniqueName: \"kubernetes.io/projected/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-kube-api-access-9t5jv\") pod \"cluster-storage-operator-75b869db96-qbmw5\" (UID: \"ad81b5bd-2f97-4e7e-a12b-746998fa59f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5"
Feb 17 15:09:40.382765 master-0 kubenswrapper[8018]: I0217 15:09:40.382655 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8385a176-0e12-47ef-862e-8331e6734b9c-snapshots\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.382765 master-0 kubenswrapper[8018]: I0217 15:09:40.382675 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-auth-proxy-config\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:09:40.382765 master-0 kubenswrapper[8018]: I0217 15:09:40.382700 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.382765 master-0 kubenswrapper[8018]: I0217 15:09:40.382721 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpsd7\" (UniqueName: \"kubernetes.io/projected/da06cfcb-7c78-4022-96b1-d858853f5adc-kube-api-access-xpsd7\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:09:40.382765 master-0 kubenswrapper[8018]: I0217 15:09:40.382742 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.483982 master-0 kubenswrapper[8018]: I0217 15:09:40.483910 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnnxm\" (UniqueName: \"kubernetes.io/projected/8385a176-0e12-47ef-862e-8331e6734b9c-kube-api-access-lnnxm\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.483982 master-0 kubenswrapper[8018]: I0217 15:09:40.483971 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/da06cfcb-7c78-4022-96b1-d858853f5adc-proxy-tls\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:09:40.483982 master-0 kubenswrapper[8018]: I0217 15:09:40.483997 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-images\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:09:40.484445 master-0 kubenswrapper[8018]: I0217 15:09:40.484389 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-qbmw5\" (UID: \"ad81b5bd-2f97-4e7e-a12b-746998fa59f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5"
Feb 17 15:09:40.484640 master-0 kubenswrapper[8018]: I0217 15:09:40.484601 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t5jv\" (UniqueName: \"kubernetes.io/projected/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-kube-api-access-9t5jv\") pod \"cluster-storage-operator-75b869db96-qbmw5\" (UID: \"ad81b5bd-2f97-4e7e-a12b-746998fa59f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5"
Feb 17 15:09:40.484720 master-0 kubenswrapper[8018]: I0217 15:09:40.484698 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8385a176-0e12-47ef-862e-8331e6734b9c-snapshots\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.484793 master-0 kubenswrapper[8018]: I0217 15:09:40.484770 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-auth-proxy-config\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:09:40.485322 master-0 kubenswrapper[8018]: I0217 15:09:40.485276 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.485597 master-0 kubenswrapper[8018]: I0217 15:09:40.485528 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpsd7\" (UniqueName: \"kubernetes.io/projected/da06cfcb-7c78-4022-96b1-d858853f5adc-kube-api-access-xpsd7\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:09:40.485735 master-0 kubenswrapper[8018]: I0217 15:09:40.485705 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.486483 master-0 kubenswrapper[8018]: I0217 15:09:40.485792 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8385a176-0e12-47ef-862e-8331e6734b9c-snapshots\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.486483 master-0 kubenswrapper[8018]: I0217 15:09:40.485832 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8385a176-0e12-47ef-862e-8331e6734b9c-serving-cert\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.486483 master-0 kubenswrapper[8018]: I0217 15:09:40.485882 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-auth-proxy-config\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:09:40.486483 master-0 kubenswrapper[8018]: I0217 15:09:40.486098 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.486483 master-0 kubenswrapper[8018]: I0217 15:09:40.486287 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-images\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:09:40.486483 master-0 kubenswrapper[8018]: I0217 15:09:40.486418 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.488605 master-0 kubenswrapper[8018]: I0217 15:09:40.488578 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-qbmw5\" (UID: \"ad81b5bd-2f97-4e7e-a12b-746998fa59f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5"
Feb 17 15:09:40.495367 master-0 kubenswrapper[8018]: I0217 15:09:40.495311 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8385a176-0e12-47ef-862e-8331e6734b9c-serving-cert\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.499469 master-0 kubenswrapper[8018]: I0217 15:09:40.498254 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/da06cfcb-7c78-4022-96b1-d858853f5adc-proxy-tls\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:09:40.506527 master-0 kubenswrapper[8018]: I0217 15:09:40.505388 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t5jv\" (UniqueName: \"kubernetes.io/projected/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-kube-api-access-9t5jv\") pod \"cluster-storage-operator-75b869db96-qbmw5\" (UID: \"ad81b5bd-2f97-4e7e-a12b-746998fa59f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5"
Feb 17 15:09:40.507839 master-0 kubenswrapper[8018]: I0217 15:09:40.507784 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnnxm\" (UniqueName: \"kubernetes.io/projected/8385a176-0e12-47ef-862e-8331e6734b9c-kube-api-access-lnnxm\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.515232 master-0 kubenswrapper[8018]: I0217 15:09:40.515192 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpsd7\" (UniqueName: \"kubernetes.io/projected/da06cfcb-7c78-4022-96b1-d858853f5adc-kube-api-access-xpsd7\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:09:40.650529 master-0 kubenswrapper[8018]: I0217 15:09:40.649471 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:09:40.670662 master-0 kubenswrapper[8018]: I0217 15:09:40.670593 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5"
Feb 17 15:09:40.689296 master-0 kubenswrapper[8018]: I0217 15:09:40.689237 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:09:40.890335 master-0 kubenswrapper[8018]: I0217 15:09:40.889628 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:09:40.890335 master-0 kubenswrapper[8018]: E0217 15:09:40.889808 8018 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Feb 17 15:09:40.890335 master-0 kubenswrapper[8018]: E0217 15:09:40.889904 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert podName:c8646e5c-c2ce-48e6-b757-58044769f479 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:42.88987911 +0000 UTC m=+415.642222160 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert") pod "cluster-autoscaler-operator-67fd9768b5-6dzpr" (UID: "c8646e5c-c2ce-48e6-b757-58044769f479") : secret "cluster-autoscaler-operator-cert" not found Feb 17 15:09:41.227263 master-0 kubenswrapper[8018]: I0217 15:09:41.227206 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"] Feb 17 15:09:41.232337 master-0 kubenswrapper[8018]: I0217 15:09:41.231379 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-cb4f7b4cf-cmbjq"] Feb 17 15:09:41.233616 master-0 kubenswrapper[8018]: I0217 15:09:41.233563 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5"] Feb 17 15:09:41.395809 master-0 kubenswrapper[8018]: I0217 15:09:41.395756 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:09:41.396532 master-0 kubenswrapper[8018]: E0217 15:09:41.395921 8018 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 17 15:09:41.396532 master-0 kubenswrapper[8018]: E0217 15:09:41.395994 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls podName:6b7d1adb-b23b-4702-be7d-27e818e8fd63 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:45.395980815 +0000 UTC m=+418.148323865 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-hr9g4" (UID: "6b7d1adb-b23b-4702-be7d-27e818e8fd63") : secret "samples-operator-tls" not found Feb 17 15:09:41.431166 master-0 kubenswrapper[8018]: W0217 15:09:41.431100 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda06cfcb_7c78_4022_96b1_d858853f5adc.slice/crio-086a5a64a12e3769988f4ec34ed2d0887c71f02b30e735e84ddbfdf4eb16618d WatchSource:0}: Error finding container 086a5a64a12e3769988f4ec34ed2d0887c71f02b30e735e84ddbfdf4eb16618d: Status 404 returned error can't find the container with id 086a5a64a12e3769988f4ec34ed2d0887c71f02b30e735e84ddbfdf4eb16618d Feb 17 15:09:41.431962 master-0 kubenswrapper[8018]: W0217 15:09:41.431915 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8385a176_0e12_47ef_862e_8331e6734b9c.slice/crio-7a489b2f48772d80be863a6db3f491f779fbf0d6ac9f7d5ba2c4ec793715f4de WatchSource:0}: Error finding container 7a489b2f48772d80be863a6db3f491f779fbf0d6ac9f7d5ba2c4ec793715f4de: Status 404 returned error can't find the container with id 7a489b2f48772d80be863a6db3f491f779fbf0d6ac9f7d5ba2c4ec793715f4de Feb 17 15:09:41.437007 master-0 kubenswrapper[8018]: W0217 15:09:41.436121 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad81b5bd_2f97_4e7e_a12b_746998fa59f2.slice/crio-bef471f18c3a5fc8cbfeb510c0e87f5bef875fc2331927f07cde13d3315509be WatchSource:0}: Error finding container bef471f18c3a5fc8cbfeb510c0e87f5bef875fc2331927f07cde13d3315509be: Status 404 returned error can't find the container with id bef471f18c3a5fc8cbfeb510c0e87f5bef875fc2331927f07cde13d3315509be Feb 17 15:09:41.464810 master-0 
kubenswrapper[8018]: I0217 15:09:41.463593 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25"] Feb 17 15:09:41.464810 master-0 kubenswrapper[8018]: I0217 15:09:41.464384 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:09:41.467333 master-0 kubenswrapper[8018]: I0217 15:09:41.467235 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 15:09:41.467835 master-0 kubenswrapper[8018]: I0217 15:09:41.467795 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-tphvr" Feb 17 15:09:41.476361 master-0 kubenswrapper[8018]: I0217 15:09:41.475900 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25"] Feb 17 15:09:41.497824 master-0 kubenswrapper[8018]: I0217 15:09:41.497750 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-webhook-cert\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:09:41.498074 master-0 kubenswrapper[8018]: I0217 15:09:41.497870 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-apiservice-cert\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:09:41.498074 master-0 kubenswrapper[8018]: I0217 
15:09:41.497916 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b58e9d93-7683-440d-a603-9543e5455490-tmpfs\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:09:41.498074 master-0 kubenswrapper[8018]: I0217 15:09:41.497938 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2d4n\" (UniqueName: \"kubernetes.io/projected/b58e9d93-7683-440d-a603-9543e5455490-kube-api-access-l2d4n\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:09:41.569271 master-0 kubenswrapper[8018]: I0217 15:09:41.569198 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd"] Feb 17 15:09:41.571401 master-0 kubenswrapper[8018]: I0217 15:09:41.571353 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.573223 master-0 kubenswrapper[8018]: I0217 15:09:41.573164 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 17 15:09:41.574289 master-0 kubenswrapper[8018]: I0217 15:09:41.574166 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 15:09:41.574289 master-0 kubenswrapper[8018]: I0217 15:09:41.574198 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dkdg8" Feb 17 15:09:41.574387 master-0 kubenswrapper[8018]: I0217 15:09:41.574307 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 17 15:09:41.574420 master-0 kubenswrapper[8018]: I0217 15:09:41.574408 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:09:41.575242 master-0 kubenswrapper[8018]: I0217 15:09:41.575190 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 17 15:09:41.599739 master-0 kubenswrapper[8018]: I0217 15:09:41.599644 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-webhook-cert\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:09:41.599985 master-0 kubenswrapper[8018]: I0217 15:09:41.599777 8018 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/317bc9db-ab82-4df1-81da-1a091f88acb1-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.599985 master-0 kubenswrapper[8018]: I0217 15:09:41.599811 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/317bc9db-ab82-4df1-81da-1a091f88acb1-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.599985 master-0 kubenswrapper[8018]: I0217 15:09:41.599841 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/317bc9db-ab82-4df1-81da-1a091f88acb1-images\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.599985 master-0 kubenswrapper[8018]: I0217 15:09:41.599866 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-apiservice-cert\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:09:41.599985 master-0 kubenswrapper[8018]: I0217 15:09:41.599899 8018 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b58e9d93-7683-440d-a603-9543e5455490-tmpfs\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:09:41.599985 master-0 kubenswrapper[8018]: I0217 15:09:41.599921 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2d4n\" (UniqueName: \"kubernetes.io/projected/b58e9d93-7683-440d-a603-9543e5455490-kube-api-access-l2d4n\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:09:41.599985 master-0 kubenswrapper[8018]: I0217 15:09:41.599978 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/317bc9db-ab82-4df1-81da-1a091f88acb1-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.600448 master-0 kubenswrapper[8018]: I0217 15:09:41.600008 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgqwz\" (UniqueName: \"kubernetes.io/projected/317bc9db-ab82-4df1-81da-1a091f88acb1-kube-api-access-vgqwz\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.601722 master-0 kubenswrapper[8018]: I0217 15:09:41.601674 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/b58e9d93-7683-440d-a603-9543e5455490-tmpfs\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:09:41.607182 master-0 kubenswrapper[8018]: I0217 15:09:41.607139 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-apiservice-cert\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:09:41.609495 master-0 kubenswrapper[8018]: I0217 15:09:41.609425 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-webhook-cert\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:09:41.620701 master-0 kubenswrapper[8018]: I0217 15:09:41.620610 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2d4n\" (UniqueName: \"kubernetes.io/projected/b58e9d93-7683-440d-a603-9543e5455490-kube-api-access-l2d4n\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:09:41.702073 master-0 kubenswrapper[8018]: I0217 15:09:41.702022 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/317bc9db-ab82-4df1-81da-1a091f88acb1-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.702073 master-0 kubenswrapper[8018]: I0217 15:09:41.702076 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/317bc9db-ab82-4df1-81da-1a091f88acb1-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.702320 master-0 kubenswrapper[8018]: I0217 15:09:41.702101 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/317bc9db-ab82-4df1-81da-1a091f88acb1-images\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.702320 master-0 kubenswrapper[8018]: I0217 15:09:41.702154 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/317bc9db-ab82-4df1-81da-1a091f88acb1-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.702320 master-0 kubenswrapper[8018]: I0217 15:09:41.702178 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgqwz\" (UniqueName: \"kubernetes.io/projected/317bc9db-ab82-4df1-81da-1a091f88acb1-kube-api-access-vgqwz\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: 
\"317bc9db-ab82-4df1-81da-1a091f88acb1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.703892 master-0 kubenswrapper[8018]: I0217 15:09:41.702716 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/317bc9db-ab82-4df1-81da-1a091f88acb1-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.703892 master-0 kubenswrapper[8018]: I0217 15:09:41.702832 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/317bc9db-ab82-4df1-81da-1a091f88acb1-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.703892 master-0 kubenswrapper[8018]: I0217 15:09:41.703205 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/317bc9db-ab82-4df1-81da-1a091f88acb1-images\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.717593 master-0 kubenswrapper[8018]: I0217 15:09:41.706173 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/317bc9db-ab82-4df1-81da-1a091f88acb1-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: 
\"317bc9db-ab82-4df1-81da-1a091f88acb1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.727643 master-0 kubenswrapper[8018]: I0217 15:09:41.726641 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgqwz\" (UniqueName: \"kubernetes.io/projected/317bc9db-ab82-4df1-81da-1a091f88acb1-kube-api-access-vgqwz\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.788859 master-0 kubenswrapper[8018]: I0217 15:09:41.788800 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:09:41.949074 master-0 kubenswrapper[8018]: I0217 15:09:41.949012 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:09:41.974548 master-0 kubenswrapper[8018]: W0217 15:09:41.974429 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod317bc9db_ab82_4df1_81da_1a091f88acb1.slice/crio-39a89d3fb0a0f6c3684ee19a728b5493a8d5502ad64785157e799cf5d06dece2 WatchSource:0}: Error finding container 39a89d3fb0a0f6c3684ee19a728b5493a8d5502ad64785157e799cf5d06dece2: Status 404 returned error can't find the container with id 39a89d3fb0a0f6c3684ee19a728b5493a8d5502ad64785157e799cf5d06dece2 Feb 17 15:09:42.002217 master-0 kubenswrapper[8018]: I0217 15:09:42.001870 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"] Feb 17 15:09:42.008311 master-0 kubenswrapper[8018]: I0217 15:09:42.008260 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx" Feb 17 15:09:42.008548 master-0 kubenswrapper[8018]: E0217 15:09:42.008484 8018 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 17 15:09:42.008632 master-0 kubenswrapper[8018]: E0217 15:09:42.008612 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls podName:f0c5ca70-1706-4858-adcb-b421ba1e422b nodeName:}" failed. No retries permitted until 2026-02-17 15:09:50.008584746 +0000 UTC m=+422.760927796 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls") pod "machine-approver-6c46d95f74-nsmfx" (UID: "f0c5ca70-1706-4858-adcb-b421ba1e422b") : secret "machine-approver-tls" not found Feb 17 15:09:42.009608 master-0 kubenswrapper[8018]: I0217 15:09:42.009575 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:09:42.017405 master-0 kubenswrapper[8018]: I0217 15:09:42.017076 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 15:09:42.018336 master-0 kubenswrapper[8018]: I0217 15:09:42.018289 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 15:09:42.018688 master-0 kubenswrapper[8018]: I0217 15:09:42.018563 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-t9g75" Feb 17 15:09:42.018982 master-0 kubenswrapper[8018]: I0217 15:09:42.018780 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 15:09:42.022628 master-0 kubenswrapper[8018]: I0217 15:09:42.020591 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"] Feb 17 15:09:42.147550 master-0 kubenswrapper[8018]: I0217 15:09:42.108922 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:09:42.147550 master-0 
kubenswrapper[8018]: I0217 15:09:42.109017 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-images\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:09:42.147550 master-0 kubenswrapper[8018]: I0217 15:09:42.109050 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf69t\" (UniqueName: \"kubernetes.io/projected/655e4000-0ad4-4349-8c31-e0c952e4be30-kube-api-access-qf69t\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:09:42.147550 master-0 kubenswrapper[8018]: I0217 15:09:42.109110 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-config\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:09:42.170344 master-0 kubenswrapper[8018]: I0217 15:09:42.169922 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" event={"ID":"7307f70e-ee5b-4f81-8155-718a02c9efe7","Type":"ContainerStarted","Data":"589a9baac25edbe970df19405e4a8389807662e75268c85829dcdb27fddff9d5"} Feb 17 15:09:42.170344 master-0 kubenswrapper[8018]: I0217 15:09:42.170072 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" 
event={"ID":"7307f70e-ee5b-4f81-8155-718a02c9efe7","Type":"ContainerStarted","Data":"6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099"}
Feb 17 15:09:42.173176 master-0 kubenswrapper[8018]: I0217 15:09:42.173120 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95" event={"ID":"da06cfcb-7c78-4022-96b1-d858853f5adc","Type":"ContainerStarted","Data":"8134b130259326e7351c74de60e5bed58362f1d72cd7ba015e97f22eb8495ac4"}
Feb 17 15:09:42.173261 master-0 kubenswrapper[8018]: I0217 15:09:42.173239 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95" event={"ID":"da06cfcb-7c78-4022-96b1-d858853f5adc","Type":"ContainerStarted","Data":"d6df48814b566ca92cfa0739d561cf9daa945b55707b972a933430e336c6c185"}
Feb 17 15:09:42.173307 master-0 kubenswrapper[8018]: I0217 15:09:42.173264 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95" event={"ID":"da06cfcb-7c78-4022-96b1-d858853f5adc","Type":"ContainerStarted","Data":"086a5a64a12e3769988f4ec34ed2d0887c71f02b30e735e84ddbfdf4eb16618d"}
Feb 17 15:09:42.177438 master-0 kubenswrapper[8018]: I0217 15:09:42.177405 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5" event={"ID":"ad81b5bd-2f97-4e7e-a12b-746998fa59f2","Type":"ContainerStarted","Data":"bef471f18c3a5fc8cbfeb510c0e87f5bef875fc2331927f07cde13d3315509be"}
Feb 17 15:09:42.179681 master-0 kubenswrapper[8018]: I0217 15:09:42.179645 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" event={"ID":"317bc9db-ab82-4df1-81da-1a091f88acb1","Type":"ContainerStarted","Data":"39a89d3fb0a0f6c3684ee19a728b5493a8d5502ad64785157e799cf5d06dece2"}
Feb 17 15:09:42.182603 master-0 kubenswrapper[8018]: I0217 15:09:42.181533 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq" event={"ID":"8385a176-0e12-47ef-862e-8331e6734b9c","Type":"ContainerStarted","Data":"7a489b2f48772d80be863a6db3f491f779fbf0d6ac9f7d5ba2c4ec793715f4de"}
Feb 17 15:09:42.201247 master-0 kubenswrapper[8018]: I0217 15:09:42.201128 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" podStartSLOduration=2.173375187 podStartE2EDuration="4.201099791s" podCreationTimestamp="2026-02-17 15:09:38 +0000 UTC" firstStartedPulling="2026-02-17 15:09:39.488321019 +0000 UTC m=+412.240664079" lastFinishedPulling="2026-02-17 15:09:41.516045633 +0000 UTC m=+414.268388683" observedRunningTime="2026-02-17 15:09:42.199019651 +0000 UTC m=+414.951362731" watchObservedRunningTime="2026-02-17 15:09:42.201099791 +0000 UTC m=+414.953442841"
Feb 17 15:09:42.213864 master-0 kubenswrapper[8018]: I0217 15:09:42.213767 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:09:42.214361 master-0 kubenswrapper[8018]: I0217 15:09:42.213956 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-images\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:09:42.214361 master-0 kubenswrapper[8018]: I0217 15:09:42.214022 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf69t\" (UniqueName: \"kubernetes.io/projected/655e4000-0ad4-4349-8c31-e0c952e4be30-kube-api-access-qf69t\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:09:42.214361 master-0 kubenswrapper[8018]: I0217 15:09:42.214237 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-config\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:09:42.218860 master-0 kubenswrapper[8018]: I0217 15:09:42.215773 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-config\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:09:42.218860 master-0 kubenswrapper[8018]: E0217 15:09:42.215874 8018 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found
Feb 17 15:09:42.218860 master-0 kubenswrapper[8018]: E0217 15:09:42.215928 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls podName:655e4000-0ad4-4349-8c31-e0c952e4be30 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:42.715914095 +0000 UTC m=+415.468257145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-g6fgz" (UID: "655e4000-0ad4-4349-8c31-e0c952e4be30") : secret "machine-api-operator-tls" not found
Feb 17 15:09:42.219351 master-0 kubenswrapper[8018]: I0217 15:09:42.219317 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-images\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:09:42.224057 master-0 kubenswrapper[8018]: I0217 15:09:42.224021 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25"]
Feb 17 15:09:42.230224 master-0 kubenswrapper[8018]: I0217 15:09:42.230093 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95" podStartSLOduration=2.230062243 podStartE2EDuration="2.230062243s" podCreationTimestamp="2026-02-17 15:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:09:42.220836466 +0000 UTC m=+414.973179526" watchObservedRunningTime="2026-02-17 15:09:42.230062243 +0000 UTC m=+414.982405313"
Feb 17 15:09:42.231502 master-0 kubenswrapper[8018]: W0217 15:09:42.231358 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb58e9d93_7683_440d_a603_9543e5455490.slice/crio-564e010b4acb371ea5e896019bc8692ecf42f40acab59fc53fd175dccbfd8d9f WatchSource:0}: Error finding container 564e010b4acb371ea5e896019bc8692ecf42f40acab59fc53fd175dccbfd8d9f: Status 404 returned error can't find the container with id 564e010b4acb371ea5e896019bc8692ecf42f40acab59fc53fd175dccbfd8d9f
Feb 17 15:09:42.246424 master-0 kubenswrapper[8018]: I0217 15:09:42.246394 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf69t\" (UniqueName: \"kubernetes.io/projected/655e4000-0ad4-4349-8c31-e0c952e4be30-kube-api-access-qf69t\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:09:42.720877 master-0 kubenswrapper[8018]: I0217 15:09:42.720352 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:09:42.720877 master-0 kubenswrapper[8018]: E0217 15:09:42.720530 8018 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found
Feb 17 15:09:42.720877 master-0 kubenswrapper[8018]: E0217 15:09:42.720587 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls podName:655e4000-0ad4-4349-8c31-e0c952e4be30 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:43.720570046 +0000 UTC m=+416.472913096 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-g6fgz" (UID: "655e4000-0ad4-4349-8c31-e0c952e4be30") : secret "machine-api-operator-tls" not found
Feb 17 15:09:42.922935 master-0 kubenswrapper[8018]: I0217 15:09:42.922407 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:09:42.922935 master-0 kubenswrapper[8018]: E0217 15:09:42.922567 8018 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Feb 17 15:09:42.922935 master-0 kubenswrapper[8018]: E0217 15:09:42.922614 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert podName:c8646e5c-c2ce-48e6-b757-58044769f479 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:46.922599177 +0000 UTC m=+419.674942217 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert") pod "cluster-autoscaler-operator-67fd9768b5-6dzpr" (UID: "c8646e5c-c2ce-48e6-b757-58044769f479") : secret "cluster-autoscaler-operator-cert" not found
Feb 17 15:09:43.194828 master-0 kubenswrapper[8018]: I0217 15:09:43.194721 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" event={"ID":"b58e9d93-7683-440d-a603-9543e5455490","Type":"ContainerStarted","Data":"34652e86179121b35f6e8007b7f018ae32f8a976bd2da02f004cedf4c5b0c19b"}
Feb 17 15:09:43.195020 master-0 kubenswrapper[8018]: I0217 15:09:43.194903 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" event={"ID":"b58e9d93-7683-440d-a603-9543e5455490","Type":"ContainerStarted","Data":"564e010b4acb371ea5e896019bc8692ecf42f40acab59fc53fd175dccbfd8d9f"}
Feb 17 15:09:43.196134 master-0 kubenswrapper[8018]: I0217 15:09:43.196084 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25"
Feb 17 15:09:43.201225 master-0 kubenswrapper[8018]: I0217 15:09:43.201185 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25"
Feb 17 15:09:43.217332 master-0 kubenswrapper[8018]: I0217 15:09:43.217212 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" podStartSLOduration=2.2171899489999998 podStartE2EDuration="2.217189949s" podCreationTimestamp="2026-02-17 15:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:09:43.214579625 +0000 UTC m=+415.966922695" watchObservedRunningTime="2026-02-17 15:09:43.217189949 +0000 UTC m=+415.969532999"
Feb 17 15:09:43.732389 master-0 kubenswrapper[8018]: I0217 15:09:43.732299 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:09:43.732901 master-0 kubenswrapper[8018]: E0217 15:09:43.732583 8018 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found
Feb 17 15:09:43.732901 master-0 kubenswrapper[8018]: E0217 15:09:43.732686 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls podName:655e4000-0ad4-4349-8c31-e0c952e4be30 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:45.732664585 +0000 UTC m=+418.485007645 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-g6fgz" (UID: "655e4000-0ad4-4349-8c31-e0c952e4be30") : secret "machine-api-operator-tls" not found
Feb 17 15:09:44.137583 master-0 kubenswrapper[8018]: I0217 15:09:44.137520 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc"
Feb 17 15:09:44.137863 master-0 kubenswrapper[8018]: E0217 15:09:44.137729 8018 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found
Feb 17 15:09:44.137863 master-0 kubenswrapper[8018]: E0217 15:09:44.137857 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert podName:c97d328c-95b6-4511-aa90-531ab42b9653 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:52.137828762 +0000 UTC m=+424.890171822 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-p8hbc" (UID: "c97d328c-95b6-4511-aa90-531ab42b9653") : secret "cloud-credential-operator-serving-cert" not found
Feb 17 15:09:44.201404 master-0 kubenswrapper[8018]: I0217 15:09:44.200486 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq" event={"ID":"8385a176-0e12-47ef-862e-8331e6734b9c","Type":"ContainerStarted","Data":"4adf8d0f12db14b67c44e524b550b78d1fa8f334eecf810d58480ad559d615cc"}
Feb 17 15:09:44.227711 master-0 kubenswrapper[8018]: I0217 15:09:44.227637 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq" podStartSLOduration=1.7737414999999999 podStartE2EDuration="4.227619417s" podCreationTimestamp="2026-02-17 15:09:40 +0000 UTC" firstStartedPulling="2026-02-17 15:09:41.436155091 +0000 UTC m=+414.188498141" lastFinishedPulling="2026-02-17 15:09:43.890033018 +0000 UTC m=+416.642376058" observedRunningTime="2026-02-17 15:09:44.227260157 +0000 UTC m=+416.979603207" watchObservedRunningTime="2026-02-17 15:09:44.227619417 +0000 UTC m=+416.979962467"
Feb 17 15:09:45.204069 master-0 kubenswrapper[8018]: I0217 15:09:45.203985 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-r6sfp"]
Feb 17 15:09:45.204999 master-0 kubenswrapper[8018]: I0217 15:09:45.204803 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:09:45.209035 master-0 kubenswrapper[8018]: I0217 15:09:45.208665 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-6c645"
Feb 17 15:09:45.212012 master-0 kubenswrapper[8018]: I0217 15:09:45.211434 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 17 15:09:45.269614 master-0 kubenswrapper[8018]: I0217 15:09:45.269540 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2102e834-2b36-49de-a99e-c2dbe64d722f-rootfs\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:09:45.269614 master-0 kubenswrapper[8018]: I0217 15:09:45.269614 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2102e834-2b36-49de-a99e-c2dbe64d722f-proxy-tls\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:09:45.270071 master-0 kubenswrapper[8018]: I0217 15:09:45.270011 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq2mb\" (UniqueName: \"kubernetes.io/projected/2102e834-2b36-49de-a99e-c2dbe64d722f-kube-api-access-hq2mb\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:09:45.270793 master-0 kubenswrapper[8018]: I0217 15:09:45.270471 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2102e834-2b36-49de-a99e-c2dbe64d722f-mcd-auth-proxy-config\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:09:45.374717 master-0 kubenswrapper[8018]: I0217 15:09:45.372979 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2102e834-2b36-49de-a99e-c2dbe64d722f-mcd-auth-proxy-config\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:09:45.374717 master-0 kubenswrapper[8018]: I0217 15:09:45.373068 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2102e834-2b36-49de-a99e-c2dbe64d722f-rootfs\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:09:45.374717 master-0 kubenswrapper[8018]: I0217 15:09:45.373101 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2102e834-2b36-49de-a99e-c2dbe64d722f-proxy-tls\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:09:45.374717 master-0 kubenswrapper[8018]: I0217 15:09:45.373150 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq2mb\" (UniqueName: \"kubernetes.io/projected/2102e834-2b36-49de-a99e-c2dbe64d722f-kube-api-access-hq2mb\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:09:45.374717 master-0 kubenswrapper[8018]: I0217 15:09:45.374394 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2102e834-2b36-49de-a99e-c2dbe64d722f-mcd-auth-proxy-config\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:09:45.374717 master-0 kubenswrapper[8018]: I0217 15:09:45.374444 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2102e834-2b36-49de-a99e-c2dbe64d722f-rootfs\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:09:45.386180 master-0 kubenswrapper[8018]: I0217 15:09:45.385162 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2102e834-2b36-49de-a99e-c2dbe64d722f-proxy-tls\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:09:45.395904 master-0 kubenswrapper[8018]: I0217 15:09:45.393528 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq2mb\" (UniqueName: \"kubernetes.io/projected/2102e834-2b36-49de-a99e-c2dbe64d722f-kube-api-access-hq2mb\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:09:45.476562 master-0 kubenswrapper[8018]: I0217 15:09:45.474821 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4"
Feb 17 15:09:45.476562 master-0 kubenswrapper[8018]: E0217 15:09:45.475283 8018 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Feb 17 15:09:45.480833 master-0 kubenswrapper[8018]: E0217 15:09:45.476589 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls podName:6b7d1adb-b23b-4702-be7d-27e818e8fd63 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:53.47654542 +0000 UTC m=+426.228888470 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-hr9g4" (UID: "6b7d1adb-b23b-4702-be7d-27e818e8fd63") : secret "samples-operator-tls" not found
Feb 17 15:09:45.524055 master-0 kubenswrapper[8018]: I0217 15:09:45.523985 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:09:45.781887 master-0 kubenswrapper[8018]: I0217 15:09:45.781760 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:09:45.782060 master-0 kubenswrapper[8018]: E0217 15:09:45.781954 8018 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found
Feb 17 15:09:45.782060 master-0 kubenswrapper[8018]: E0217 15:09:45.782039 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls podName:655e4000-0ad4-4349-8c31-e0c952e4be30 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:49.78201907 +0000 UTC m=+422.534362110 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-g6fgz" (UID: "655e4000-0ad4-4349-8c31-e0c952e4be30") : secret "machine-api-operator-tls" not found
Feb 17 15:09:47.011308 master-0 kubenswrapper[8018]: I0217 15:09:47.011204 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:09:47.012339 master-0 kubenswrapper[8018]: E0217 15:09:47.011440 8018 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Feb 17 15:09:47.012339 master-0 kubenswrapper[8018]: E0217 15:09:47.011560 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert podName:c8646e5c-c2ce-48e6-b757-58044769f479 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:55.011535377 +0000 UTC m=+427.763878437 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert") pod "cluster-autoscaler-operator-67fd9768b5-6dzpr" (UID: "c8646e5c-c2ce-48e6-b757-58044769f479") : secret "cluster-autoscaler-operator-cert" not found
Feb 17 15:09:47.616635 master-0 kubenswrapper[8018]: W0217 15:09:47.616554 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2102e834_2b36_49de_a99e_c2dbe64d722f.slice/crio-c066e0aa98f24b311ae58142339472cef6d647c5cb0ec12d82196966a66f6bc2 WatchSource:0}: Error finding container c066e0aa98f24b311ae58142339472cef6d647c5cb0ec12d82196966a66f6bc2: Status 404 returned error can't find the container with id c066e0aa98f24b311ae58142339472cef6d647c5cb0ec12d82196966a66f6bc2
Feb 17 15:09:48.251541 master-0 kubenswrapper[8018]: I0217 15:09:48.251480 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r6sfp" event={"ID":"2102e834-2b36-49de-a99e-c2dbe64d722f","Type":"ContainerStarted","Data":"ff7893f4659c11b793a1cc6f6978dad20b5b640428412b9d0cb2b925171451e2"}
Feb 17 15:09:48.252042 master-0 kubenswrapper[8018]: I0217 15:09:48.251555 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r6sfp" event={"ID":"2102e834-2b36-49de-a99e-c2dbe64d722f","Type":"ContainerStarted","Data":"e5963d9c2c83243ba2ad019f306ec4a5ac2720a57a33853c3687d6644199ed3f"}
Feb 17 15:09:48.252042 master-0 kubenswrapper[8018]: I0217 15:09:48.251571 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r6sfp" event={"ID":"2102e834-2b36-49de-a99e-c2dbe64d722f","Type":"ContainerStarted","Data":"c066e0aa98f24b311ae58142339472cef6d647c5cb0ec12d82196966a66f6bc2"}
Feb 17 15:09:48.254105 master-0 kubenswrapper[8018]: I0217 15:09:48.254055 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" event={"ID":"317bc9db-ab82-4df1-81da-1a091f88acb1","Type":"ContainerStarted","Data":"be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082"}
Feb 17 15:09:48.256259 master-0 kubenswrapper[8018]: I0217 15:09:48.256200 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5" event={"ID":"ad81b5bd-2f97-4e7e-a12b-746998fa59f2","Type":"ContainerStarted","Data":"1ac9a237c052e7fcf84aea4376a51f8bc274e44722f869b5fc32cf99dd2e4eac"}
Feb 17 15:09:48.275664 master-0 kubenswrapper[8018]: I0217 15:09:48.275598 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-r6sfp" podStartSLOduration=3.2755805909999998 podStartE2EDuration="3.275580591s" podCreationTimestamp="2026-02-17 15:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:09:48.274104385 +0000 UTC m=+421.026447435" watchObservedRunningTime="2026-02-17 15:09:48.275580591 +0000 UTC m=+421.027923651"
Feb 17 15:09:48.311345 master-0 kubenswrapper[8018]: I0217 15:09:48.310850 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5" podStartSLOduration=2.142463612 podStartE2EDuration="8.310826816s" podCreationTimestamp="2026-02-17 15:09:40 +0000 UTC" firstStartedPulling="2026-02-17 15:09:41.438008827 +0000 UTC m=+414.190351877" lastFinishedPulling="2026-02-17 15:09:47.606372011 +0000 UTC m=+420.358715081" observedRunningTime="2026-02-17 15:09:48.308044678 +0000 UTC m=+421.060387728" watchObservedRunningTime="2026-02-17 15:09:48.310826816 +0000 UTC m=+421.063169876"
Feb 17 15:09:48.581315 master-0 kubenswrapper[8018]: I0217 15:09:48.581251 8018 scope.go:117] "RemoveContainer" containerID="60a357860a4bf6848914cb16ba4e2389f439f69e27bc7ca67dd28f0f1be9934b"
Feb 17 15:09:49.268441 master-0 kubenswrapper[8018]: I0217 15:09:49.268362 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd_317bc9db-ab82-4df1-81da-1a091f88acb1/kube-rbac-proxy/0.log"
Feb 17 15:09:49.270211 master-0 kubenswrapper[8018]: I0217 15:09:49.270088 8018 generic.go:334] "Generic (PLEG): container finished" podID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerID="b33c49c6d171a6bfe1815b6e28253462b31734d4dc0f49959ba10c48fc0143f0" exitCode=1
Feb 17 15:09:49.270533 master-0 kubenswrapper[8018]: I0217 15:09:49.270446 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" event={"ID":"317bc9db-ab82-4df1-81da-1a091f88acb1","Type":"ContainerDied","Data":"b33c49c6d171a6bfe1815b6e28253462b31734d4dc0f49959ba10c48fc0143f0"}
Feb 17 15:09:49.270721 master-0 kubenswrapper[8018]: I0217 15:09:49.270694 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" event={"ID":"317bc9db-ab82-4df1-81da-1a091f88acb1","Type":"ContainerStarted","Data":"eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195"}
Feb 17 15:09:49.271207 master-0 kubenswrapper[8018]: I0217 15:09:49.271134 8018 scope.go:117] "RemoveContainer" containerID="b33c49c6d171a6bfe1815b6e28253462b31734d4dc0f49959ba10c48fc0143f0"
Feb 17 15:09:49.860572 master-0 kubenswrapper[8018]: I0217 15:09:49.860386 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:09:49.860820 master-0 kubenswrapper[8018]: E0217 15:09:49.860719 8018 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found
Feb 17 15:09:49.860901 master-0 kubenswrapper[8018]: E0217 15:09:49.860873 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls podName:655e4000-0ad4-4349-8c31-e0c952e4be30 nodeName:}" failed. No retries permitted until 2026-02-17 15:09:57.860824251 +0000 UTC m=+430.613167321 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-g6fgz" (UID: "655e4000-0ad4-4349-8c31-e0c952e4be30") : secret "machine-api-operator-tls" not found
Feb 17 15:09:50.063655 master-0 kubenswrapper[8018]: I0217 15:09:50.063552 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls\") pod \"machine-approver-6c46d95f74-nsmfx\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx"
Feb 17 15:09:50.064047 master-0 kubenswrapper[8018]: E0217 15:09:50.063993 8018 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found
Feb 17 15:09:50.064175 master-0 kubenswrapper[8018]: E0217 15:09:50.064117 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls podName:f0c5ca70-1706-4858-adcb-b421ba1e422b nodeName:}" failed. No retries permitted until 2026-02-17 15:10:06.064090182 +0000 UTC m=+438.816433272 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls") pod "machine-approver-6c46d95f74-nsmfx" (UID: "f0c5ca70-1706-4858-adcb-b421ba1e422b") : secret "machine-approver-tls" not found
Feb 17 15:09:50.279793 master-0 kubenswrapper[8018]: I0217 15:09:50.279712 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd_317bc9db-ab82-4df1-81da-1a091f88acb1/kube-rbac-proxy/1.log"
Feb 17 15:09:50.280873 master-0 kubenswrapper[8018]: I0217 15:09:50.280812 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd_317bc9db-ab82-4df1-81da-1a091f88acb1/kube-rbac-proxy/0.log"
Feb 17 15:09:50.281948 master-0 kubenswrapper[8018]: I0217 15:09:50.281881 8018 generic.go:334] "Generic (PLEG): container finished" podID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerID="0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921" exitCode=1
Feb 17 15:09:50.282062 master-0 kubenswrapper[8018]: I0217 15:09:50.281950 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" event={"ID":"317bc9db-ab82-4df1-81da-1a091f88acb1","Type":"ContainerDied","Data":"0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921"}
Feb 17 15:09:50.282062 master-0 kubenswrapper[8018]: I0217 15:09:50.282022 8018 scope.go:117] "RemoveContainer" containerID="b33c49c6d171a6bfe1815b6e28253462b31734d4dc0f49959ba10c48fc0143f0"
Feb 17 15:09:50.282751 master-0 kubenswrapper[8018]: I0217 15:09:50.282677 8018 scope.go:117] "RemoveContainer" containerID="0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921"
Feb 17 15:09:50.283058 master-0 kubenswrapper[8018]: E0217 15:09:50.283008 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd_openshift-cloud-controller-manager-operator(317bc9db-ab82-4df1-81da-1a091f88acb1)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1"
Feb 17 15:09:51.291813 master-0 kubenswrapper[8018]: I0217 15:09:51.291731 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd_317bc9db-ab82-4df1-81da-1a091f88acb1/kube-rbac-proxy/1.log"
Feb 17 15:09:51.293326 master-0 kubenswrapper[8018]: I0217 15:09:51.293277 8018 scope.go:117] "RemoveContainer" containerID="0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921"
Feb 17 15:09:51.293604 master-0 kubenswrapper[8018]: E0217 15:09:51.293544 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd_openshift-cloud-controller-manager-operator(317bc9db-ab82-4df1-81da-1a091f88acb1)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1"
Feb 17 15:09:51.460653 master-0 kubenswrapper[8018]: I0217 15:09:51.460575 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f"]
Feb 17 15:09:51.462423 master-0 kubenswrapper[8018]: I0217 15:09:51.462352 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f"
Feb 17 15:09:51.465264 master-0 kubenswrapper[8018]: I0217 15:09:51.465181 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-mv24c"
Feb 17 15:09:51.465549 master-0 kubenswrapper[8018]: I0217 15:09:51.465452 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 17 15:09:51.486011 master-0 kubenswrapper[8018]: I0217 15:09:51.485040 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f"]
Feb 17 15:09:51.495611 master-0 kubenswrapper[8018]: I0217 15:09:51.493863 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba1306f7-029b-4d43-ba3c-5738da9148d6-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f"
Feb 17 15:09:51.495611 master-0 kubenswrapper[8018]: I0217 15:09:51.494001 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pn82\" (UniqueName: \"kubernetes.io/projected/ba1306f7-029b-4d43-ba3c-5738da9148d6-kube-api-access-7pn82\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f"
Feb 17 15:09:51.495611 master-0 kubenswrapper[8018]: I0217 15:09:51.494108 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName:
\"kubernetes.io/secret/ba1306f7-029b-4d43-ba3c-5738da9148d6-proxy-tls\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" Feb 17 15:09:51.595229 master-0 kubenswrapper[8018]: I0217 15:09:51.595099 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba1306f7-029b-4d43-ba3c-5738da9148d6-proxy-tls\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" Feb 17 15:09:51.595620 master-0 kubenswrapper[8018]: I0217 15:09:51.595588 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba1306f7-029b-4d43-ba3c-5738da9148d6-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" Feb 17 15:09:51.595845 master-0 kubenswrapper[8018]: I0217 15:09:51.595807 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pn82\" (UniqueName: \"kubernetes.io/projected/ba1306f7-029b-4d43-ba3c-5738da9148d6-kube-api-access-7pn82\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" Feb 17 15:09:51.597013 master-0 kubenswrapper[8018]: I0217 15:09:51.596956 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba1306f7-029b-4d43-ba3c-5738da9148d6-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: 
\"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" Feb 17 15:09:51.601285 master-0 kubenswrapper[8018]: I0217 15:09:51.601163 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba1306f7-029b-4d43-ba3c-5738da9148d6-proxy-tls\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" Feb 17 15:09:51.616072 master-0 kubenswrapper[8018]: I0217 15:09:51.615998 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pn82\" (UniqueName: \"kubernetes.io/projected/ba1306f7-029b-4d43-ba3c-5738da9148d6-kube-api-access-7pn82\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" Feb 17 15:09:51.805551 master-0 kubenswrapper[8018]: I0217 15:09:51.804865 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" Feb 17 15:09:52.206148 master-0 kubenswrapper[8018]: I0217 15:09:52.205929 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:09:52.206148 master-0 kubenswrapper[8018]: E0217 15:09:52.206136 8018 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 17 15:09:52.206528 master-0 kubenswrapper[8018]: E0217 15:09:52.206242 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert podName:c97d328c-95b6-4511-aa90-531ab42b9653 nodeName:}" failed. No retries permitted until 2026-02-17 15:10:08.206215395 +0000 UTC m=+440.958558485 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-p8hbc" (UID: "c97d328c-95b6-4511-aa90-531ab42b9653") : secret "cloud-credential-operator-serving-cert" not found Feb 17 15:09:52.264674 master-0 kubenswrapper[8018]: I0217 15:09:52.264606 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f"] Feb 17 15:09:52.277731 master-0 kubenswrapper[8018]: W0217 15:09:52.277676 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba1306f7_029b_4d43_ba3c_5738da9148d6.slice/crio-a00011bbe3917f68bb68f28876dff59eea7dbd62d26bc18f5f5ed40cb1d0b447 WatchSource:0}: Error finding container a00011bbe3917f68bb68f28876dff59eea7dbd62d26bc18f5f5ed40cb1d0b447: Status 404 returned error can't find the container with id a00011bbe3917f68bb68f28876dff59eea7dbd62d26bc18f5f5ed40cb1d0b447 Feb 17 15:09:52.300558 master-0 kubenswrapper[8018]: I0217 15:09:52.300505 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" event={"ID":"ba1306f7-029b-4d43-ba3c-5738da9148d6","Type":"ContainerStarted","Data":"a00011bbe3917f68bb68f28876dff59eea7dbd62d26bc18f5f5ed40cb1d0b447"} Feb 17 15:09:52.649317 master-0 kubenswrapper[8018]: I0217 15:09:52.649246 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-864ddd5f56-g8w2f"] Feb 17 15:09:52.650154 master-0 kubenswrapper[8018]: I0217 15:09:52.650123 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.652096 master-0 kubenswrapper[8018]: I0217 15:09:52.652034 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 15:09:52.652886 master-0 kubenswrapper[8018]: I0217 15:09:52.652830 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 15:09:52.653237 master-0 kubenswrapper[8018]: I0217 15:09:52.653192 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 15:09:52.653550 master-0 kubenswrapper[8018]: I0217 15:09:52.653493 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 15:09:52.654122 master-0 kubenswrapper[8018]: I0217 15:09:52.654083 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 15:09:52.654439 master-0 kubenswrapper[8018]: I0217 15:09:52.654395 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 15:09:52.657267 master-0 kubenswrapper[8018]: I0217 15:09:52.657207 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs"] Feb 17 15:09:52.658785 master-0 kubenswrapper[8018]: I0217 15:09:52.658737 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs" Feb 17 15:09:52.661098 master-0 kubenswrapper[8018]: I0217 15:09:52.661056 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 17 15:09:52.662940 master-0 kubenswrapper[8018]: I0217 15:09:52.662899 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7"] Feb 17 15:09:52.663873 master-0 kubenswrapper[8018]: I0217 15:09:52.663837 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7" Feb 17 15:09:52.672720 master-0 kubenswrapper[8018]: I0217 15:09:52.672589 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h"] Feb 17 15:09:52.673490 master-0 kubenswrapper[8018]: I0217 15:09:52.673416 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" Feb 17 15:09:52.676763 master-0 kubenswrapper[8018]: I0217 15:09:52.676718 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 15:09:52.677496 master-0 kubenswrapper[8018]: I0217 15:09:52.677451 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7"] Feb 17 15:09:52.683730 master-0 kubenswrapper[8018]: I0217 15:09:52.683666 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs"] Feb 17 15:09:52.687678 master-0 kubenswrapper[8018]: I0217 15:09:52.686063 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h"] Feb 17 15:09:52.713688 master-0 kubenswrapper[8018]: I0217 15:09:52.713635 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-metrics-certs\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.713787 master-0 kubenswrapper[8018]: I0217 15:09:52.713753 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkb9r\" (UniqueName: \"kubernetes.io/projected/d973c9bc-8097-489c-9b8b-70b775177c41-kube-api-access-gkb9r\") pod \"network-check-source-7d8f4c8c66-fc8n7\" (UID: \"d973c9bc-8097-489c-9b8b-70b775177c41\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7" Feb 17 15:09:52.713787 master-0 kubenswrapper[8018]: I0217 15:09:52.713778 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a162205-f111-49b4-9f46-0b40b6184336-secret-volume\") pod \"collect-profiles-29522340-8cp6h\" (UID: \"2a162205-f111-49b4-9f46-0b40b6184336\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" Feb 17 15:09:52.713870 master-0 kubenswrapper[8018]: I0217 15:09:52.713795 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-stats-auth\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.713870 master-0 kubenswrapper[8018]: I0217 15:09:52.713836 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-default-certificate\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.713870 master-0 kubenswrapper[8018]: I0217 15:09:52.713859 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2d6e329-7ad8-4fc2-accc-66827f11743d-service-ca-bundle\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.713983 master-0 kubenswrapper[8018]: I0217 15:09:52.713877 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a162205-f111-49b4-9f46-0b40b6184336-config-volume\") pod \"collect-profiles-29522340-8cp6h\" (UID: \"2a162205-f111-49b4-9f46-0b40b6184336\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" Feb 17 15:09:52.713983 master-0 kubenswrapper[8018]: I0217 15:09:52.713916 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d075439c-721d-432b-b4f9-9f078132bf92-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-nm8rs\" (UID: \"d075439c-721d-432b-b4f9-9f078132bf92\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs" Feb 17 15:09:52.713983 master-0 kubenswrapper[8018]: I0217 15:09:52.713935 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h6b7\" (UniqueName: \"kubernetes.io/projected/2a162205-f111-49b4-9f46-0b40b6184336-kube-api-access-5h6b7\") pod \"collect-profiles-29522340-8cp6h\" (UID: \"2a162205-f111-49b4-9f46-0b40b6184336\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" Feb 17 15:09:52.714073 master-0 kubenswrapper[8018]: I0217 15:09:52.714008 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q8jf\" (UniqueName: \"kubernetes.io/projected/a2d6e329-7ad8-4fc2-accc-66827f11743d-kube-api-access-8q8jf\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.815396 master-0 kubenswrapper[8018]: I0217 15:09:52.815343 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d075439c-721d-432b-b4f9-9f078132bf92-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-nm8rs\" (UID: \"d075439c-721d-432b-b4f9-9f078132bf92\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs" Feb 17 15:09:52.815396 master-0 kubenswrapper[8018]: I0217 
15:09:52.815389 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h6b7\" (UniqueName: \"kubernetes.io/projected/2a162205-f111-49b4-9f46-0b40b6184336-kube-api-access-5h6b7\") pod \"collect-profiles-29522340-8cp6h\" (UID: \"2a162205-f111-49b4-9f46-0b40b6184336\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" Feb 17 15:09:52.815790 master-0 kubenswrapper[8018]: I0217 15:09:52.815511 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q8jf\" (UniqueName: \"kubernetes.io/projected/a2d6e329-7ad8-4fc2-accc-66827f11743d-kube-api-access-8q8jf\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.815790 master-0 kubenswrapper[8018]: I0217 15:09:52.815672 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-metrics-certs\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.815967 master-0 kubenswrapper[8018]: I0217 15:09:52.815937 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a162205-f111-49b4-9f46-0b40b6184336-secret-volume\") pod \"collect-profiles-29522340-8cp6h\" (UID: \"2a162205-f111-49b4-9f46-0b40b6184336\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" Feb 17 15:09:52.816070 master-0 kubenswrapper[8018]: I0217 15:09:52.815970 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkb9r\" (UniqueName: \"kubernetes.io/projected/d973c9bc-8097-489c-9b8b-70b775177c41-kube-api-access-gkb9r\") pod \"network-check-source-7d8f4c8c66-fc8n7\" (UID: 
\"d973c9bc-8097-489c-9b8b-70b775177c41\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7" Feb 17 15:09:52.816070 master-0 kubenswrapper[8018]: I0217 15:09:52.815994 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-stats-auth\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.816349 master-0 kubenswrapper[8018]: I0217 15:09:52.816321 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-default-certificate\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.816478 master-0 kubenswrapper[8018]: I0217 15:09:52.816361 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2d6e329-7ad8-4fc2-accc-66827f11743d-service-ca-bundle\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.816478 master-0 kubenswrapper[8018]: I0217 15:09:52.816387 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a162205-f111-49b4-9f46-0b40b6184336-config-volume\") pod \"collect-profiles-29522340-8cp6h\" (UID: \"2a162205-f111-49b4-9f46-0b40b6184336\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" Feb 17 15:09:52.817163 master-0 kubenswrapper[8018]: I0217 15:09:52.817123 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2a162205-f111-49b4-9f46-0b40b6184336-config-volume\") pod \"collect-profiles-29522340-8cp6h\" (UID: \"2a162205-f111-49b4-9f46-0b40b6184336\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" Feb 17 15:09:52.817351 master-0 kubenswrapper[8018]: I0217 15:09:52.817315 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2d6e329-7ad8-4fc2-accc-66827f11743d-service-ca-bundle\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.819413 master-0 kubenswrapper[8018]: I0217 15:09:52.819365 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-default-certificate\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.819702 master-0 kubenswrapper[8018]: I0217 15:09:52.819663 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d075439c-721d-432b-b4f9-9f078132bf92-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-nm8rs\" (UID: \"d075439c-721d-432b-b4f9-9f078132bf92\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs" Feb 17 15:09:52.820032 master-0 kubenswrapper[8018]: I0217 15:09:52.819997 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-stats-auth\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.823552 master-0 kubenswrapper[8018]: I0217 15:09:52.823446 
8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-metrics-certs\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.829010 master-0 kubenswrapper[8018]: I0217 15:09:52.828074 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a162205-f111-49b4-9f46-0b40b6184336-secret-volume\") pod \"collect-profiles-29522340-8cp6h\" (UID: \"2a162205-f111-49b4-9f46-0b40b6184336\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" Feb 17 15:09:52.831480 master-0 kubenswrapper[8018]: I0217 15:09:52.831439 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q8jf\" (UniqueName: \"kubernetes.io/projected/a2d6e329-7ad8-4fc2-accc-66827f11743d-kube-api-access-8q8jf\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:52.833392 master-0 kubenswrapper[8018]: I0217 15:09:52.833355 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h6b7\" (UniqueName: \"kubernetes.io/projected/2a162205-f111-49b4-9f46-0b40b6184336-kube-api-access-5h6b7\") pod \"collect-profiles-29522340-8cp6h\" (UID: \"2a162205-f111-49b4-9f46-0b40b6184336\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" Feb 17 15:09:52.837628 master-0 kubenswrapper[8018]: I0217 15:09:52.837602 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkb9r\" (UniqueName: \"kubernetes.io/projected/d973c9bc-8097-489c-9b8b-70b775177c41-kube-api-access-gkb9r\") pod \"network-check-source-7d8f4c8c66-fc8n7\" (UID: \"d973c9bc-8097-489c-9b8b-70b775177c41\") " 
pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7" Feb 17 15:09:52.979569 master-0 kubenswrapper[8018]: I0217 15:09:52.979407 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:09:53.007907 master-0 kubenswrapper[8018]: I0217 15:09:53.007537 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs" Feb 17 15:09:53.029487 master-0 kubenswrapper[8018]: I0217 15:09:53.029408 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7" Feb 17 15:09:53.041639 master-0 kubenswrapper[8018]: I0217 15:09:53.041524 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" Feb 17 15:09:53.313573 master-0 kubenswrapper[8018]: I0217 15:09:53.313220 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" event={"ID":"ba1306f7-029b-4d43-ba3c-5738da9148d6","Type":"ContainerStarted","Data":"8a7a2501fe95f1ce2d8f8dabeac7a893ee66d822ec7132c4a212e26fd4452db8"} Feb 17 15:09:53.323136 master-0 kubenswrapper[8018]: I0217 15:09:53.323040 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" event={"ID":"ba1306f7-029b-4d43-ba3c-5738da9148d6","Type":"ContainerStarted","Data":"4ca2a1481cf68af809d23ae9ad2e79b63336d3be01516204a6730a744e080f72"} Feb 17 15:09:53.325679 master-0 kubenswrapper[8018]: I0217 15:09:53.325602 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" 
event={"ID":"a2d6e329-7ad8-4fc2-accc-66827f11743d","Type":"ContainerStarted","Data":"63333766efa7717806a0ceafcfe5e910596ee1f9959715b67862349cd0661743"} Feb 17 15:09:53.360896 master-0 kubenswrapper[8018]: I0217 15:09:53.360800 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" podStartSLOduration=2.360779462 podStartE2EDuration="2.360779462s" podCreationTimestamp="2026-02-17 15:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:09:53.353659907 +0000 UTC m=+426.106002987" watchObservedRunningTime="2026-02-17 15:09:53.360779462 +0000 UTC m=+426.113122522" Feb 17 15:09:53.482731 master-0 kubenswrapper[8018]: I0217 15:09:53.481555 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7"] Feb 17 15:09:53.487883 master-0 kubenswrapper[8018]: W0217 15:09:53.486780 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd973c9bc_8097_489c_9b8b_70b775177c41.slice/crio-7cbf31d43472a3a7627226214b8578cd050b8394e6c44d935043c903b69b9fb9 WatchSource:0}: Error finding container 7cbf31d43472a3a7627226214b8578cd050b8394e6c44d935043c903b69b9fb9: Status 404 returned error can't find the container with id 7cbf31d43472a3a7627226214b8578cd050b8394e6c44d935043c903b69b9fb9 Feb 17 15:09:53.502133 master-0 kubenswrapper[8018]: I0217 15:09:53.502092 8018 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 15:09:53.529145 master-0 kubenswrapper[8018]: I0217 15:09:53.528987 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:09:53.529358 master-0 kubenswrapper[8018]: E0217 15:09:53.529231 8018 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 17 15:09:53.529474 master-0 kubenswrapper[8018]: E0217 15:09:53.529339 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls podName:6b7d1adb-b23b-4702-be7d-27e818e8fd63 nodeName:}" failed. No retries permitted until 2026-02-17 15:10:09.529313259 +0000 UTC m=+442.281656399 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-hr9g4" (UID: "6b7d1adb-b23b-4702-be7d-27e818e8fd63") : secret "samples-operator-tls" not found Feb 17 15:09:53.547811 master-0 kubenswrapper[8018]: I0217 15:09:53.545559 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs"] Feb 17 15:09:53.561862 master-0 kubenswrapper[8018]: W0217 15:09:53.561021 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd075439c_721d_432b_b4f9_9f078132bf92.slice/crio-93996d5f48081a9791fdf6e6762201dc4779ca732e535e3274b5773782da8cf9 WatchSource:0}: Error finding container 93996d5f48081a9791fdf6e6762201dc4779ca732e535e3274b5773782da8cf9: Status 404 returned error can't find the container with id 93996d5f48081a9791fdf6e6762201dc4779ca732e535e3274b5773782da8cf9 Feb 17 15:09:53.597877 master-0 kubenswrapper[8018]: I0217 
15:09:53.597816 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h"] Feb 17 15:09:53.610421 master-0 kubenswrapper[8018]: W0217 15:09:53.609682 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a162205_f111_49b4_9f46_0b40b6184336.slice/crio-52fdc1dd27ec41c605dddba64c8150b4679f17e771419dec6733185ac88edf76 WatchSource:0}: Error finding container 52fdc1dd27ec41c605dddba64c8150b4679f17e771419dec6733185ac88edf76: Status 404 returned error can't find the container with id 52fdc1dd27ec41c605dddba64c8150b4679f17e771419dec6733185ac88edf76 Feb 17 15:09:53.791650 master-0 kubenswrapper[8018]: I0217 15:09:53.791600 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/2.log" Feb 17 15:09:53.994392 master-0 kubenswrapper[8018]: I0217 15:09:53.994355 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/3.log" Feb 17 15:09:54.333617 master-0 kubenswrapper[8018]: I0217 15:09:54.333558 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7" event={"ID":"d973c9bc-8097-489c-9b8b-70b775177c41","Type":"ContainerStarted","Data":"0710c687387e272a896614134229aafb264f3716fed5a57a7f52961c9eac3234"} Feb 17 15:09:54.334160 master-0 kubenswrapper[8018]: I0217 15:09:54.333625 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7" event={"ID":"d973c9bc-8097-489c-9b8b-70b775177c41","Type":"ContainerStarted","Data":"7cbf31d43472a3a7627226214b8578cd050b8394e6c44d935043c903b69b9fb9"} Feb 17 15:09:54.335170 
master-0 kubenswrapper[8018]: I0217 15:09:54.335126 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" event={"ID":"2a162205-f111-49b4-9f46-0b40b6184336","Type":"ContainerStarted","Data":"1e7b4529083cffeef5003957eb03a7afcc09cde5e715114a3708977a54e19b17"} Feb 17 15:09:54.335170 master-0 kubenswrapper[8018]: I0217 15:09:54.335168 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" event={"ID":"2a162205-f111-49b4-9f46-0b40b6184336","Type":"ContainerStarted","Data":"52fdc1dd27ec41c605dddba64c8150b4679f17e771419dec6733185ac88edf76"} Feb 17 15:09:54.336444 master-0 kubenswrapper[8018]: I0217 15:09:54.336415 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs" event={"ID":"d075439c-721d-432b-b4f9-9f078132bf92","Type":"ContainerStarted","Data":"93996d5f48081a9791fdf6e6762201dc4779ca732e535e3274b5773782da8cf9"} Feb 17 15:09:54.381049 master-0 kubenswrapper[8018]: I0217 15:09:54.380960 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" podStartSLOduration=549.380939998 podStartE2EDuration="9m9.380939998s" podCreationTimestamp="2026-02-17 15:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:09:54.377137465 +0000 UTC m=+427.129480515" watchObservedRunningTime="2026-02-17 15:09:54.380939998 +0000 UTC m=+427.133283058" Feb 17 15:09:54.382390 master-0 kubenswrapper[8018]: I0217 15:09:54.382338 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7" podStartSLOduration=489.382303281 podStartE2EDuration="8m9.382303281s" podCreationTimestamp="2026-02-17 
15:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:09:54.354658132 +0000 UTC m=+427.107001212" watchObservedRunningTime="2026-02-17 15:09:54.382303281 +0000 UTC m=+427.134646351" Feb 17 15:09:54.392520 master-0 kubenswrapper[8018]: I0217 15:09:54.392438 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-865765995-c58rq_124ba199-b79a-4e5c-8512-cc0ae50f73c8/fix-audit-permissions/0.log" Feb 17 15:09:54.597585 master-0 kubenswrapper[8018]: I0217 15:09:54.597388 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-865765995-c58rq_124ba199-b79a-4e5c-8512-cc0ae50f73c8/oauth-apiserver/0.log" Feb 17 15:09:54.806588 master-0 kubenswrapper[8018]: I0217 15:09:54.791035 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/2.log" Feb 17 15:09:54.990386 master-0 kubenswrapper[8018]: I0217 15:09:54.990341 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/3.log" Feb 17 15:09:55.049960 master-0 kubenswrapper[8018]: I0217 15:09:55.049908 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" Feb 17 15:09:55.050202 master-0 kubenswrapper[8018]: E0217 15:09:55.050156 8018 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Feb 17 15:09:55.050261 master-0 kubenswrapper[8018]: 
E0217 15:09:55.050240 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert podName:c8646e5c-c2ce-48e6-b757-58044769f479 nodeName:}" failed. No retries permitted until 2026-02-17 15:10:11.05021686 +0000 UTC m=+443.802559910 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert") pod "cluster-autoscaler-operator-67fd9768b5-6dzpr" (UID: "c8646e5c-c2ce-48e6-b757-58044769f479") : secret "cluster-autoscaler-operator-cert" not found Feb 17 15:09:55.189201 master-0 kubenswrapper[8018]: I0217 15:09:55.189170 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/setup/0.log" Feb 17 15:09:55.305582 master-0 kubenswrapper[8018]: I0217 15:09:55.305538 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx"] Feb 17 15:09:55.306014 master-0 kubenswrapper[8018]: E0217 15:09:55.305988 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-approver-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx" podUID="f0c5ca70-1706-4858-adcb-b421ba1e422b" Feb 17 15:09:55.343291 master-0 kubenswrapper[8018]: I0217 15:09:55.343249 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs" event={"ID":"d075439c-721d-432b-b4f9-9f078132bf92","Type":"ContainerStarted","Data":"0566d7b72d85f502ce5f98690c24d0018847f7a1112daac2a8e461667ff4a653"} Feb 17 15:09:55.343734 master-0 kubenswrapper[8018]: I0217 15:09:55.343306 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx" Feb 17 15:09:55.345729 master-0 kubenswrapper[8018]: I0217 15:09:55.345704 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs" Feb 17 15:09:55.351447 master-0 kubenswrapper[8018]: I0217 15:09:55.351410 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs" Feb 17 15:09:55.351697 master-0 kubenswrapper[8018]: I0217 15:09:55.351671 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx" Feb 17 15:09:55.381010 master-0 kubenswrapper[8018]: I0217 15:09:55.380937 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs" podStartSLOduration=376.806767132 podStartE2EDuration="6m18.380912969s" podCreationTimestamp="2026-02-17 15:03:37 +0000 UTC" firstStartedPulling="2026-02-17 15:09:53.564240167 +0000 UTC m=+426.316583257" lastFinishedPulling="2026-02-17 15:09:55.138386044 +0000 UTC m=+427.890729094" observedRunningTime="2026-02-17 15:09:55.360892277 +0000 UTC m=+428.113235337" watchObservedRunningTime="2026-02-17 15:09:55.380912969 +0000 UTC m=+428.133256029" Feb 17 15:09:55.395168 master-0 kubenswrapper[8018]: I0217 15:09:55.395127 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-ensure-env-vars/0.log" Feb 17 15:09:55.455755 master-0 kubenswrapper[8018]: I0217 15:09:55.455710 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hk8s\" (UniqueName: \"kubernetes.io/projected/f0c5ca70-1706-4858-adcb-b421ba1e422b-kube-api-access-9hk8s\") pod 
\"f0c5ca70-1706-4858-adcb-b421ba1e422b\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " Feb 17 15:09:55.456483 master-0 kubenswrapper[8018]: I0217 15:09:55.456233 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0c5ca70-1706-4858-adcb-b421ba1e422b-config" (OuterVolumeSpecName: "config") pod "f0c5ca70-1706-4858-adcb-b421ba1e422b" (UID: "f0c5ca70-1706-4858-adcb-b421ba1e422b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:09:55.456483 master-0 kubenswrapper[8018]: I0217 15:09:55.455769 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c5ca70-1706-4858-adcb-b421ba1e422b-config\") pod \"f0c5ca70-1706-4858-adcb-b421ba1e422b\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " Feb 17 15:09:55.456483 master-0 kubenswrapper[8018]: I0217 15:09:55.456439 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0c5ca70-1706-4858-adcb-b421ba1e422b-auth-proxy-config\") pod \"f0c5ca70-1706-4858-adcb-b421ba1e422b\" (UID: \"f0c5ca70-1706-4858-adcb-b421ba1e422b\") " Feb 17 15:09:55.456824 master-0 kubenswrapper[8018]: I0217 15:09:55.456732 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0c5ca70-1706-4858-adcb-b421ba1e422b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "f0c5ca70-1706-4858-adcb-b421ba1e422b" (UID: "f0c5ca70-1706-4858-adcb-b421ba1e422b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:09:55.457101 master-0 kubenswrapper[8018]: I0217 15:09:55.457077 8018 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c5ca70-1706-4858-adcb-b421ba1e422b-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:09:55.457151 master-0 kubenswrapper[8018]: I0217 15:09:55.457114 8018 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0c5ca70-1706-4858-adcb-b421ba1e422b-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:09:55.473400 master-0 kubenswrapper[8018]: I0217 15:09:55.473355 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c5ca70-1706-4858-adcb-b421ba1e422b-kube-api-access-9hk8s" (OuterVolumeSpecName: "kube-api-access-9hk8s") pod "f0c5ca70-1706-4858-adcb-b421ba1e422b" (UID: "f0c5ca70-1706-4858-adcb-b421ba1e422b"). InnerVolumeSpecName "kube-api-access-9hk8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:09:55.558491 master-0 kubenswrapper[8018]: I0217 15:09:55.558242 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hk8s\" (UniqueName: \"kubernetes.io/projected/f0c5ca70-1706-4858-adcb-b421ba1e422b-kube-api-access-9hk8s\") on node \"master-0\" DevicePath \"\"" Feb 17 15:09:55.589411 master-0 kubenswrapper[8018]: I0217 15:09:55.589344 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-resources-copy/0.log" Feb 17 15:09:55.789478 master-0 kubenswrapper[8018]: I0217 15:09:55.789407 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcdctl/0.log" Feb 17 15:09:55.886761 master-0 kubenswrapper[8018]: I0217 15:09:55.886707 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-7485d645b8-nzz2j"] Feb 17 15:09:55.887702 master-0 kubenswrapper[8018]: I0217 15:09:55.887673 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:55.893441 master-0 kubenswrapper[8018]: I0217 15:09:55.893381 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 17 15:09:55.893830 master-0 kubenswrapper[8018]: I0217 15:09:55.893793 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 17 15:09:55.893879 master-0 kubenswrapper[8018]: I0217 15:09:55.893825 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-8gftr" Feb 17 15:09:55.893879 master-0 kubenswrapper[8018]: I0217 15:09:55.893857 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 17 15:09:55.910432 master-0 kubenswrapper[8018]: I0217 15:09:55.910382 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-7485d645b8-nzz2j"] Feb 17 15:09:55.927787 master-0 kubenswrapper[8018]: I0217 15:09:55.927702 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-l576h"] Feb 17 15:09:55.928579 master-0 kubenswrapper[8018]: I0217 15:09:55.928506 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:09:55.930859 master-0 kubenswrapper[8018]: I0217 15:09:55.930764 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-r65rc" Feb 17 15:09:55.931013 master-0 kubenswrapper[8018]: I0217 15:09:55.930953 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 15:09:55.931092 master-0 kubenswrapper[8018]: I0217 15:09:55.931045 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 15:09:55.964618 master-0 kubenswrapper[8018]: I0217 15:09:55.964030 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cx29\" (UniqueName: \"kubernetes.io/projected/784b804f-6bcf-4cbd-a19e-9b1fa244354e-kube-api-access-8cx29\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:55.964618 master-0 kubenswrapper[8018]: I0217 15:09:55.964114 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:55.964618 master-0 kubenswrapper[8018]: I0217 15:09:55.964142 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk6jm\" (UniqueName: \"kubernetes.io/projected/9768ef3d-4f12-4303-98cb-56f8ebe05039-kube-api-access-tk6jm\") pod \"machine-config-server-l576h\" (UID: 
\"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:09:55.964618 master-0 kubenswrapper[8018]: I0217 15:09:55.964195 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-certs\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:09:55.964618 master-0 kubenswrapper[8018]: I0217 15:09:55.964243 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-node-bootstrap-token\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:09:55.964618 master-0 kubenswrapper[8018]: I0217 15:09:55.964270 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/784b804f-6bcf-4cbd-a19e-9b1fa244354e-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:55.964618 master-0 kubenswrapper[8018]: I0217 15:09:55.964310 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:55.994637 master-0 kubenswrapper[8018]: I0217 
15:09:55.994380 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd/0.log" Feb 17 15:09:56.067715 master-0 kubenswrapper[8018]: I0217 15:09:56.067644 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-node-bootstrap-token\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:09:56.067895 master-0 kubenswrapper[8018]: I0217 15:09:56.067748 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/784b804f-6bcf-4cbd-a19e-9b1fa244354e-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:56.067895 master-0 kubenswrapper[8018]: I0217 15:09:56.067831 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:56.068095 master-0 kubenswrapper[8018]: I0217 15:09:56.068063 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cx29\" (UniqueName: \"kubernetes.io/projected/784b804f-6bcf-4cbd-a19e-9b1fa244354e-kube-api-access-8cx29\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:56.068661 master-0 kubenswrapper[8018]: 
E0217 15:09:56.068325 8018 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 17 15:09:56.068661 master-0 kubenswrapper[8018]: E0217 15:09:56.068419 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls podName:784b804f-6bcf-4cbd-a19e-9b1fa244354e nodeName:}" failed. No retries permitted until 2026-02-17 15:09:56.568396207 +0000 UTC m=+429.320739267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-nzz2j" (UID: "784b804f-6bcf-4cbd-a19e-9b1fa244354e") : secret "prometheus-operator-tls" not found Feb 17 15:09:56.069003 master-0 kubenswrapper[8018]: I0217 15:09:56.068147 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:56.069057 master-0 kubenswrapper[8018]: I0217 15:09:56.069018 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk6jm\" (UniqueName: \"kubernetes.io/projected/9768ef3d-4f12-4303-98cb-56f8ebe05039-kube-api-access-tk6jm\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:09:56.069156 master-0 kubenswrapper[8018]: I0217 15:09:56.069105 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-certs\") pod 
\"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:09:56.070027 master-0 kubenswrapper[8018]: I0217 15:09:56.069977 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/784b804f-6bcf-4cbd-a19e-9b1fa244354e-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:56.072429 master-0 kubenswrapper[8018]: I0217 15:09:56.072357 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-node-bootstrap-token\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:09:56.073618 master-0 kubenswrapper[8018]: I0217 15:09:56.073563 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:56.076171 master-0 kubenswrapper[8018]: I0217 15:09:56.076123 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-certs\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:09:56.084635 master-0 kubenswrapper[8018]: I0217 15:09:56.084584 8018 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8cx29\" (UniqueName: \"kubernetes.io/projected/784b804f-6bcf-4cbd-a19e-9b1fa244354e-kube-api-access-8cx29\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:56.092980 master-0 kubenswrapper[8018]: I0217 15:09:56.092926 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk6jm\" (UniqueName: \"kubernetes.io/projected/9768ef3d-4f12-4303-98cb-56f8ebe05039-kube-api-access-tk6jm\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:09:56.195681 master-0 kubenswrapper[8018]: I0217 15:09:56.195521 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-metrics/0.log" Feb 17 15:09:56.255494 master-0 kubenswrapper[8018]: I0217 15:09:56.255409 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:09:56.276916 master-0 kubenswrapper[8018]: W0217 15:09:56.276839 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9768ef3d_4f12_4303_98cb_56f8ebe05039.slice/crio-9ed78a9839985d5d2408f3da695d76e5290df2767573b14d7ae5d1aa3204d65a WatchSource:0}: Error finding container 9ed78a9839985d5d2408f3da695d76e5290df2767573b14d7ae5d1aa3204d65a: Status 404 returned error can't find the container with id 9ed78a9839985d5d2408f3da695d76e5290df2767573b14d7ae5d1aa3204d65a Feb 17 15:09:56.358581 master-0 kubenswrapper[8018]: I0217 15:09:56.358477 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-l576h" event={"ID":"9768ef3d-4f12-4303-98cb-56f8ebe05039","Type":"ContainerStarted","Data":"9ed78a9839985d5d2408f3da695d76e5290df2767573b14d7ae5d1aa3204d65a"} Feb 17 15:09:56.360429 master-0 kubenswrapper[8018]: I0217 15:09:56.360368 8018 generic.go:334] "Generic (PLEG): container finished" podID="2a162205-f111-49b4-9f46-0b40b6184336" containerID="1e7b4529083cffeef5003957eb03a7afcc09cde5e715114a3708977a54e19b17" exitCode=0 Feb 17 15:09:56.360591 master-0 kubenswrapper[8018]: I0217 15:09:56.360502 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" event={"ID":"2a162205-f111-49b4-9f46-0b40b6184336","Type":"ContainerDied","Data":"1e7b4529083cffeef5003957eb03a7afcc09cde5e715114a3708977a54e19b17"} Feb 17 15:09:56.360639 master-0 kubenswrapper[8018]: I0217 15:09:56.360604 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx" Feb 17 15:09:56.394551 master-0 kubenswrapper[8018]: I0217 15:09:56.394500 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-readyz/0.log" Feb 17 15:09:56.424059 master-0 kubenswrapper[8018]: I0217 15:09:56.424001 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx"] Feb 17 15:09:56.427150 master-0 kubenswrapper[8018]: I0217 15:09:56.427102 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx"] Feb 17 15:09:56.448552 master-0 kubenswrapper[8018]: I0217 15:09:56.448413 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s"] Feb 17 15:09:56.449377 master-0 kubenswrapper[8018]: I0217 15:09:56.449347 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:09:56.450746 master-0 kubenswrapper[8018]: I0217 15:09:56.450689 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 15:09:56.451034 master-0 kubenswrapper[8018]: I0217 15:09:56.451008 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 15:09:56.451930 master-0 kubenswrapper[8018]: I0217 15:09:56.451869 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 15:09:56.451970 master-0 kubenswrapper[8018]: I0217 15:09:56.451910 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 15:09:56.452038 master-0 kubenswrapper[8018]: I0217 15:09:56.451995 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 15:09:56.452080 master-0 kubenswrapper[8018]: I0217 15:09:56.451994 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-kjdkm" Feb 17 15:09:56.475900 master-0 kubenswrapper[8018]: I0217 15:09:56.475854 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-auth-proxy-config\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:09:56.476056 master-0 kubenswrapper[8018]: I0217 15:09:56.475902 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-config\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:09:56.476056 master-0 kubenswrapper[8018]: I0217 15:09:56.475938 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:09:56.476056 master-0 kubenswrapper[8018]: I0217 15:09:56.476031 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhm88\" (UniqueName: \"kubernetes.io/projected/76d3da23-3347-4a5c-b328-d92671897ecc-kube-api-access-jhm88\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:09:56.476180 master-0 kubenswrapper[8018]: I0217 15:09:56.476141 8018 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f0c5ca70-1706-4858-adcb-b421ba1e422b-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Feb 17 15:09:56.577845 master-0 kubenswrapper[8018]: I0217 15:09:56.577744 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:56.578152 master-0 kubenswrapper[8018]: I0217 15:09:56.577891 8018 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-auth-proxy-config\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:09:56.578152 master-0 kubenswrapper[8018]: E0217 15:09:56.578009 8018 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 17 15:09:56.578152 master-0 kubenswrapper[8018]: E0217 15:09:56.578152 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls podName:784b804f-6bcf-4cbd-a19e-9b1fa244354e nodeName:}" failed. No retries permitted until 2026-02-17 15:09:57.578112742 +0000 UTC m=+430.330455832 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-nzz2j" (UID: "784b804f-6bcf-4cbd-a19e-9b1fa244354e") : secret "prometheus-operator-tls" not found Feb 17 15:09:56.578443 master-0 kubenswrapper[8018]: I0217 15:09:56.578029 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-config\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:09:56.578443 master-0 kubenswrapper[8018]: I0217 15:09:56.578313 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod 
\"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:09:56.578735 master-0 kubenswrapper[8018]: E0217 15:09:56.578552 8018 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 17 15:09:56.578735 master-0 kubenswrapper[8018]: E0217 15:09:56.578655 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls podName:76d3da23-3347-4a5c-b328-d92671897ecc nodeName:}" failed. No retries permitted until 2026-02-17 15:09:57.078629395 +0000 UTC m=+429.830972535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls") pod "machine-approver-8569dd85ff-f9g8s" (UID: "76d3da23-3347-4a5c-b328-d92671897ecc") : secret "machine-approver-tls" not found Feb 17 15:09:56.578735 master-0 kubenswrapper[8018]: I0217 15:09:56.578698 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhm88\" (UniqueName: \"kubernetes.io/projected/76d3da23-3347-4a5c-b328-d92671897ecc-kube-api-access-jhm88\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:09:56.579660 master-0 kubenswrapper[8018]: I0217 15:09:56.579608 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-auth-proxy-config\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:09:56.580017 master-0 kubenswrapper[8018]: 
I0217 15:09:56.579956 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-config\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:09:56.593210 master-0 kubenswrapper[8018]: I0217 15:09:56.593160 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-rev/0.log" Feb 17 15:09:56.606751 master-0 kubenswrapper[8018]: I0217 15:09:56.606697 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhm88\" (UniqueName: \"kubernetes.io/projected/76d3da23-3347-4a5c-b328-d92671897ecc-kube-api-access-jhm88\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:09:56.791823 master-0 kubenswrapper[8018]: I0217 15:09:56.791704 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_5de71cc1-08c3-4295-ac86-745c9d4fbb46/installer/0.log" Feb 17 15:09:56.990110 master-0 kubenswrapper[8018]: I0217 15:09:56.990046 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/2.log" Feb 17 15:09:57.091274 master-0 kubenswrapper[8018]: I0217 15:09:57.091110 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:09:57.091444 
master-0 kubenswrapper[8018]: E0217 15:09:57.091367 8018 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 17 15:09:57.091598 master-0 kubenswrapper[8018]: E0217 15:09:57.091510 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls podName:76d3da23-3347-4a5c-b328-d92671897ecc nodeName:}" failed. No retries permitted until 2026-02-17 15:09:58.091482726 +0000 UTC m=+430.843825786 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls") pod "machine-approver-8569dd85ff-f9g8s" (UID: "76d3da23-3347-4a5c-b328-d92671897ecc") : secret "machine-approver-tls" not found Feb 17 15:09:57.192032 master-0 kubenswrapper[8018]: I0217 15:09:57.191949 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/3.log" Feb 17 15:09:57.373618 master-0 kubenswrapper[8018]: I0217 15:09:57.373528 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-l576h" event={"ID":"9768ef3d-4f12-4303-98cb-56f8ebe05039","Type":"ContainerStarted","Data":"ae371b281507a41eee4076473ecc36b06d083171ed341725f839c18360ace3a6"} Feb 17 15:09:57.390733 master-0 kubenswrapper[8018]: I0217 15:09:57.390667 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5d1e91e5a1fed5cf7076a92d2830d36f/setup/0.log" Feb 17 15:09:57.405221 master-0 kubenswrapper[8018]: I0217 15:09:57.405092 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-l576h" podStartSLOduration=2.405057045 
podStartE2EDuration="2.405057045s" podCreationTimestamp="2026-02-17 15:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:09:57.402624455 +0000 UTC m=+430.154967565" watchObservedRunningTime="2026-02-17 15:09:57.405057045 +0000 UTC m=+430.157400135" Feb 17 15:09:57.455566 master-0 kubenswrapper[8018]: I0217 15:09:57.455448 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0c5ca70-1706-4858-adcb-b421ba1e422b" path="/var/lib/kubelet/pods/f0c5ca70-1706-4858-adcb-b421ba1e422b/volumes" Feb 17 15:09:57.599090 master-0 kubenswrapper[8018]: I0217 15:09:57.599005 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:57.599571 master-0 kubenswrapper[8018]: E0217 15:09:57.599430 8018 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 17 15:09:57.599571 master-0 kubenswrapper[8018]: E0217 15:09:57.599530 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls podName:784b804f-6bcf-4cbd-a19e-9b1fa244354e nodeName:}" failed. No retries permitted until 2026-02-17 15:09:59.599512599 +0000 UTC m=+432.351855739 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-nzz2j" (UID: "784b804f-6bcf-4cbd-a19e-9b1fa244354e") : secret "prometheus-operator-tls" not found Feb 17 15:09:57.599571 master-0 kubenswrapper[8018]: I0217 15:09:57.599544 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5d1e91e5a1fed5cf7076a92d2830d36f/kube-apiserver/0.log" Feb 17 15:09:57.729060 master-0 kubenswrapper[8018]: I0217 15:09:57.728959 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" Feb 17 15:09:57.790030 master-0 kubenswrapper[8018]: I0217 15:09:57.789967 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5d1e91e5a1fed5cf7076a92d2830d36f/kube-apiserver-insecure-readyz/0.log" Feb 17 15:09:57.903101 master-0 kubenswrapper[8018]: I0217 15:09:57.903012 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h6b7\" (UniqueName: \"kubernetes.io/projected/2a162205-f111-49b4-9f46-0b40b6184336-kube-api-access-5h6b7\") pod \"2a162205-f111-49b4-9f46-0b40b6184336\" (UID: \"2a162205-f111-49b4-9f46-0b40b6184336\") " Feb 17 15:09:57.903372 master-0 kubenswrapper[8018]: I0217 15:09:57.903203 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a162205-f111-49b4-9f46-0b40b6184336-config-volume\") pod \"2a162205-f111-49b4-9f46-0b40b6184336\" (UID: \"2a162205-f111-49b4-9f46-0b40b6184336\") " Feb 17 15:09:57.903448 master-0 kubenswrapper[8018]: I0217 15:09:57.903404 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2a162205-f111-49b4-9f46-0b40b6184336-secret-volume\") pod \"2a162205-f111-49b4-9f46-0b40b6184336\" (UID: \"2a162205-f111-49b4-9f46-0b40b6184336\") " Feb 17 15:09:57.903934 master-0 kubenswrapper[8018]: I0217 15:09:57.903888 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:09:57.907503 master-0 kubenswrapper[8018]: E0217 15:09:57.904047 8018 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 17 15:09:57.907503 master-0 kubenswrapper[8018]: I0217 15:09:57.904099 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a162205-f111-49b4-9f46-0b40b6184336-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a162205-f111-49b4-9f46-0b40b6184336" (UID: "2a162205-f111-49b4-9f46-0b40b6184336"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:09:57.907503 master-0 kubenswrapper[8018]: E0217 15:09:57.904142 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls podName:655e4000-0ad4-4349-8c31-e0c952e4be30 nodeName:}" failed. No retries permitted until 2026-02-17 15:10:13.904124749 +0000 UTC m=+446.656467799 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-g6fgz" (UID: "655e4000-0ad4-4349-8c31-e0c952e4be30") : secret "machine-api-operator-tls" not found Feb 17 15:09:57.907503 master-0 kubenswrapper[8018]: I0217 15:09:57.906774 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a162205-f111-49b4-9f46-0b40b6184336-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a162205-f111-49b4-9f46-0b40b6184336" (UID: "2a162205-f111-49b4-9f46-0b40b6184336"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:09:57.909535 master-0 kubenswrapper[8018]: I0217 15:09:57.908869 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a162205-f111-49b4-9f46-0b40b6184336-kube-api-access-5h6b7" (OuterVolumeSpecName: "kube-api-access-5h6b7") pod "2a162205-f111-49b4-9f46-0b40b6184336" (UID: "2a162205-f111-49b4-9f46-0b40b6184336"). InnerVolumeSpecName "kube-api-access-5h6b7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:09:57.994249 master-0 kubenswrapper[8018]: I0217 15:09:57.994192 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_580b240a-a806-454d-ab19-8f193a8d9ca2/installer/0.log" Feb 17 15:09:58.005338 master-0 kubenswrapper[8018]: I0217 15:09:58.005285 8018 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a162205-f111-49b4-9f46-0b40b6184336-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 17 15:09:58.005338 master-0 kubenswrapper[8018]: I0217 15:09:58.005325 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h6b7\" (UniqueName: \"kubernetes.io/projected/2a162205-f111-49b4-9f46-0b40b6184336-kube-api-access-5h6b7\") on node \"master-0\" DevicePath \"\"" Feb 17 15:09:58.005338 master-0 kubenswrapper[8018]: I0217 15:09:58.005338 8018 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a162205-f111-49b4-9f46-0b40b6184336-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 17 15:09:58.107257 master-0 kubenswrapper[8018]: I0217 15:09:58.107153 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:09:58.107561 master-0 kubenswrapper[8018]: E0217 15:09:58.107389 8018 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 17 15:09:58.107561 master-0 kubenswrapper[8018]: E0217 15:09:58.107508 8018 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls podName:76d3da23-3347-4a5c-b328-d92671897ecc nodeName:}" failed. No retries permitted until 2026-02-17 15:10:00.107484391 +0000 UTC m=+432.859827451 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls") pod "machine-approver-8569dd85ff-f9g8s" (UID: "76d3da23-3347-4a5c-b328-d92671897ecc") : secret "machine-approver-tls" not found Feb 17 15:09:58.197954 master-0 kubenswrapper[8018]: I0217 15:09:58.197791 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_d5655115-c223-42ed-a93d-9d609e55c901/installer/0.log" Feb 17 15:09:58.395444 master-0 kubenswrapper[8018]: I0217 15:09:58.395369 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" Feb 17 15:09:58.395444 master-0 kubenswrapper[8018]: I0217 15:09:58.395437 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" event={"ID":"2a162205-f111-49b4-9f46-0b40b6184336","Type":"ContainerDied","Data":"52fdc1dd27ec41c605dddba64c8150b4679f17e771419dec6733185ac88edf76"} Feb 17 15:09:58.395444 master-0 kubenswrapper[8018]: I0217 15:09:58.395491 8018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52fdc1dd27ec41c605dddba64c8150b4679f17e771419dec6733185ac88edf76" Feb 17 15:09:58.400035 master-0 kubenswrapper[8018]: I0217 15:09:58.399969 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log" Feb 17 15:09:58.597533 master-0 kubenswrapper[8018]: I0217 15:09:58.597309 8018 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/0.log" Feb 17 15:09:58.797702 master-0 kubenswrapper[8018]: I0217 15:09:58.797619 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/0.log" Feb 17 15:09:58.993353 master-0 kubenswrapper[8018]: I0217 15:09:58.993205 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-recovery-controller/0.log" Feb 17 15:09:59.191386 master-0 kubenswrapper[8018]: I0217 15:09:59.191296 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/2.log" Feb 17 15:09:59.393355 master-0 kubenswrapper[8018]: I0217 15:09:59.393271 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/3.log" Feb 17 15:09:59.597859 master-0 kubenswrapper[8018]: I0217 15:09:59.597818 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_9460ca0802075a8a6a10d7b3e6052c4d/kube-scheduler/0.log" Feb 17 15:09:59.631861 master-0 kubenswrapper[8018]: I0217 15:09:59.631777 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " 
pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:09:59.632158 master-0 kubenswrapper[8018]: E0217 15:09:59.631967 8018 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 17 15:09:59.632158 master-0 kubenswrapper[8018]: E0217 15:09:59.632041 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls podName:784b804f-6bcf-4cbd-a19e-9b1fa244354e nodeName:}" failed. No retries permitted until 2026-02-17 15:10:03.632022641 +0000 UTC m=+436.384365701 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-nzz2j" (UID: "784b804f-6bcf-4cbd-a19e-9b1fa244354e") : secret "prometheus-operator-tls" not found Feb 17 15:09:59.794968 master-0 kubenswrapper[8018]: I0217 15:09:59.794431 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_9460ca0802075a8a6a10d7b3e6052c4d/kube-scheduler/1.log" Feb 17 15:09:59.993331 master-0 kubenswrapper[8018]: I0217 15:09:59.993261 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_03da22e3-956d-4c8a-bfd6-c1778e5d627c/installer/0.log" Feb 17 15:10:00.141191 master-0 kubenswrapper[8018]: I0217 15:10:00.141106 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:10:00.141541 master-0 kubenswrapper[8018]: E0217 15:10:00.141344 8018 secret.go:189] Couldn't get 
secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 17 15:10:00.141541 master-0 kubenswrapper[8018]: E0217 15:10:00.141520 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls podName:76d3da23-3347-4a5c-b328-d92671897ecc nodeName:}" failed. No retries permitted until 2026-02-17 15:10:04.141445788 +0000 UTC m=+436.893788868 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls") pod "machine-approver-8569dd85ff-f9g8s" (UID: "76d3da23-3347-4a5c-b328-d92671897ecc") : secret "machine-approver-tls" not found Feb 17 15:10:00.201748 master-0 kubenswrapper[8018]: I0217 15:10:00.201510 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-wcpf8_2b167b7b-2280-4c82-ac78-71c57aebe503/kube-scheduler-operator-container/1.log" Feb 17 15:10:01.758407 master-0 kubenswrapper[8018]: I0217 15:10:01.758295 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-wcpf8_2b167b7b-2280-4c82-ac78-71c57aebe503/kube-scheduler-operator-container/2.log" Feb 17 15:10:01.910911 master-0 kubenswrapper[8018]: I0217 15:10:01.910834 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/1.log" Feb 17 15:10:01.919295 master-0 kubenswrapper[8018]: I0217 15:10:01.919254 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/2.log" Feb 17 15:10:01.932891 master-0 
kubenswrapper[8018]: I0217 15:10:01.932857 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6bd884947c-tdlbn_1d481a79-f565-4c7f-84cc-207fc3117c23/fix-audit-permissions/0.log" Feb 17 15:10:01.948639 master-0 kubenswrapper[8018]: I0217 15:10:01.948554 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd"] Feb 17 15:10:01.948946 master-0 kubenswrapper[8018]: I0217 15:10:01.948906 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerName="cluster-cloud-controller-manager" containerID="cri-o://be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082" gracePeriod=30 Feb 17 15:10:01.949002 master-0 kubenswrapper[8018]: I0217 15:10:01.948967 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerName="config-sync-controllers" containerID="cri-o://eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195" gracePeriod=30 Feb 17 15:10:01.952252 master-0 kubenswrapper[8018]: I0217 15:10:01.952183 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6bd884947c-tdlbn_1d481a79-f565-4c7f-84cc-207fc3117c23/openshift-apiserver/0.log" Feb 17 15:10:01.978323 master-0 kubenswrapper[8018]: I0217 15:10:01.978002 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6bd884947c-tdlbn_1d481a79-f565-4c7f-84cc-207fc3117c23/openshift-apiserver-check-endpoints/0.log" Feb 17 15:10:01.986576 master-0 kubenswrapper[8018]: I0217 15:10:01.986540 8018 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/2.log" Feb 17 15:10:01.999777 master-0 kubenswrapper[8018]: I0217 15:10:01.999734 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/3.log" Feb 17 15:10:02.013107 master-0 kubenswrapper[8018]: I0217 15:10:02.012992 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/2.log" Feb 17 15:10:02.123529 master-0 kubenswrapper[8018]: I0217 15:10:02.123445 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd_317bc9db-ab82-4df1-81da-1a091f88acb1/kube-rbac-proxy/1.log" Feb 17 15:10:02.134603 master-0 kubenswrapper[8018]: I0217 15:10:02.124901 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" Feb 17 15:10:02.191490 master-0 kubenswrapper[8018]: I0217 15:10:02.191334 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/3.log" Feb 17 15:10:02.274320 master-0 kubenswrapper[8018]: I0217 15:10:02.274098 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/317bc9db-ab82-4df1-81da-1a091f88acb1-auth-proxy-config\") pod \"317bc9db-ab82-4df1-81da-1a091f88acb1\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " Feb 17 15:10:02.274644 master-0 kubenswrapper[8018]: I0217 15:10:02.274512 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/317bc9db-ab82-4df1-81da-1a091f88acb1-images\") pod \"317bc9db-ab82-4df1-81da-1a091f88acb1\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") " Feb 17 15:10:02.275152 master-0 kubenswrapper[8018]: I0217 15:10:02.275081 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/317bc9db-ab82-4df1-81da-1a091f88acb1-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "317bc9db-ab82-4df1-81da-1a091f88acb1" (UID: "317bc9db-ab82-4df1-81da-1a091f88acb1"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:10:02.275238 master-0 kubenswrapper[8018]: I0217 15:10:02.275194 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/317bc9db-ab82-4df1-81da-1a091f88acb1-images" (OuterVolumeSpecName: "images") pod "317bc9db-ab82-4df1-81da-1a091f88acb1" (UID: "317bc9db-ab82-4df1-81da-1a091f88acb1"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:10:02.276112 master-0 kubenswrapper[8018]: I0217 15:10:02.276063 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/317bc9db-ab82-4df1-81da-1a091f88acb1-cloud-controller-manager-operator-tls\") pod \"317bc9db-ab82-4df1-81da-1a091f88acb1\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") "
Feb 17 15:10:02.276209 master-0 kubenswrapper[8018]: I0217 15:10:02.276137 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgqwz\" (UniqueName: \"kubernetes.io/projected/317bc9db-ab82-4df1-81da-1a091f88acb1-kube-api-access-vgqwz\") pod \"317bc9db-ab82-4df1-81da-1a091f88acb1\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") "
Feb 17 15:10:02.276209 master-0 kubenswrapper[8018]: I0217 15:10:02.276189 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/317bc9db-ab82-4df1-81da-1a091f88acb1-host-etc-kube\") pod \"317bc9db-ab82-4df1-81da-1a091f88acb1\" (UID: \"317bc9db-ab82-4df1-81da-1a091f88acb1\") "
Feb 17 15:10:02.276447 master-0 kubenswrapper[8018]: I0217 15:10:02.276381 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/317bc9db-ab82-4df1-81da-1a091f88acb1-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "317bc9db-ab82-4df1-81da-1a091f88acb1" (UID: "317bc9db-ab82-4df1-81da-1a091f88acb1"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:10:02.276870 master-0 kubenswrapper[8018]: I0217 15:10:02.276815 8018 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/317bc9db-ab82-4df1-81da-1a091f88acb1-images\") on node \"master-0\" DevicePath \"\""
Feb 17 15:10:02.276870 master-0 kubenswrapper[8018]: I0217 15:10:02.276864 8018 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/317bc9db-ab82-4df1-81da-1a091f88acb1-host-etc-kube\") on node \"master-0\" DevicePath \"\""
Feb 17 15:10:02.277020 master-0 kubenswrapper[8018]: I0217 15:10:02.276885 8018 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/317bc9db-ab82-4df1-81da-1a091f88acb1-auth-proxy-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:10:02.280248 master-0 kubenswrapper[8018]: I0217 15:10:02.280170 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/317bc9db-ab82-4df1-81da-1a091f88acb1-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "317bc9db-ab82-4df1-81da-1a091f88acb1" (UID: "317bc9db-ab82-4df1-81da-1a091f88acb1"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:10:02.280382 master-0 kubenswrapper[8018]: I0217 15:10:02.280325 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/317bc9db-ab82-4df1-81da-1a091f88acb1-kube-api-access-vgqwz" (OuterVolumeSpecName: "kube-api-access-vgqwz") pod "317bc9db-ab82-4df1-81da-1a091f88acb1" (UID: "317bc9db-ab82-4df1-81da-1a091f88acb1"). InnerVolumeSpecName "kube-api-access-vgqwz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:10:02.378335 master-0 kubenswrapper[8018]: I0217 15:10:02.378257 8018 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/317bc9db-ab82-4df1-81da-1a091f88acb1-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\""
Feb 17 15:10:02.378335 master-0 kubenswrapper[8018]: I0217 15:10:02.378307 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgqwz\" (UniqueName: \"kubernetes.io/projected/317bc9db-ab82-4df1-81da-1a091f88acb1-kube-api-access-vgqwz\") on node \"master-0\" DevicePath \"\""
Feb 17 15:10:02.398569 master-0 kubenswrapper[8018]: I0217 15:10:02.398517 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-b9c8fdfbc-rh9v2_e6d0ea7a-6784-4c13-ad65-6c947dbcf136/controller-manager/0.log"
Feb 17 15:10:02.436170 master-0 kubenswrapper[8018]: I0217 15:10:02.436089 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd_317bc9db-ab82-4df1-81da-1a091f88acb1/kube-rbac-proxy/1.log"
Feb 17 15:10:02.437788 master-0 kubenswrapper[8018]: I0217 15:10:02.437736 8018 generic.go:334] "Generic (PLEG): container finished" podID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerID="eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195" exitCode=0
Feb 17 15:10:02.437788 master-0 kubenswrapper[8018]: I0217 15:10:02.437773 8018 generic.go:334] "Generic (PLEG): container finished" podID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerID="be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082" exitCode=0
Feb 17 15:10:02.437979 master-0 kubenswrapper[8018]: I0217 15:10:02.437883 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd"
Feb 17 15:10:02.438047 master-0 kubenswrapper[8018]: I0217 15:10:02.437839 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" event={"ID":"317bc9db-ab82-4df1-81da-1a091f88acb1","Type":"ContainerDied","Data":"eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195"}
Feb 17 15:10:02.438174 master-0 kubenswrapper[8018]: I0217 15:10:02.438137 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" event={"ID":"317bc9db-ab82-4df1-81da-1a091f88acb1","Type":"ContainerDied","Data":"be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082"}
Feb 17 15:10:02.438252 master-0 kubenswrapper[8018]: I0217 15:10:02.438182 8018 scope.go:117] "RemoveContainer" containerID="0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921"
Feb 17 15:10:02.438252 master-0 kubenswrapper[8018]: I0217 15:10:02.438221 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd" event={"ID":"317bc9db-ab82-4df1-81da-1a091f88acb1","Type":"ContainerDied","Data":"39a89d3fb0a0f6c3684ee19a728b5493a8d5502ad64785157e799cf5d06dece2"}
Feb 17 15:10:02.465166 master-0 kubenswrapper[8018]: I0217 15:10:02.465119 8018 scope.go:117] "RemoveContainer" containerID="eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195"
Feb 17 15:10:02.496554 master-0 kubenswrapper[8018]: I0217 15:10:02.496457 8018 scope.go:117] "RemoveContainer" containerID="be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082"
Feb 17 15:10:02.511439 master-0 kubenswrapper[8018]: I0217 15:10:02.511390 8018 scope.go:117] "RemoveContainer" containerID="0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921"
Feb 17 15:10:02.511895 master-0 kubenswrapper[8018]: E0217 15:10:02.511829 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921\": container with ID starting with 0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921 not found: ID does not exist" containerID="0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921"
Feb 17 15:10:02.511964 master-0 kubenswrapper[8018]: I0217 15:10:02.511894 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921"} err="failed to get container status \"0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921\": rpc error: code = NotFound desc = could not find container \"0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921\": container with ID starting with 0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921 not found: ID does not exist"
Feb 17 15:10:02.511964 master-0 kubenswrapper[8018]: I0217 15:10:02.511929 8018 scope.go:117] "RemoveContainer" containerID="eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195"
Feb 17 15:10:02.512264 master-0 kubenswrapper[8018]: E0217 15:10:02.512235 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195\": container with ID starting with eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195 not found: ID does not exist" containerID="eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195"
Feb 17 15:10:02.512335 master-0 kubenswrapper[8018]: I0217 15:10:02.512274 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195"} err="failed to get container status \"eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195\": rpc error: code = NotFound desc = could not find container \"eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195\": container with ID starting with eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195 not found: ID does not exist"
Feb 17 15:10:02.512335 master-0 kubenswrapper[8018]: I0217 15:10:02.512304 8018 scope.go:117] "RemoveContainer" containerID="be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082"
Feb 17 15:10:02.512639 master-0 kubenswrapper[8018]: E0217 15:10:02.512602 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082\": container with ID starting with be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082 not found: ID does not exist" containerID="be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082"
Feb 17 15:10:02.512719 master-0 kubenswrapper[8018]: I0217 15:10:02.512653 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082"} err="failed to get container status \"be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082\": rpc error: code = NotFound desc = could not find container \"be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082\": container with ID starting with be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082 not found: ID does not exist"
Feb 17 15:10:02.512719 master-0 kubenswrapper[8018]: I0217 15:10:02.512688 8018 scope.go:117] "RemoveContainer" containerID="0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921"
Feb 17 15:10:02.513091 master-0 kubenswrapper[8018]: I0217 15:10:02.513056 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921"} err="failed to get container status \"0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921\": rpc error: code = NotFound desc = could not find container \"0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921\": container with ID starting with 0f854b02f9a47f01a182111fe0dc6d75ecf5c989479c43d48c11fcbb9213a921 not found: ID does not exist"
Feb 17 15:10:02.513091 master-0 kubenswrapper[8018]: I0217 15:10:02.513083 8018 scope.go:117] "RemoveContainer" containerID="eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195"
Feb 17 15:10:02.513341 master-0 kubenswrapper[8018]: I0217 15:10:02.513319 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195"} err="failed to get container status \"eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195\": rpc error: code = NotFound desc = could not find container \"eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195\": container with ID starting with eb56ef055c1955a49133c46d779e4a9321151a34882510c0bf04a118009cb195 not found: ID does not exist"
Feb 17 15:10:02.513341 master-0 kubenswrapper[8018]: I0217 15:10:02.513339 8018 scope.go:117] "RemoveContainer" containerID="be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082"
Feb 17 15:10:02.513605 master-0 kubenswrapper[8018]: I0217 15:10:02.513580 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082"} err="failed to get container status \"be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082\": rpc error: code = NotFound desc = could not find container \"be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082\": container with ID starting with be1509239dcca9f09c72f6d3a6a542c21a0ad0aab33fa18f080786b5877fb082 not found: ID does not exist"
Feb 17 15:10:02.539853 master-0 kubenswrapper[8018]: I0217 15:10:02.539660 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd"]
Feb 17 15:10:02.541756 master-0 kubenswrapper[8018]: I0217 15:10:02.541656 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd"]
Feb 17 15:10:02.578272 master-0 kubenswrapper[8018]: I0217 15:10:02.578189 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"]
Feb 17 15:10:02.578665 master-0 kubenswrapper[8018]: E0217 15:10:02.578442 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerName="cluster-cloud-controller-manager"
Feb 17 15:10:02.578665 master-0 kubenswrapper[8018]: I0217 15:10:02.578584 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerName="cluster-cloud-controller-manager"
Feb 17 15:10:02.578665 master-0 kubenswrapper[8018]: E0217 15:10:02.578608 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerName="config-sync-controllers"
Feb 17 15:10:02.578665 master-0 kubenswrapper[8018]: I0217 15:10:02.578617 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerName="config-sync-controllers"
Feb 17 15:10:02.578665 master-0 kubenswrapper[8018]: E0217 15:10:02.578640 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a162205-f111-49b4-9f46-0b40b6184336" containerName="collect-profiles"
Feb 17 15:10:02.578665 master-0 kubenswrapper[8018]: I0217 15:10:02.578648 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a162205-f111-49b4-9f46-0b40b6184336" containerName="collect-profiles"
Feb 17 15:10:02.578665 master-0 kubenswrapper[8018]: E0217 15:10:02.578662 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerName="kube-rbac-proxy"
Feb 17 15:10:02.578665 master-0 kubenswrapper[8018]: I0217 15:10:02.578670 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerName="kube-rbac-proxy"
Feb 17 15:10:02.579675 master-0 kubenswrapper[8018]: I0217 15:10:02.578833 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerName="kube-rbac-proxy"
Feb 17 15:10:02.579675 master-0 kubenswrapper[8018]: I0217 15:10:02.578849 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a162205-f111-49b4-9f46-0b40b6184336" containerName="collect-profiles"
Feb 17 15:10:02.579675 master-0 kubenswrapper[8018]: I0217 15:10:02.578868 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerName="config-sync-controllers"
Feb 17 15:10:02.579675 master-0 kubenswrapper[8018]: I0217 15:10:02.578878 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerName="cluster-cloud-controller-manager"
Feb 17 15:10:02.579675 master-0 kubenswrapper[8018]: E0217 15:10:02.579029 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerName="kube-rbac-proxy"
Feb 17 15:10:02.579675 master-0 kubenswrapper[8018]: I0217 15:10:02.579097 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerName="kube-rbac-proxy"
Feb 17 15:10:02.579675 master-0 kubenswrapper[8018]: I0217 15:10:02.579266 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" containerName="kube-rbac-proxy"
Feb 17 15:10:02.580972 master-0 kubenswrapper[8018]: I0217 15:10:02.580116 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.582945 master-0 kubenswrapper[8018]: I0217 15:10:02.582895 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dkdg8"
Feb 17 15:10:02.583121 master-0 kubenswrapper[8018]: I0217 15:10:02.582892 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 17 15:10:02.583121 master-0 kubenswrapper[8018]: I0217 15:10:02.582961 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 17 15:10:02.583121 master-0 kubenswrapper[8018]: I0217 15:10:02.582892 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:10:02.583121 master-0 kubenswrapper[8018]: I0217 15:10:02.583068 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 17 15:10:02.583595 master-0 kubenswrapper[8018]: I0217 15:10:02.582951 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 17 15:10:02.607386 master-0 kubenswrapper[8018]: I0217 15:10:02.607306 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-b9c8fdfbc-rh9v2_e6d0ea7a-6784-4c13-ad65-6c947dbcf136/controller-manager/1.log"
Feb 17 15:10:02.681432 master-0 kubenswrapper[8018]: I0217 15:10:02.681341 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/14723cb7-2d96-42b7-b559-70386c4c841c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.681432 master-0 kubenswrapper[8018]: I0217 15:10:02.681471 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.681905 master-0 kubenswrapper[8018]: I0217 15:10:02.681500 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lw7x\" (UniqueName: \"kubernetes.io/projected/14723cb7-2d96-42b7-b559-70386c4c841c-kube-api-access-7lw7x\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.681905 master-0 kubenswrapper[8018]: I0217 15:10:02.681530 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.681905 master-0 kubenswrapper[8018]: I0217 15:10:02.681783 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14723cb7-2d96-42b7-b559-70386c4c841c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.783075 master-0 kubenswrapper[8018]: I0217 15:10:02.782993 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.784078 master-0 kubenswrapper[8018]: I0217 15:10:02.783289 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14723cb7-2d96-42b7-b559-70386c4c841c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.784078 master-0 kubenswrapper[8018]: I0217 15:10:02.783354 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/14723cb7-2d96-42b7-b559-70386c4c841c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.784078 master-0 kubenswrapper[8018]: I0217 15:10:02.783444 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.784078 master-0 kubenswrapper[8018]: I0217 15:10:02.783505 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lw7x\" (UniqueName: \"kubernetes.io/projected/14723cb7-2d96-42b7-b559-70386c4c841c-kube-api-access-7lw7x\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.784078 master-0 kubenswrapper[8018]: I0217 15:10:02.783986 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14723cb7-2d96-42b7-b559-70386c4c841c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.785281 master-0 kubenswrapper[8018]: I0217 15:10:02.784712 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.785841 master-0 kubenswrapper[8018]: I0217 15:10:02.785783 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.787424 master-0 kubenswrapper[8018]: I0217 15:10:02.787384 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/14723cb7-2d96-42b7-b559-70386c4c841c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.822804 master-0 kubenswrapper[8018]: I0217 15:10:02.822665 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/2.log"
Feb 17 15:10:02.827048 master-0 kubenswrapper[8018]: I0217 15:10:02.827008 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lw7x\" (UniqueName: \"kubernetes.io/projected/14723cb7-2d96-42b7-b559-70386c4c841c-kube-api-access-7lw7x\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.907997 master-0 kubenswrapper[8018]: I0217 15:10:02.907932 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:10:02.993123 master-0 kubenswrapper[8018]: I0217 15:10:02.993069 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/3.log"
Feb 17 15:10:03.198377 master-0 kubenswrapper[8018]: I0217 15:10:03.198284 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-588944557d-kjh2v_08e27254-e906-484a-b346-036f898be3ae/catalog-operator/0.log"
Feb 17 15:10:03.389686 master-0 kubenswrapper[8018]: I0217 15:10:03.389615 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29522340-8cp6h_2a162205-f111-49b4-9f46-0b40b6184336/collect-profiles/0.log"
Feb 17 15:10:03.451892 master-0 kubenswrapper[8018]: I0217 15:10:03.451782 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="317bc9db-ab82-4df1-81da-1a091f88acb1" path="/var/lib/kubelet/pods/317bc9db-ab82-4df1-81da-1a091f88acb1/volumes"
Feb 17 15:10:03.482409 master-0 kubenswrapper[8018]: W0217 15:10:03.482360 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14723cb7_2d96_42b7_b559_70386c4c841c.slice/crio-0592ebe07bf5febe5898e5f99574d61161c0cfa6ea6743adf0c7c030853141ad WatchSource:0}: Error finding container 0592ebe07bf5febe5898e5f99574d61161c0cfa6ea6743adf0c7c030853141ad: Status 404 returned error can't find the container with id 0592ebe07bf5febe5898e5f99574d61161c0cfa6ea6743adf0c7c030853141ad
Feb 17 15:10:03.595732 master-0 kubenswrapper[8018]: I0217 15:10:03.595682 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b56bd877c-tk8xm_257db04b-7203-4a1d-b3d4-bd4db258a3cc/olm-operator/0.log"
Feb 17 15:10:03.692630 master-0 kubenswrapper[8018]: I0217 15:10:03.692574 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j"
Feb 17 15:10:03.692761 master-0 kubenswrapper[8018]: E0217 15:10:03.692731 8018 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Feb 17 15:10:03.692834 master-0 kubenswrapper[8018]: E0217 15:10:03.692815 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls podName:784b804f-6bcf-4cbd-a19e-9b1fa244354e nodeName:}" failed. No retries permitted until 2026-02-17 15:10:11.69279411 +0000 UTC m=+444.445137180 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-nzz2j" (UID: "784b804f-6bcf-4cbd-a19e-9b1fa244354e") : secret "prometheus-operator-tls" not found
Feb 17 15:10:04.162097 master-0 kubenswrapper[8018]: I0217 15:10:04.162021 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-t7n5b_33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/package-server-manager/0.log"
Feb 17 15:10:04.199788 master-0 kubenswrapper[8018]: I0217 15:10:04.199557 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s"
Feb 17 15:10:04.199970 master-0 kubenswrapper[8018]: E0217 15:10:04.199840 8018 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found
Feb 17 15:10:04.199970 master-0 kubenswrapper[8018]: E0217 15:10:04.199940 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls podName:76d3da23-3347-4a5c-b328-d92671897ecc nodeName:}" failed. No retries permitted until 2026-02-17 15:10:12.19991298 +0000 UTC m=+444.952256050 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls") pod "machine-approver-8569dd85ff-f9g8s" (UID: "76d3da23-3347-4a5c-b328-d92671897ecc") : secret "machine-approver-tls" not found
Feb 17 15:10:04.200192 master-0 kubenswrapper[8018]: I0217 15:10:04.200144 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-t7n5b_33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/kube-rbac-proxy/0.log"
Feb 17 15:10:04.393285 master-0 kubenswrapper[8018]: I0217 15:10:04.393242 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-t7n5b_33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/package-server-manager/1.log"
Feb 17 15:10:04.453878 master-0 kubenswrapper[8018]: I0217 15:10:04.453835 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerStarted","Data":"426e84564cdde730130665e18be2c56771ee413958b73511ab6a3d57c4226dd6"}
Feb 17 15:10:04.454032 master-0 kubenswrapper[8018]: I0217 15:10:04.453885 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerStarted","Data":"7b0bc73a19929878c76a20f8913258b82b0659b1d457e21ec06a82cf6b136195"}
Feb 17 15:10:04.454032 master-0 kubenswrapper[8018]: I0217 15:10:04.453903 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerStarted","Data":"0592ebe07bf5febe5898e5f99574d61161c0cfa6ea6743adf0c7c030853141ad"}
Feb 17 15:10:04.455745 master-0 kubenswrapper[8018]: I0217 15:10:04.455723 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" event={"ID":"a2d6e329-7ad8-4fc2-accc-66827f11743d","Type":"ContainerStarted","Data":"fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591"}
Feb 17 15:10:04.477388 master-0 kubenswrapper[8018]: I0217 15:10:04.477293 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podStartSLOduration=375.9358342 podStartE2EDuration="6m26.47727402s" podCreationTimestamp="2026-02-17 15:03:38 +0000 UTC" firstStartedPulling="2026-02-17 15:09:53.024748571 +0000 UTC m=+425.777091631" lastFinishedPulling="2026-02-17 15:10:03.566188361 +0000 UTC m=+436.318531451" observedRunningTime="2026-02-17 15:10:04.47604899 +0000 UTC m=+437.228392050" watchObservedRunningTime="2026-02-17 15:10:04.47727402 +0000 UTC m=+437.229617070"
Feb 17 15:10:04.594182 master-0 kubenswrapper[8018]: I0217 15:10:04.594119 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-67d4dbd88b-szr25_b58e9d93-7683-440d-a603-9543e5455490/packageserver/0.log"
Feb 17 15:10:04.979971 master-0 kubenswrapper[8018]: I0217 15:10:04.979895 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-g8w2f"
Feb 17 15:10:04.983078 master-0 kubenswrapper[8018]: I0217 15:10:04.983041 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:10:04.983078 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:10:04.983078 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:10:04.983078 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:10:04.983334 master-0 kubenswrapper[8018]: I0217 15:10:04.983103 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:10:05.467918 master-0 kubenswrapper[8018]: I0217 15:10:05.467807 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/0.log"
Feb 17 15:10:05.469084 master-0 kubenswrapper[8018]: I0217 15:10:05.468997 8018 generic.go:334] "Generic (PLEG): container finished" podID="14723cb7-2d96-42b7-b559-70386c4c841c" containerID="bba706cfc465ab241ea3310492ef8eec45c5cc575961e69581ab1488e1dcfe42" exitCode=1
Feb 17 15:10:05.469224 master-0 kubenswrapper[8018]: I0217 15:10:05.469168 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerDied","Data":"bba706cfc465ab241ea3310492ef8eec45c5cc575961e69581ab1488e1dcfe42"}
Feb 17 15:10:05.469840 master-0 kubenswrapper[8018]: I0217 15:10:05.469786 8018 scope.go:117] "RemoveContainer" containerID="bba706cfc465ab241ea3310492ef8eec45c5cc575961e69581ab1488e1dcfe42"
Feb 17 15:10:05.982510 master-0 kubenswrapper[8018]: I0217 15:10:05.982319 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:10:05.982510 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:10:05.982510 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:10:05.982510 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:10:05.982510 master-0 kubenswrapper[8018]: I0217 15:10:05.982406 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:10:06.479570 master-0 kubenswrapper[8018]: I0217 15:10:06.479496 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/1.log"
Feb 17 15:10:06.480655 master-0 kubenswrapper[8018]: I0217 15:10:06.480569 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/0.log"
Feb 17 15:10:06.482122 master-0 kubenswrapper[8018]: I0217 15:10:06.482041 8018 generic.go:334] "Generic (PLEG): container finished" podID="14723cb7-2d96-42b7-b559-70386c4c841c" containerID="370a9fa39c115ddfb282c3ea06c396a1c401f6a152b22978d7d01a7373e25b61" exitCode=1
Feb 17 15:10:06.482298 master-0 kubenswrapper[8018]: I0217 15:10:06.482186 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerDied","Data":"370a9fa39c115ddfb282c3ea06c396a1c401f6a152b22978d7d01a7373e25b61"}
Feb 17 15:10:06.482298 master-0 kubenswrapper[8018]: I0217 15:10:06.482261 8018 scope.go:117] "RemoveContainer" containerID="bba706cfc465ab241ea3310492ef8eec45c5cc575961e69581ab1488e1dcfe42"
Feb 17
15:10:06.483029 master-0 kubenswrapper[8018]: I0217 15:10:06.482978 8018 scope.go:117] "RemoveContainer" containerID="370a9fa39c115ddfb282c3ea06c396a1c401f6a152b22978d7d01a7373e25b61" Feb 17 15:10:06.483342 master-0 kubenswrapper[8018]: E0217 15:10:06.483259 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:10:06.981339 master-0 kubenswrapper[8018]: I0217 15:10:06.981295 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:06.981339 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:06.981339 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:06.981339 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:06.981760 master-0 kubenswrapper[8018]: I0217 15:10:06.981729 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:07.492120 master-0 kubenswrapper[8018]: I0217 15:10:07.491914 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/1.log" Feb 17 15:10:07.495958 master-0 
kubenswrapper[8018]: I0217 15:10:07.495557 8018 scope.go:117] "RemoveContainer" containerID="370a9fa39c115ddfb282c3ea06c396a1c401f6a152b22978d7d01a7373e25b61" Feb 17 15:10:07.496539 master-0 kubenswrapper[8018]: E0217 15:10:07.496451 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:10:07.982703 master-0 kubenswrapper[8018]: I0217 15:10:07.982638 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:07.982703 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:07.982703 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:07.982703 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:07.982703 master-0 kubenswrapper[8018]: I0217 15:10:07.982704 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:08.287298 master-0 kubenswrapper[8018]: I0217 15:10:08.287158 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: 
\"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:10:08.287503 master-0 kubenswrapper[8018]: E0217 15:10:08.287374 8018 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 17 15:10:08.287548 master-0 kubenswrapper[8018]: E0217 15:10:08.287509 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert podName:c97d328c-95b6-4511-aa90-531ab42b9653 nodeName:}" failed. No retries permitted until 2026-02-17 15:10:40.287484607 +0000 UTC m=+473.039827707 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-p8hbc" (UID: "c97d328c-95b6-4511-aa90-531ab42b9653") : secret "cloud-credential-operator-serving-cert" not found Feb 17 15:10:08.983345 master-0 kubenswrapper[8018]: I0217 15:10:08.983265 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:08.983345 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:08.983345 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:08.983345 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:08.983345 master-0 kubenswrapper[8018]: I0217 15:10:08.983339 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 17 15:10:09.608589 master-0 kubenswrapper[8018]: I0217 15:10:09.607901 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:10:09.608589 master-0 kubenswrapper[8018]: E0217 15:10:09.608231 8018 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 17 15:10:09.608589 master-0 kubenswrapper[8018]: E0217 15:10:09.608400 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls podName:6b7d1adb-b23b-4702-be7d-27e818e8fd63 nodeName:}" failed. No retries permitted until 2026-02-17 15:10:41.608359077 +0000 UTC m=+474.360702227 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-hr9g4" (UID: "6b7d1adb-b23b-4702-be7d-27e818e8fd63") : secret "samples-operator-tls" not found Feb 17 15:10:09.983172 master-0 kubenswrapper[8018]: I0217 15:10:09.983099 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:09.983172 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:09.983172 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:09.983172 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:09.984122 master-0 kubenswrapper[8018]: I0217 15:10:09.983220 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:10.982816 master-0 kubenswrapper[8018]: I0217 15:10:10.982727 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:10.982816 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:10.982816 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:10.982816 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:10.982816 master-0 kubenswrapper[8018]: I0217 15:10:10.982810 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:11.128407 master-0 kubenswrapper[8018]: I0217 15:10:11.128324 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" Feb 17 15:10:11.128998 master-0 kubenswrapper[8018]: E0217 15:10:11.128639 8018 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Feb 17 15:10:11.128998 master-0 kubenswrapper[8018]: E0217 15:10:11.128753 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert podName:c8646e5c-c2ce-48e6-b757-58044769f479 nodeName:}" failed. No retries permitted until 2026-02-17 15:10:43.128726915 +0000 UTC m=+475.881070035 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert") pod "cluster-autoscaler-operator-67fd9768b5-6dzpr" (UID: "c8646e5c-c2ce-48e6-b757-58044769f479") : secret "cluster-autoscaler-operator-cert" not found Feb 17 15:10:11.738178 master-0 kubenswrapper[8018]: I0217 15:10:11.738064 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:10:11.738586 master-0 kubenswrapper[8018]: E0217 15:10:11.738348 8018 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 17 15:10:11.738586 master-0 kubenswrapper[8018]: E0217 15:10:11.738547 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls podName:784b804f-6bcf-4cbd-a19e-9b1fa244354e nodeName:}" failed. No retries permitted until 2026-02-17 15:10:27.738514596 +0000 UTC m=+460.490857646 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-nzz2j" (UID: "784b804f-6bcf-4cbd-a19e-9b1fa244354e") : secret "prometheus-operator-tls" not found Feb 17 15:10:11.982706 master-0 kubenswrapper[8018]: I0217 15:10:11.982634 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:11.982706 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:11.982706 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:11.982706 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:11.983113 master-0 kubenswrapper[8018]: I0217 15:10:11.982731 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:12.244901 master-0 kubenswrapper[8018]: I0217 15:10:12.244784 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:10:12.245954 master-0 kubenswrapper[8018]: E0217 15:10:12.245018 8018 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 17 15:10:12.245954 master-0 kubenswrapper[8018]: E0217 15:10:12.245112 8018 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls podName:76d3da23-3347-4a5c-b328-d92671897ecc nodeName:}" failed. No retries permitted until 2026-02-17 15:10:28.245086653 +0000 UTC m=+460.997429743 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls") pod "machine-approver-8569dd85ff-f9g8s" (UID: "76d3da23-3347-4a5c-b328-d92671897ecc") : secret "machine-approver-tls" not found Feb 17 15:10:12.980345 master-0 kubenswrapper[8018]: I0217 15:10:12.980258 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:10:12.982322 master-0 kubenswrapper[8018]: I0217 15:10:12.982268 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:12.982322 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:12.982322 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:12.982322 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:12.982542 master-0 kubenswrapper[8018]: I0217 15:10:12.982352 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:13.969913 master-0 kubenswrapper[8018]: I0217 15:10:13.969847 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: 
\"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:10:13.970576 master-0 kubenswrapper[8018]: E0217 15:10:13.970360 8018 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 17 15:10:13.970576 master-0 kubenswrapper[8018]: E0217 15:10:13.970502 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls podName:655e4000-0ad4-4349-8c31-e0c952e4be30 nodeName:}" failed. No retries permitted until 2026-02-17 15:10:45.970478824 +0000 UTC m=+478.722821884 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-g6fgz" (UID: "655e4000-0ad4-4349-8c31-e0c952e4be30") : secret "machine-api-operator-tls" not found Feb 17 15:10:13.982419 master-0 kubenswrapper[8018]: I0217 15:10:13.982347 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:13.982419 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:13.982419 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:13.982419 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:13.982695 master-0 kubenswrapper[8018]: I0217 15:10:13.982481 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:14.982889 master-0 kubenswrapper[8018]: I0217 15:10:14.982814 8018 
patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:14.982889 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:14.982889 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:14.982889 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:14.984069 master-0 kubenswrapper[8018]: I0217 15:10:14.982905 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:15.982722 master-0 kubenswrapper[8018]: I0217 15:10:15.982648 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:15.982722 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:15.982722 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:15.982722 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:15.982722 master-0 kubenswrapper[8018]: I0217 15:10:15.982721 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:16.982997 master-0 kubenswrapper[8018]: I0217 15:10:16.982907 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:16.982997 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:16.982997 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:16.982997 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:16.983818 master-0 kubenswrapper[8018]: I0217 15:10:16.983006 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:17.984167 master-0 kubenswrapper[8018]: I0217 15:10:17.983959 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:17.984167 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:17.984167 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:17.984167 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:17.984167 master-0 kubenswrapper[8018]: I0217 15:10:17.984063 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:18.983553 master-0 kubenswrapper[8018]: I0217 15:10:18.983486 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:18.983553 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:18.983553 master-0 kubenswrapper[8018]: [+]process-running ok 
Feb 17 15:10:18.983553 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:18.983985 master-0 kubenswrapper[8018]: I0217 15:10:18.983583 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:19.982813 master-0 kubenswrapper[8018]: I0217 15:10:19.982707 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:19.982813 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:19.982813 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:19.982813 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:19.983890 master-0 kubenswrapper[8018]: I0217 15:10:19.982823 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:20.439619 master-0 kubenswrapper[8018]: I0217 15:10:20.439561 8018 scope.go:117] "RemoveContainer" containerID="370a9fa39c115ddfb282c3ea06c396a1c401f6a152b22978d7d01a7373e25b61" Feb 17 15:10:20.982872 master-0 kubenswrapper[8018]: I0217 15:10:20.982701 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:20.982872 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:20.982872 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:20.982872 
master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:20.982872 master-0 kubenswrapper[8018]: I0217 15:10:20.982756 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:21.590785 master-0 kubenswrapper[8018]: I0217 15:10:21.590637 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/2.log" Feb 17 15:10:21.591600 master-0 kubenswrapper[8018]: I0217 15:10:21.591505 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/1.log" Feb 17 15:10:21.593012 master-0 kubenswrapper[8018]: I0217 15:10:21.592704 8018 generic.go:334] "Generic (PLEG): container finished" podID="14723cb7-2d96-42b7-b559-70386c4c841c" containerID="ef9536cdbdd3e1f4cbae9514886c228a3d3a39f7462f2953f7a89bb624df09e8" exitCode=1 Feb 17 15:10:21.593012 master-0 kubenswrapper[8018]: I0217 15:10:21.592769 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerDied","Data":"ef9536cdbdd3e1f4cbae9514886c228a3d3a39f7462f2953f7a89bb624df09e8"} Feb 17 15:10:21.593012 master-0 kubenswrapper[8018]: I0217 15:10:21.592830 8018 scope.go:117] "RemoveContainer" containerID="370a9fa39c115ddfb282c3ea06c396a1c401f6a152b22978d7d01a7373e25b61" Feb 17 15:10:21.594481 master-0 kubenswrapper[8018]: I0217 15:10:21.593775 8018 scope.go:117] "RemoveContainer" 
containerID="ef9536cdbdd3e1f4cbae9514886c228a3d3a39f7462f2953f7a89bb624df09e8" Feb 17 15:10:21.594481 master-0 kubenswrapper[8018]: E0217 15:10:21.594066 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:10:21.982598 master-0 kubenswrapper[8018]: I0217 15:10:21.982508 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:21.982598 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:21.982598 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:21.982598 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:21.982598 master-0 kubenswrapper[8018]: I0217 15:10:21.982590 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:22.606944 master-0 kubenswrapper[8018]: I0217 15:10:22.606837 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/2.log" Feb 17 15:10:22.984435 master-0 kubenswrapper[8018]: I0217 15:10:22.984303 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:22.984435 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:22.984435 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:22.984435 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:22.984435 master-0 kubenswrapper[8018]: I0217 15:10:22.984402 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:23.982753 master-0 kubenswrapper[8018]: I0217 15:10:23.982622 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:23.982753 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:23.982753 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:23.982753 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:23.982753 master-0 kubenswrapper[8018]: I0217 15:10:23.982728 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:24.983334 master-0 kubenswrapper[8018]: I0217 15:10:24.983243 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:24.983334 master-0 
kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:24.983334 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:24.983334 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:24.984339 master-0 kubenswrapper[8018]: I0217 15:10:24.983348 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:25.985537 master-0 kubenswrapper[8018]: I0217 15:10:25.985419 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:25.985537 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:25.985537 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:25.985537 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:25.986942 master-0 kubenswrapper[8018]: I0217 15:10:25.985622 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:26.982737 master-0 kubenswrapper[8018]: I0217 15:10:26.982652 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:26.982737 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:26.982737 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:26.982737 master-0 kubenswrapper[8018]: healthz check failed Feb 17 
15:10:26.983276 master-0 kubenswrapper[8018]: I0217 15:10:26.982753 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:27.774706 master-0 kubenswrapper[8018]: I0217 15:10:27.774637 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:10:27.775247 master-0 kubenswrapper[8018]: E0217 15:10:27.774792 8018 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 17 15:10:27.775247 master-0 kubenswrapper[8018]: E0217 15:10:27.774891 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls podName:784b804f-6bcf-4cbd-a19e-9b1fa244354e nodeName:}" failed. No retries permitted until 2026-02-17 15:10:59.774865967 +0000 UTC m=+492.527209077 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-nzz2j" (UID: "784b804f-6bcf-4cbd-a19e-9b1fa244354e") : secret "prometheus-operator-tls" not found Feb 17 15:10:27.982796 master-0 kubenswrapper[8018]: I0217 15:10:27.982596 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:27.982796 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:27.982796 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:27.982796 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:27.982796 master-0 kubenswrapper[8018]: I0217 15:10:27.982695 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:28.282124 master-0 kubenswrapper[8018]: I0217 15:10:28.281866 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:10:28.282124 master-0 kubenswrapper[8018]: E0217 15:10:28.282103 8018 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 17 15:10:28.282617 master-0 kubenswrapper[8018]: E0217 15:10:28.282250 8018 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls podName:76d3da23-3347-4a5c-b328-d92671897ecc nodeName:}" failed. No retries permitted until 2026-02-17 15:11:00.282200632 +0000 UTC m=+493.034543722 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls") pod "machine-approver-8569dd85ff-f9g8s" (UID: "76d3da23-3347-4a5c-b328-d92671897ecc") : secret "machine-approver-tls" not found Feb 17 15:10:28.981632 master-0 kubenswrapper[8018]: I0217 15:10:28.981545 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:28.981632 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:28.981632 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:28.981632 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:28.981632 master-0 kubenswrapper[8018]: I0217 15:10:28.981626 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:29.983059 master-0 kubenswrapper[8018]: I0217 15:10:29.982959 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:29.983059 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:29.983059 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:29.983059 master-0 kubenswrapper[8018]: healthz 
check failed Feb 17 15:10:29.983059 master-0 kubenswrapper[8018]: I0217 15:10:29.983045 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:30.982675 master-0 kubenswrapper[8018]: I0217 15:10:30.982567 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:30.982675 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:30.982675 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:30.982675 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:30.983835 master-0 kubenswrapper[8018]: I0217 15:10:30.982705 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:31.982301 master-0 kubenswrapper[8018]: I0217 15:10:31.982208 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:31.982301 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:31.982301 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:31.982301 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:31.983003 master-0 kubenswrapper[8018]: I0217 15:10:31.982319 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" 
podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:32.981916 master-0 kubenswrapper[8018]: I0217 15:10:32.981817 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:32.981916 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:32.981916 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:32.981916 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:32.981916 master-0 kubenswrapper[8018]: I0217 15:10:32.981886 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:33.983343 master-0 kubenswrapper[8018]: I0217 15:10:33.983245 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:33.983343 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:33.983343 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:33.983343 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:33.983343 master-0 kubenswrapper[8018]: I0217 15:10:33.983347 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:34.439984 master-0 kubenswrapper[8018]: I0217 15:10:34.439903 8018 
scope.go:117] "RemoveContainer" containerID="ef9536cdbdd3e1f4cbae9514886c228a3d3a39f7462f2953f7a89bb624df09e8" Feb 17 15:10:34.440299 master-0 kubenswrapper[8018]: E0217 15:10:34.440140 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:10:34.983199 master-0 kubenswrapper[8018]: I0217 15:10:34.983104 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:34.983199 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:34.983199 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:34.983199 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:34.983199 master-0 kubenswrapper[8018]: I0217 15:10:34.983190 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:35.983810 master-0 kubenswrapper[8018]: I0217 15:10:35.983636 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:35.983810 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 
15:10:35.983810 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:35.983810 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:35.985188 master-0 kubenswrapper[8018]: I0217 15:10:35.983807 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:36.981912 master-0 kubenswrapper[8018]: I0217 15:10:36.981868 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:36.981912 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:36.981912 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:36.981912 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:36.982246 master-0 kubenswrapper[8018]: I0217 15:10:36.982220 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:37.982248 master-0 kubenswrapper[8018]: I0217 15:10:37.982085 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:37.982248 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:37.982248 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:37.982248 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:37.982248 master-0 kubenswrapper[8018]: I0217 15:10:37.982168 
8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:38.983300 master-0 kubenswrapper[8018]: I0217 15:10:38.983192 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:38.983300 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:38.983300 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:38.983300 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:38.984283 master-0 kubenswrapper[8018]: I0217 15:10:38.983315 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:39.983078 master-0 kubenswrapper[8018]: I0217 15:10:39.982987 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:39.983078 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:39.983078 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:39.983078 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:39.983078 master-0 kubenswrapper[8018]: I0217 15:10:39.983061 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 17 15:10:40.372927 master-0 kubenswrapper[8018]: I0217 15:10:40.372879 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:10:40.373221 master-0 kubenswrapper[8018]: E0217 15:10:40.373023 8018 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 17 15:10:40.373221 master-0 kubenswrapper[8018]: E0217 15:10:40.373072 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert podName:c97d328c-95b6-4511-aa90-531ab42b9653 nodeName:}" failed. No retries permitted until 2026-02-17 15:11:44.373058003 +0000 UTC m=+537.125401053 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-p8hbc" (UID: "c97d328c-95b6-4511-aa90-531ab42b9653") : secret "cloud-credential-operator-serving-cert" not found Feb 17 15:10:40.982422 master-0 kubenswrapper[8018]: I0217 15:10:40.982356 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:40.982422 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:40.982422 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:40.982422 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:40.982864 master-0 kubenswrapper[8018]: I0217 15:10:40.982430 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:41.692536 master-0 kubenswrapper[8018]: I0217 15:10:41.692426 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:10:41.694561 master-0 kubenswrapper[8018]: E0217 15:10:41.692695 8018 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Feb 17 15:10:41.694561 master-0 kubenswrapper[8018]: E0217 15:10:41.692783 8018 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls podName:6b7d1adb-b23b-4702-be7d-27e818e8fd63 nodeName:}" failed. No retries permitted until 2026-02-17 15:11:45.692758975 +0000 UTC m=+538.445102055 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-hr9g4" (UID: "6b7d1adb-b23b-4702-be7d-27e818e8fd63") : secret "samples-operator-tls" not found Feb 17 15:10:41.983506 master-0 kubenswrapper[8018]: I0217 15:10:41.983253 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:41.983506 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:41.983506 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:41.983506 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:41.983506 master-0 kubenswrapper[8018]: I0217 15:10:41.983378 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:42.982202 master-0 kubenswrapper[8018]: I0217 15:10:42.982120 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:42.982202 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:42.982202 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 
15:10:42.982202 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:42.982202 master-0 kubenswrapper[8018]: I0217 15:10:42.982186 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:43.212058 master-0 kubenswrapper[8018]: I0217 15:10:43.211998 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" Feb 17 15:10:43.212509 master-0 kubenswrapper[8018]: E0217 15:10:43.212490 8018 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Feb 17 15:10:43.212647 master-0 kubenswrapper[8018]: E0217 15:10:43.212633 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert podName:c8646e5c-c2ce-48e6-b757-58044769f479 nodeName:}" failed. No retries permitted until 2026-02-17 15:11:47.212610569 +0000 UTC m=+539.964953619 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert") pod "cluster-autoscaler-operator-67fd9768b5-6dzpr" (UID: "c8646e5c-c2ce-48e6-b757-58044769f479") : secret "cluster-autoscaler-operator-cert" not found Feb 17 15:10:43.982710 master-0 kubenswrapper[8018]: I0217 15:10:43.982632 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:43.982710 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:43.982710 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:43.982710 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:43.983386 master-0 kubenswrapper[8018]: I0217 15:10:43.982735 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:44.982867 master-0 kubenswrapper[8018]: I0217 15:10:44.982781 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:44.982867 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:44.982867 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:44.982867 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:44.983540 master-0 kubenswrapper[8018]: I0217 15:10:44.982872 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:45.982634 master-0 kubenswrapper[8018]: I0217 15:10:45.982525 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:45.982634 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:45.982634 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:45.982634 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:45.983562 master-0 kubenswrapper[8018]: I0217 15:10:45.982647 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:46.051984 master-0 kubenswrapper[8018]: I0217 15:10:46.051900 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:10:46.052373 master-0 kubenswrapper[8018]: E0217 15:10:46.052072 8018 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 17 15:10:46.052373 master-0 kubenswrapper[8018]: E0217 15:10:46.052140 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls podName:655e4000-0ad4-4349-8c31-e0c952e4be30 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:11:50.052124475 +0000 UTC m=+542.804467515 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-g6fgz" (UID: "655e4000-0ad4-4349-8c31-e0c952e4be30") : secret "machine-api-operator-tls" not found Feb 17 15:10:46.982882 master-0 kubenswrapper[8018]: I0217 15:10:46.982759 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:46.982882 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:46.982882 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:46.982882 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:46.982882 master-0 kubenswrapper[8018]: I0217 15:10:46.982862 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:47.983870 master-0 kubenswrapper[8018]: I0217 15:10:47.983659 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:47.983870 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:47.983870 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:47.983870 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:47.983870 master-0 kubenswrapper[8018]: I0217 15:10:47.983750 8018 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:48.981809 master-0 kubenswrapper[8018]: I0217 15:10:48.981770 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:48.981809 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:48.981809 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:48.981809 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:48.982189 master-0 kubenswrapper[8018]: I0217 15:10:48.982164 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:49.441154 master-0 kubenswrapper[8018]: I0217 15:10:49.441066 8018 scope.go:117] "RemoveContainer" containerID="ef9536cdbdd3e1f4cbae9514886c228a3d3a39f7462f2953f7a89bb624df09e8" Feb 17 15:10:49.801195 master-0 kubenswrapper[8018]: I0217 15:10:49.800976 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/2.log" Feb 17 15:10:49.801600 master-0 kubenswrapper[8018]: I0217 15:10:49.801555 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerStarted","Data":"606702a575e1ee90e684dca084119dc95412eed58f966f94ef4e00d4013c8904"} Feb 17 
15:10:49.825255 master-0 kubenswrapper[8018]: I0217 15:10:49.824755 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podStartSLOduration=47.824735029 podStartE2EDuration="47.824735029s" podCreationTimestamp="2026-02-17 15:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:10:49.823397816 +0000 UTC m=+482.575740866" watchObservedRunningTime="2026-02-17 15:10:49.824735029 +0000 UTC m=+482.577078069" Feb 17 15:10:49.982263 master-0 kubenswrapper[8018]: I0217 15:10:49.981675 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:49.982263 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:49.982263 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:49.982263 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:49.982263 master-0 kubenswrapper[8018]: I0217 15:10:49.981779 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:50.814796 master-0 kubenswrapper[8018]: I0217 15:10:50.814721 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/3.log" Feb 17 15:10:50.815807 master-0 kubenswrapper[8018]: I0217 15:10:50.815498 8018 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/2.log" Feb 17 15:10:50.816852 master-0 kubenswrapper[8018]: I0217 15:10:50.816787 8018 generic.go:334] "Generic (PLEG): container finished" podID="14723cb7-2d96-42b7-b559-70386c4c841c" containerID="606702a575e1ee90e684dca084119dc95412eed58f966f94ef4e00d4013c8904" exitCode=1 Feb 17 15:10:50.817032 master-0 kubenswrapper[8018]: I0217 15:10:50.816848 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerDied","Data":"606702a575e1ee90e684dca084119dc95412eed58f966f94ef4e00d4013c8904"} Feb 17 15:10:50.817032 master-0 kubenswrapper[8018]: I0217 15:10:50.816909 8018 scope.go:117] "RemoveContainer" containerID="ef9536cdbdd3e1f4cbae9514886c228a3d3a39f7462f2953f7a89bb624df09e8" Feb 17 15:10:50.817994 master-0 kubenswrapper[8018]: I0217 15:10:50.817946 8018 scope.go:117] "RemoveContainer" containerID="606702a575e1ee90e684dca084119dc95412eed58f966f94ef4e00d4013c8904" Feb 17 15:10:50.818446 master-0 kubenswrapper[8018]: E0217 15:10:50.818384 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:10:50.982108 master-0 kubenswrapper[8018]: I0217 15:10:50.982031 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:50.982108 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:50.982108 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:50.982108 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:50.982503 master-0 kubenswrapper[8018]: I0217 15:10:50.982124 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:51.825052 master-0 kubenswrapper[8018]: I0217 15:10:51.824986 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/3.log" Feb 17 15:10:51.983275 master-0 kubenswrapper[8018]: I0217 15:10:51.983145 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:51.983275 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:51.983275 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:51.983275 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:51.983765 master-0 kubenswrapper[8018]: I0217 15:10:51.983276 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:52.982417 master-0 kubenswrapper[8018]: I0217 15:10:52.982318 8018 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:52.982417 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:52.982417 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:52.982417 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:52.983337 master-0 kubenswrapper[8018]: I0217 15:10:52.982433 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:53.983494 master-0 kubenswrapper[8018]: I0217 15:10:53.983351 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:53.983494 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:53.983494 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:53.983494 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:53.985099 master-0 kubenswrapper[8018]: I0217 15:10:53.983514 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:54.982734 master-0 kubenswrapper[8018]: I0217 15:10:54.982661 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 
15:10:54.982734 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:54.982734 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:54.982734 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:54.983090 master-0 kubenswrapper[8018]: I0217 15:10:54.982752 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:55.983933 master-0 kubenswrapper[8018]: I0217 15:10:55.983858 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:55.983933 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:55.983933 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:55.983933 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:55.985017 master-0 kubenswrapper[8018]: I0217 15:10:55.983946 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:56.069052 master-0 kubenswrapper[8018]: I0217 15:10:56.063630 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6bhf8"] Feb 17 15:10:56.069052 master-0 kubenswrapper[8018]: I0217 15:10:56.065278 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:10:56.069052 master-0 kubenswrapper[8018]: I0217 15:10:56.068152 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-4zhjq" Feb 17 15:10:56.069052 master-0 kubenswrapper[8018]: I0217 15:10:56.068240 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 15:10:56.069052 master-0 kubenswrapper[8018]: I0217 15:10:56.068286 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 15:10:56.069052 master-0 kubenswrapper[8018]: I0217 15:10:56.068818 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 15:10:56.086565 master-0 kubenswrapper[8018]: I0217 15:10:56.085863 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6bhf8"] Feb 17 15:10:56.155111 master-0 kubenswrapper[8018]: I0217 15:10:56.154952 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:10:56.155111 master-0 kubenswrapper[8018]: I0217 15:10:56.155038 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8ckv\" (UniqueName: \"kubernetes.io/projected/6d56f334-6c7b-4c92-9665-56300d44f9a3-kube-api-access-k8ckv\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:10:56.256902 master-0 kubenswrapper[8018]: I0217 15:10:56.256758 8018 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:10:56.256902 master-0 kubenswrapper[8018]: I0217 15:10:56.256843 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8ckv\" (UniqueName: \"kubernetes.io/projected/6d56f334-6c7b-4c92-9665-56300d44f9a3-kube-api-access-k8ckv\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:10:56.257148 master-0 kubenswrapper[8018]: E0217 15:10:56.256940 8018 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 17 15:10:56.257148 master-0 kubenswrapper[8018]: E0217 15:10:56.257042 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert podName:6d56f334-6c7b-4c92-9665-56300d44f9a3 nodeName:}" failed. No retries permitted until 2026-02-17 15:10:56.757015782 +0000 UTC m=+489.509358912 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert") pod "ingress-canary-6bhf8" (UID: "6d56f334-6c7b-4c92-9665-56300d44f9a3") : secret "canary-serving-cert" not found Feb 17 15:10:56.279576 master-0 kubenswrapper[8018]: I0217 15:10:56.279492 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8ckv\" (UniqueName: \"kubernetes.io/projected/6d56f334-6c7b-4c92-9665-56300d44f9a3-kube-api-access-k8ckv\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:10:56.766202 master-0 kubenswrapper[8018]: I0217 15:10:56.766104 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:10:56.766786 master-0 kubenswrapper[8018]: E0217 15:10:56.766550 8018 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 17 15:10:56.766786 master-0 kubenswrapper[8018]: E0217 15:10:56.766678 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert podName:6d56f334-6c7b-4c92-9665-56300d44f9a3 nodeName:}" failed. No retries permitted until 2026-02-17 15:10:57.766642134 +0000 UTC m=+490.518985224 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert") pod "ingress-canary-6bhf8" (UID: "6d56f334-6c7b-4c92-9665-56300d44f9a3") : secret "canary-serving-cert" not found Feb 17 15:10:56.873134 master-0 kubenswrapper[8018]: I0217 15:10:56.873074 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/1.log" Feb 17 15:10:56.874714 master-0 kubenswrapper[8018]: I0217 15:10:56.874683 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/0.log" Feb 17 15:10:56.874839 master-0 kubenswrapper[8018]: I0217 15:10:56.874765 8018 generic.go:334] "Generic (PLEG): container finished" podID="22a30079-d7fc-49cf-882e-1c5022cb5bf6" containerID="4f4889e4fc034bdf89049f32d3bbe8147db247c0bdabc918e6164722403d46c8" exitCode=1 Feb 17 15:10:56.874839 master-0 kubenswrapper[8018]: I0217 15:10:56.874830 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" event={"ID":"22a30079-d7fc-49cf-882e-1c5022cb5bf6","Type":"ContainerDied","Data":"4f4889e4fc034bdf89049f32d3bbe8147db247c0bdabc918e6164722403d46c8"} Feb 17 15:10:56.874968 master-0 kubenswrapper[8018]: I0217 15:10:56.874928 8018 scope.go:117] "RemoveContainer" containerID="e96d7161de590628bad20a520afcf9b1363c2b5f7629d556a379b4230528784f" Feb 17 15:10:56.875915 master-0 kubenswrapper[8018]: I0217 15:10:56.875643 8018 scope.go:117] "RemoveContainer" containerID="4f4889e4fc034bdf89049f32d3bbe8147db247c0bdabc918e6164722403d46c8" Feb 17 15:10:56.879211 master-0 kubenswrapper[8018]: E0217 15:10:56.878047 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting 
failed container=ingress-operator pod=ingress-operator-c588d8cb4-nclxg_openshift-ingress-operator(22a30079-d7fc-49cf-882e-1c5022cb5bf6)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" podUID="22a30079-d7fc-49cf-882e-1c5022cb5bf6" Feb 17 15:10:56.983930 master-0 kubenswrapper[8018]: I0217 15:10:56.983851 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:56.983930 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:56.983930 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:56.983930 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:56.984605 master-0 kubenswrapper[8018]: I0217 15:10:56.983957 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:57.782596 master-0 kubenswrapper[8018]: I0217 15:10:57.782508 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:10:57.782915 master-0 kubenswrapper[8018]: E0217 15:10:57.782885 8018 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 17 15:10:57.783104 master-0 kubenswrapper[8018]: E0217 15:10:57.782997 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert podName:6d56f334-6c7b-4c92-9665-56300d44f9a3 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:10:59.782966337 +0000 UTC m=+492.535309427 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert") pod "ingress-canary-6bhf8" (UID: "6d56f334-6c7b-4c92-9665-56300d44f9a3") : secret "canary-serving-cert" not found Feb 17 15:10:57.882957 master-0 kubenswrapper[8018]: I0217 15:10:57.882878 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/1.log" Feb 17 15:10:57.982609 master-0 kubenswrapper[8018]: I0217 15:10:57.982411 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:57.982609 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:57.982609 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:57.982609 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:57.982609 master-0 kubenswrapper[8018]: I0217 15:10:57.982526 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:58.982383 master-0 kubenswrapper[8018]: I0217 15:10:58.982283 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:58.982383 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:58.982383 master-0 kubenswrapper[8018]: [+]process-running ok Feb 
17 15:10:58.982383 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:58.983360 master-0 kubenswrapper[8018]: I0217 15:10:58.982380 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:10:59.817920 master-0 kubenswrapper[8018]: I0217 15:10:59.817827 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:10:59.817920 master-0 kubenswrapper[8018]: I0217 15:10:59.817942 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:10:59.818354 master-0 kubenswrapper[8018]: E0217 15:10:59.818066 8018 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 17 15:10:59.818354 master-0 kubenswrapper[8018]: E0217 15:10:59.818107 8018 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 17 15:10:59.818354 master-0 kubenswrapper[8018]: E0217 15:10:59.818155 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls podName:784b804f-6bcf-4cbd-a19e-9b1fa244354e nodeName:}" failed. 
No retries permitted until 2026-02-17 15:12:03.818131093 +0000 UTC m=+556.570474183 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-nzz2j" (UID: "784b804f-6bcf-4cbd-a19e-9b1fa244354e") : secret "prometheus-operator-tls" not found Feb 17 15:10:59.818354 master-0 kubenswrapper[8018]: E0217 15:10:59.818182 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert podName:6d56f334-6c7b-4c92-9665-56300d44f9a3 nodeName:}" failed. No retries permitted until 2026-02-17 15:11:03.818168954 +0000 UTC m=+496.570512044 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert") pod "ingress-canary-6bhf8" (UID: "6d56f334-6c7b-4c92-9665-56300d44f9a3") : secret "canary-serving-cert" not found Feb 17 15:10:59.982401 master-0 kubenswrapper[8018]: I0217 15:10:59.982335 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:10:59.982401 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:10:59.982401 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:10:59.982401 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:10:59.983052 master-0 kubenswrapper[8018]: I0217 15:10:59.982421 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:11:00.326895 master-0 kubenswrapper[8018]: I0217 15:11:00.326822 
8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:11:00.327165 master-0 kubenswrapper[8018]: E0217 15:11:00.326965 8018 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 17 15:11:00.327165 master-0 kubenswrapper[8018]: E0217 15:11:00.327021 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls podName:76d3da23-3347-4a5c-b328-d92671897ecc nodeName:}" failed. No retries permitted until 2026-02-17 15:12:04.327006937 +0000 UTC m=+557.079349987 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls") pod "machine-approver-8569dd85ff-f9g8s" (UID: "76d3da23-3347-4a5c-b328-d92671897ecc") : secret "machine-approver-tls" not found Feb 17 15:11:00.982941 master-0 kubenswrapper[8018]: I0217 15:11:00.982859 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:11:00.982941 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:11:00.982941 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:11:00.982941 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:11:00.984033 master-0 kubenswrapper[8018]: I0217 15:11:00.982947 8018 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:11:01.982870 master-0 kubenswrapper[8018]: I0217 15:11:01.982802 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:11:01.982870 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:11:01.982870 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:11:01.982870 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:11:01.983654 master-0 kubenswrapper[8018]: I0217 15:11:01.982887 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:11:02.982404 master-0 kubenswrapper[8018]: I0217 15:11:02.982326 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:11:02.982404 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:11:02.982404 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:11:02.982404 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:11:02.982761 master-0 kubenswrapper[8018]: I0217 15:11:02.982421 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:11:03.441302 
master-0 kubenswrapper[8018]: I0217 15:11:03.441176 8018 scope.go:117] "RemoveContainer" containerID="606702a575e1ee90e684dca084119dc95412eed58f966f94ef4e00d4013c8904" Feb 17 15:11:03.442203 master-0 kubenswrapper[8018]: E0217 15:11:03.441551 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:11:03.886004 master-0 kubenswrapper[8018]: I0217 15:11:03.885911 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:11:03.886303 master-0 kubenswrapper[8018]: E0217 15:11:03.886164 8018 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 17 15:11:03.886303 master-0 kubenswrapper[8018]: E0217 15:11:03.886295 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert podName:6d56f334-6c7b-4c92-9665-56300d44f9a3 nodeName:}" failed. No retries permitted until 2026-02-17 15:11:11.886273883 +0000 UTC m=+504.638616933 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert") pod "ingress-canary-6bhf8" (UID: "6d56f334-6c7b-4c92-9665-56300d44f9a3") : secret "canary-serving-cert" not found Feb 17 15:11:03.982960 master-0 kubenswrapper[8018]: I0217 15:11:03.982854 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:11:03.982960 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:11:03.982960 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:11:03.982960 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:11:03.983425 master-0 kubenswrapper[8018]: I0217 15:11:03.982989 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:11:04.982748 master-0 kubenswrapper[8018]: I0217 15:11:04.982658 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:11:04.982748 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:11:04.982748 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:11:04.982748 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:11:04.983741 master-0 kubenswrapper[8018]: I0217 15:11:04.982773 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 17 15:11:05.982919 master-0 kubenswrapper[8018]: I0217 15:11:05.982837 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:11:05.982919 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:11:05.982919 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:11:05.982919 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:11:05.984054 master-0 kubenswrapper[8018]: I0217 15:11:05.982940 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:11:06.988773 master-0 kubenswrapper[8018]: I0217 15:11:06.982421 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:11:06.988773 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:11:06.988773 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:11:06.988773 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:11:06.988773 master-0 kubenswrapper[8018]: I0217 15:11:06.982514 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:11:07.463250 master-0 kubenswrapper[8018]: I0217 15:11:07.463123 8018 scope.go:117] "RemoveContainer" containerID="4f4889e4fc034bdf89049f32d3bbe8147db247c0bdabc918e6164722403d46c8" 
Feb 17 15:11:07.957175 master-0 kubenswrapper[8018]: I0217 15:11:07.957112 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/1.log"
Feb 17 15:11:07.957534 master-0 kubenswrapper[8018]: I0217 15:11:07.957485 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" event={"ID":"22a30079-d7fc-49cf-882e-1c5022cb5bf6","Type":"ContainerStarted","Data":"bbb9d291b17c271b0bfc02764b8ad63a5a4d80141787014fe49630e60a725084"}
Feb 17 15:11:07.981807 master-0 kubenswrapper[8018]: I0217 15:11:07.981635 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:07.981807 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:07.981807 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:07.981807 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:07.981807 master-0 kubenswrapper[8018]: I0217 15:11:07.981708 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:08.982921 master-0 kubenswrapper[8018]: I0217 15:11:08.982835 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:08.982921 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:08.982921 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:08.982921 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:08.983887 master-0 kubenswrapper[8018]: I0217 15:11:08.982925 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:09.982673 master-0 kubenswrapper[8018]: I0217 15:11:09.982583 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:09.982673 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:09.982673 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:09.982673 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:09.983740 master-0 kubenswrapper[8018]: I0217 15:11:09.982706 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:10.982446 master-0 kubenswrapper[8018]: I0217 15:11:10.982348 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:10.982446 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:10.982446 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:10.982446 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:10.982847 master-0 kubenswrapper[8018]: I0217 15:11:10.982494 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:11.914254 master-0 kubenswrapper[8018]: I0217 15:11:11.914113 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8"
Feb 17 15:11:11.915546 master-0 kubenswrapper[8018]: E0217 15:11:11.914362 8018 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 17 15:11:11.915546 master-0 kubenswrapper[8018]: E0217 15:11:11.914539 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert podName:6d56f334-6c7b-4c92-9665-56300d44f9a3 nodeName:}" failed. No retries permitted until 2026-02-17 15:11:27.914503149 +0000 UTC m=+520.666846279 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert") pod "ingress-canary-6bhf8" (UID: "6d56f334-6c7b-4c92-9665-56300d44f9a3") : secret "canary-serving-cert" not found
Feb 17 15:11:11.982590 master-0 kubenswrapper[8018]: I0217 15:11:11.982544 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:11.982590 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:11.982590 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:11.982590 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:11.983093 master-0 kubenswrapper[8018]: I0217 15:11:11.983050 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:12.983415 master-0 kubenswrapper[8018]: I0217 15:11:12.983322 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:12.983415 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:12.983415 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:12.983415 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:12.984805 master-0 kubenswrapper[8018]: I0217 15:11:12.984700 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:13.983144 master-0 kubenswrapper[8018]: I0217 15:11:13.983048 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:13.983144 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:13.983144 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:13.983144 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:13.984205 master-0 kubenswrapper[8018]: I0217 15:11:13.983150 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:14.982062 master-0 kubenswrapper[8018]: I0217 15:11:14.981969 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:14.982062 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:14.982062 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:14.982062 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:14.982557 master-0 kubenswrapper[8018]: I0217 15:11:14.982063 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:15.982765 master-0 kubenswrapper[8018]: I0217 15:11:15.982673 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:15.982765 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:15.982765 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:15.982765 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:15.983434 master-0 kubenswrapper[8018]: I0217 15:11:15.982779 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:16.983508 master-0 kubenswrapper[8018]: I0217 15:11:16.983388 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:16.983508 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:16.983508 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:16.983508 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:16.983508 master-0 kubenswrapper[8018]: I0217 15:11:16.983503 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:17.445880 master-0 kubenswrapper[8018]: I0217 15:11:17.445801 8018 scope.go:117] "RemoveContainer" containerID="606702a575e1ee90e684dca084119dc95412eed58f966f94ef4e00d4013c8904"
Feb 17 15:11:17.446324 master-0 kubenswrapper[8018]: E0217 15:11:17.446131 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c"
Feb 17 15:11:17.983215 master-0 kubenswrapper[8018]: I0217 15:11:17.983038 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:17.983215 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:17.983215 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:17.983215 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:17.983215 master-0 kubenswrapper[8018]: I0217 15:11:17.983153 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:18.983111 master-0 kubenswrapper[8018]: I0217 15:11:18.983030 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:18.983111 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:18.983111 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:18.983111 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:18.983534 master-0 kubenswrapper[8018]: I0217 15:11:18.983116 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:19.984188 master-0 kubenswrapper[8018]: I0217 15:11:19.984090 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:19.984188 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:19.984188 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:19.984188 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:19.984188 master-0 kubenswrapper[8018]: I0217 15:11:19.984179 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:20.982929 master-0 kubenswrapper[8018]: I0217 15:11:20.982814 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:20.982929 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:20.982929 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:20.982929 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:20.983628 master-0 kubenswrapper[8018]: I0217 15:11:20.982943 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:21.982350 master-0 kubenswrapper[8018]: I0217 15:11:21.982258 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:21.982350 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:21.982350 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:21.982350 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:21.984254 master-0 kubenswrapper[8018]: I0217 15:11:21.983570 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:22.982656 master-0 kubenswrapper[8018]: I0217 15:11:22.982433 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:22.982656 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:22.982656 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:22.982656 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:22.982656 master-0 kubenswrapper[8018]: I0217 15:11:22.982541 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:23.983358 master-0 kubenswrapper[8018]: I0217 15:11:23.983240 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:23.983358 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:23.983358 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:23.983358 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:23.984393 master-0 kubenswrapper[8018]: I0217 15:11:23.984344 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:24.982714 master-0 kubenswrapper[8018]: I0217 15:11:24.982619 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:24.982714 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:24.982714 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:24.982714 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:24.982974 master-0 kubenswrapper[8018]: I0217 15:11:24.982761 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:25.983101 master-0 kubenswrapper[8018]: I0217 15:11:25.983026 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:25.983101 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:25.983101 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:25.983101 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:25.984235 master-0 kubenswrapper[8018]: I0217 15:11:25.983122 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:26.982925 master-0 kubenswrapper[8018]: I0217 15:11:26.982849 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:26.982925 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:26.982925 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:26.982925 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:26.983479 master-0 kubenswrapper[8018]: I0217 15:11:26.982957 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:27.959953 master-0 kubenswrapper[8018]: I0217 15:11:27.959880 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8"
Feb 17 15:11:27.960187 master-0 kubenswrapper[8018]: E0217 15:11:27.960138 8018 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 17 15:11:27.960305 master-0 kubenswrapper[8018]: E0217 15:11:27.960261 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert podName:6d56f334-6c7b-4c92-9665-56300d44f9a3 nodeName:}" failed. No retries permitted until 2026-02-17 15:11:59.960231549 +0000 UTC m=+552.712574639 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert") pod "ingress-canary-6bhf8" (UID: "6d56f334-6c7b-4c92-9665-56300d44f9a3") : secret "canary-serving-cert" not found
Feb 17 15:11:27.982838 master-0 kubenswrapper[8018]: I0217 15:11:27.982706 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:27.982838 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:27.982838 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:27.982838 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:27.982838 master-0 kubenswrapper[8018]: I0217 15:11:27.982817 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:28.982451 master-0 kubenswrapper[8018]: I0217 15:11:28.982316 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:28.982451 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:28.982451 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:28.982451 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:28.982451 master-0 kubenswrapper[8018]: I0217 15:11:28.982432 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:29.984222 master-0 kubenswrapper[8018]: I0217 15:11:29.984107 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:29.984222 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:29.984222 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:29.984222 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:29.984222 master-0 kubenswrapper[8018]: I0217 15:11:29.984200 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:30.440987 master-0 kubenswrapper[8018]: I0217 15:11:30.440879 8018 scope.go:117] "RemoveContainer" containerID="606702a575e1ee90e684dca084119dc95412eed58f966f94ef4e00d4013c8904"
Feb 17 15:11:30.984931 master-0 kubenswrapper[8018]: I0217 15:11:30.984839 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:30.984931 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:30.984931 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:30.984931 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:30.985507 master-0 kubenswrapper[8018]: I0217 15:11:30.984950 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:31.122055 master-0 kubenswrapper[8018]: I0217 15:11:31.121971 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/4.log"
Feb 17 15:11:31.123109 master-0 kubenswrapper[8018]: I0217 15:11:31.123042 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/3.log"
Feb 17 15:11:31.124562 master-0 kubenswrapper[8018]: I0217 15:11:31.124504 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerDied","Data":"36d3aa8f19faee69f9ce38df854debd1313449b154725531c3907dad73a2c4a5"}
Feb 17 15:11:31.124723 master-0 kubenswrapper[8018]: I0217 15:11:31.124501 8018 generic.go:334] "Generic (PLEG): container finished" podID="14723cb7-2d96-42b7-b559-70386c4c841c" containerID="36d3aa8f19faee69f9ce38df854debd1313449b154725531c3907dad73a2c4a5" exitCode=1
Feb 17 15:11:31.124723 master-0 kubenswrapper[8018]: I0217 15:11:31.124582 8018 scope.go:117] "RemoveContainer" containerID="606702a575e1ee90e684dca084119dc95412eed58f966f94ef4e00d4013c8904"
Feb 17 15:11:31.125240 master-0 kubenswrapper[8018]: I0217 15:11:31.125174 8018 scope.go:117] "RemoveContainer" containerID="36d3aa8f19faee69f9ce38df854debd1313449b154725531c3907dad73a2c4a5"
Feb 17 15:11:31.125566 master-0 kubenswrapper[8018]: E0217 15:11:31.125510 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c"
Feb 17 15:11:31.983251 master-0 kubenswrapper[8018]: I0217 15:11:31.983170 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:31.983251 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:31.983251 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:31.983251 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:31.983778 master-0 kubenswrapper[8018]: I0217 15:11:31.983258 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:32.134033 master-0 kubenswrapper[8018]: I0217 15:11:32.133964 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/4.log"
Feb 17 15:11:32.983965 master-0 kubenswrapper[8018]: I0217 15:11:32.983925 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:32.983965 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:32.983965 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:32.983965 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:32.984330 master-0 kubenswrapper[8018]: I0217 15:11:32.984296 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:33.982709 master-0 kubenswrapper[8018]: I0217 15:11:33.982649 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:33.982709 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:33.982709 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:33.982709 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:33.984049 master-0 kubenswrapper[8018]: I0217 15:11:33.983996 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:34.982402 master-0 kubenswrapper[8018]: I0217 15:11:34.982320 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:34.982402 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:34.982402 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:34.982402 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:34.983702 master-0 kubenswrapper[8018]: I0217 15:11:34.982413 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:35.982644 master-0 kubenswrapper[8018]: I0217 15:11:35.982582 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:35.982644 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:35.982644 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:35.982644 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:35.983511 master-0 kubenswrapper[8018]: I0217 15:11:35.982667 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:36.984020 master-0 kubenswrapper[8018]: I0217 15:11:36.983920 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:36.984020 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:36.984020 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:36.984020 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:36.985180 master-0 kubenswrapper[8018]: I0217 15:11:36.984042 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:37.984025 master-0 kubenswrapper[8018]: I0217 15:11:37.983932 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:37.984025 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:37.984025 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:37.984025 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:37.984697 master-0 kubenswrapper[8018]: I0217 15:11:37.984065 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:38.983090 master-0 kubenswrapper[8018]: I0217 15:11:38.983004 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:38.983090 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:38.983090 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:38.983090 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:38.983562 master-0 kubenswrapper[8018]: I0217 15:11:38.983104 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:39.417254 master-0 kubenswrapper[8018]: E0217 15:11:39.417116 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cloud-credential-operator-serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" podUID="c97d328c-95b6-4511-aa90-531ab42b9653"
Feb 17 15:11:39.982742 master-0 kubenswrapper[8018]: I0217 15:11:39.982653 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:39.982742 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:39.982742 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:39.982742 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:39.983182 master-0 kubenswrapper[8018]: I0217 15:11:39.982773 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:40.190480 master-0 kubenswrapper[8018]: I0217 15:11:40.190395 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc"
Feb 17 15:11:40.780782 master-0 kubenswrapper[8018]: E0217 15:11:40.780684 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[samples-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" podUID="6b7d1adb-b23b-4702-be7d-27e818e8fd63"
Feb 17 15:11:40.982813 master-0 kubenswrapper[8018]: I0217 15:11:40.982740 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:40.982813 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:40.982813 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:40.982813 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:40.983190 master-0 kubenswrapper[8018]: I0217 15:11:40.982821 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:41.196085 master-0 kubenswrapper[8018]: I0217 15:11:41.196000 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4"
Feb 17 15:11:41.440872 master-0 kubenswrapper[8018]: I0217 15:11:41.440780 8018 scope.go:117] "RemoveContainer" containerID="36d3aa8f19faee69f9ce38df854debd1313449b154725531c3907dad73a2c4a5"
Feb 17 15:11:41.441263 master-0 kubenswrapper[8018]: E0217 15:11:41.441196 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c"
Feb 17 15:11:41.982098 master-0 kubenswrapper[8018]: I0217 15:11:41.982019 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:41.982098 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:41.982098 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:41.982098 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:41.982098 master-0 kubenswrapper[8018]: I0217 15:11:41.982088 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:42.225588 master-0 kubenswrapper[8018]: E0217 15:11:42.225398 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" podUID="c8646e5c-c2ce-48e6-b757-58044769f479"
Feb 17 15:11:42.982374 master-0 kubenswrapper[8018]: I0217 15:11:42.982319 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:42.982374 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:42.982374 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:42.982374 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:42.982907 master-0 kubenswrapper[8018]: I0217 15:11:42.982403 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:43.209928 master-0 kubenswrapper[8018]: I0217 15:11:43.209840 8018 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:11:43.982263 master-0 kubenswrapper[8018]: I0217 15:11:43.982167 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:43.982263 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:43.982263 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:43.982263 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:43.983259 master-0 kubenswrapper[8018]: I0217 15:11:43.982303 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:44.415337 master-0 kubenswrapper[8018]: I0217 15:11:44.415203 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc"
Feb 17 15:11:44.415700 master-0 kubenswrapper[8018]: E0217 15:11:44.415489 8018 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found
Feb 17 15:11:44.415700 master-0 kubenswrapper[8018]: E0217 15:11:44.415626 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert podName:c97d328c-95b6-4511-aa90-531ab42b9653 nodeName:}" failed. No retries permitted until 2026-02-17 15:13:46.415585385 +0000 UTC m=+659.167928525 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-p8hbc" (UID: "c97d328c-95b6-4511-aa90-531ab42b9653") : secret "cloud-credential-operator-serving-cert" not found
Feb 17 15:11:44.982819 master-0 kubenswrapper[8018]: I0217 15:11:44.982732 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:44.982819 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:44.982819 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:44.982819 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:44.983807 master-0 kubenswrapper[8018]: I0217 15:11:44.982821 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:45.043382 master-0 kubenswrapper[8018]: E0217 15:11:45.043114 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-api-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" podUID="655e4000-0ad4-4349-8c31-e0c952e4be30"
Feb 17 15:11:45.223620 master-0 kubenswrapper[8018]: I0217 15:11:45.223547 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:11:45.732157 master-0 kubenswrapper[8018]: I0217 15:11:45.732089 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4"
Feb 17 15:11:45.732393 master-0 kubenswrapper[8018]: E0217 15:11:45.732272 8018 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Feb 17 15:11:45.732393 master-0 kubenswrapper[8018]: E0217 15:11:45.732337 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls podName:6b7d1adb-b23b-4702-be7d-27e818e8fd63 nodeName:}" failed. No retries permitted until 2026-02-17 15:13:47.732323104 +0000 UTC m=+660.484666144 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-hr9g4" (UID: "6b7d1adb-b23b-4702-be7d-27e818e8fd63") : secret "samples-operator-tls" not found
Feb 17 15:11:45.982865 master-0 kubenswrapper[8018]: I0217 15:11:45.982703 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:45.982865 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:45.982865 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:45.982865 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:45.982865 master-0 kubenswrapper[8018]: I0217 15:11:45.982794 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:46.983406 master-0 kubenswrapper[8018]: I0217 15:11:46.983182 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:46.983406 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:46.983406 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:46.983406 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:46.983406 master-0 kubenswrapper[8018]: I0217 15:11:46.983305 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:47.255532 master-0 kubenswrapper[8018]: I0217 15:11:47.254845 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:11:47.255532 master-0 kubenswrapper[8018]: E0217 15:11:47.255034 8018 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Feb 17 15:11:47.255532 master-0 kubenswrapper[8018]: E0217 15:11:47.255095 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert podName:c8646e5c-c2ce-48e6-b757-58044769f479 nodeName:}" failed. No retries permitted until 2026-02-17 15:13:49.25507679 +0000 UTC m=+662.007419850 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert") pod "cluster-autoscaler-operator-67fd9768b5-6dzpr" (UID: "c8646e5c-c2ce-48e6-b757-58044769f479") : secret "cluster-autoscaler-operator-cert" not found
Feb 17 15:11:47.983157 master-0 kubenswrapper[8018]: I0217 15:11:47.983113 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:47.983157 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:47.983157 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:47.983157 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:47.983552 master-0 kubenswrapper[8018]: I0217 15:11:47.983522 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:48.983429 master-0 kubenswrapper[8018]: I0217 15:11:48.983297 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:48.983429 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:48.983429 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:48.983429 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:48.983429 master-0 kubenswrapper[8018]: I0217 15:11:48.983394 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:49.983094 master-0 kubenswrapper[8018]: I0217 15:11:49.983015 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:49.983094 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:49.983094 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:49.983094 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:49.983671 master-0 kubenswrapper[8018]: I0217 15:11:49.983121 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:50.097871 master-0 kubenswrapper[8018]: I0217 15:11:50.097820 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:11:50.098548 master-0 kubenswrapper[8018]: E0217 15:11:50.098020 8018 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found
Feb 17 15:11:50.098633 master-0 kubenswrapper[8018]: E0217 15:11:50.098610 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls podName:655e4000-0ad4-4349-8c31-e0c952e4be30 nodeName:}" failed.
No retries permitted until 2026-02-17 15:13:52.098591773 +0000 UTC m=+664.850934823 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-g6fgz" (UID: "655e4000-0ad4-4349-8c31-e0c952e4be30") : secret "machine-api-operator-tls" not found
Feb 17 15:11:50.983922 master-0 kubenswrapper[8018]: I0217 15:11:50.983834 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:50.983922 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:50.983922 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:50.983922 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:50.984422 master-0 kubenswrapper[8018]: I0217 15:11:50.983943 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:51.982834 master-0 kubenswrapper[8018]: I0217 15:11:51.982745 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:51.982834 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:51.982834 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:51.982834 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:51.984009 master-0 kubenswrapper[8018]: I0217 15:11:51.982855 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:52.983611 master-0 kubenswrapper[8018]: I0217 15:11:52.983523 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:52.983611 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:52.983611 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:52.983611 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:52.984619 master-0 kubenswrapper[8018]: I0217 15:11:52.983626 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:53.441292 master-0 kubenswrapper[8018]: I0217 15:11:53.441157 8018 scope.go:117] "RemoveContainer" containerID="36d3aa8f19faee69f9ce38df854debd1313449b154725531c3907dad73a2c4a5"
Feb 17 15:11:53.441554 master-0 kubenswrapper[8018]: E0217 15:11:53.441433 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c"
Feb 17 15:11:53.982933 master-0 kubenswrapper[8018]: I0217 15:11:53.982822 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:53.982933 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:53.982933 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:53.982933 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:53.983533 master-0 kubenswrapper[8018]: I0217 15:11:53.982928 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:54.982568 master-0 kubenswrapper[8018]: I0217 15:11:54.982500 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:54.982568 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:54.982568 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:54.982568 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:54.983109 master-0 kubenswrapper[8018]: I0217 15:11:54.982570 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:55.983997 master-0 kubenswrapper[8018]: I0217 15:11:55.983860 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:55.983997 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:55.983997 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:55.983997 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:55.984779 master-0 kubenswrapper[8018]: I0217 15:11:55.984013 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:56.985283 master-0 kubenswrapper[8018]: I0217 15:11:56.985193 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:56.985283 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:56.985283 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:56.985283 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:56.985283 master-0 kubenswrapper[8018]: I0217 15:11:56.985257 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:57.982273 master-0 kubenswrapper[8018]: I0217 15:11:57.982180 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:57.982273 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:57.982273 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:57.982273 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:57.982895 master-0 kubenswrapper[8018]: I0217 15:11:57.982309 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:58.908174 master-0 kubenswrapper[8018]: E0217 15:11:58.908079 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" podUID="784b804f-6bcf-4cbd-a19e-9b1fa244354e"
Feb 17 15:11:58.983034 master-0 kubenswrapper[8018]: I0217 15:11:58.982925 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:58.983034 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:58.983034 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:58.983034 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:58.983388 master-0 kubenswrapper[8018]: I0217 15:11:58.983045 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:11:59.475939 master-0 kubenswrapper[8018]: E0217 15:11:59.475874 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-approver-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" podUID="76d3da23-3347-4a5c-b328-d92671897ecc"
Feb 17 15:11:59.625794 master-0 kubenswrapper[8018]: I0217 15:11:59.625724 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j"
Feb 17 15:11:59.982726 master-0 kubenswrapper[8018]: I0217 15:11:59.982658 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:11:59.982726 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:11:59.982726 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:11:59.982726 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:11:59.983287 master-0 kubenswrapper[8018]: I0217 15:11:59.982764 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:12:00.044608 master-0 kubenswrapper[8018]: I0217 15:12:00.044529 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8"
Feb 17 15:12:00.044816 master-0 kubenswrapper[8018]: E0217 15:12:00.044762 8018 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 17 15:12:00.044892 master-0 kubenswrapper[8018]: E0217 15:12:00.044865 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert podName:6d56f334-6c7b-4c92-9665-56300d44f9a3 nodeName:}" failed. No retries permitted until 2026-02-17 15:13:04.044845539 +0000 UTC m=+616.797188589 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert") pod "ingress-canary-6bhf8" (UID: "6d56f334-6c7b-4c92-9665-56300d44f9a3") : secret "canary-serving-cert" not found
Feb 17 15:12:00.982348 master-0 kubenswrapper[8018]: I0217 15:12:00.982267 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:12:00.982348 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:12:00.982348 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:12:00.982348 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:12:00.982655 master-0 kubenswrapper[8018]: I0217 15:12:00.982367 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:12:01.981734 master-0 kubenswrapper[8018]: I0217 15:12:01.981676 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:12:01.981734 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:12:01.981734 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:12:01.981734 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:12:01.981734 master-0 kubenswrapper[8018]: I0217 15:12:01.981734 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:12:02.982665 master-0 kubenswrapper[8018]: I0217 15:12:02.982599 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:12:02.982665 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:12:02.982665 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:12:02.982665 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:12:02.983448 master-0 kubenswrapper[8018]: I0217 15:12:02.982677 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:12:03.901689 master-0 kubenswrapper[8018]: I0217 15:12:03.901576 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j"
Feb 17 15:12:03.902024 master-0 kubenswrapper[8018]: E0217 15:12:03.901807 8018 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Feb 17 15:12:03.902024 master-0 kubenswrapper[8018]: E0217 15:12:03.901930 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls podName:784b804f-6bcf-4cbd-a19e-9b1fa244354e nodeName:}" failed. No retries permitted until 2026-02-17 15:14:05.901899367 +0000 UTC m=+678.654242487 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-nzz2j" (UID: "784b804f-6bcf-4cbd-a19e-9b1fa244354e") : secret "prometheus-operator-tls" not found
Feb 17 15:12:03.983191 master-0 kubenswrapper[8018]: I0217 15:12:03.983109 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:12:03.983191 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:12:03.983191 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:12:03.983191 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:12:03.983191 master-0 kubenswrapper[8018]: I0217 15:12:03.983197 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:12:03.984839 master-0 kubenswrapper[8018]: I0217 15:12:03.983257 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-g8w2f"
Feb 17 15:12:03.984839 master-0 kubenswrapper[8018]: I0217 15:12:03.983947 8018 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591"} pod="openshift-ingress/router-default-864ddd5f56-g8w2f" containerMessage="Container router failed startup probe, will be restarted"
Feb 17 15:12:03.984839 master-0 kubenswrapper[8018]: I0217 15:12:03.983985 8018 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" containerID="cri-o://fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591" gracePeriod=3600 Feb 17 15:12:04.409430 master-0 kubenswrapper[8018]: I0217 15:12:04.409332 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:12:04.409780 master-0 kubenswrapper[8018]: E0217 15:12:04.409601 8018 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Feb 17 15:12:04.409780 master-0 kubenswrapper[8018]: E0217 15:12:04.409716 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls podName:76d3da23-3347-4a5c-b328-d92671897ecc nodeName:}" failed. No retries permitted until 2026-02-17 15:14:06.409694399 +0000 UTC m=+679.162037459 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls") pod "machine-approver-8569dd85ff-f9g8s" (UID: "76d3da23-3347-4a5c-b328-d92671897ecc") : secret "machine-approver-tls" not found Feb 17 15:12:04.440571 master-0 kubenswrapper[8018]: I0217 15:12:04.440489 8018 scope.go:117] "RemoveContainer" containerID="36d3aa8f19faee69f9ce38df854debd1313449b154725531c3907dad73a2c4a5" Feb 17 15:12:04.440890 master-0 kubenswrapper[8018]: E0217 15:12:04.440830 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:12:11.439807 master-0 kubenswrapper[8018]: I0217 15:12:11.439711 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:12:18.440103 master-0 kubenswrapper[8018]: I0217 15:12:18.440012 8018 scope.go:117] "RemoveContainer" containerID="36d3aa8f19faee69f9ce38df854debd1313449b154725531c3907dad73a2c4a5" Feb 17 15:12:18.441073 master-0 kubenswrapper[8018]: E0217 15:12:18.440336 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:12:31.448347 master-0 kubenswrapper[8018]: I0217 15:12:31.448223 8018 scope.go:117] "RemoveContainer" containerID="36d3aa8f19faee69f9ce38df854debd1313449b154725531c3907dad73a2c4a5" Feb 17 15:12:31.449606 master-0 kubenswrapper[8018]: E0217 15:12:31.448581 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:12:42.440235 master-0 kubenswrapper[8018]: I0217 15:12:42.440172 8018 scope.go:117] "RemoveContainer" containerID="36d3aa8f19faee69f9ce38df854debd1313449b154725531c3907dad73a2c4a5" Feb 17 15:12:42.440791 master-0 kubenswrapper[8018]: E0217 15:12:42.440385 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:12:51.013393 master-0 kubenswrapper[8018]: I0217 15:12:51.013304 8018 generic.go:334] "Generic (PLEG): container finished" podID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerID="fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591" exitCode=0 Feb 17 15:12:51.014342 master-0 kubenswrapper[8018]: I0217 15:12:51.013394 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" event={"ID":"a2d6e329-7ad8-4fc2-accc-66827f11743d","Type":"ContainerDied","Data":"fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591"} Feb 17 15:12:51.014342 master-0 kubenswrapper[8018]: I0217 15:12:51.013495 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" event={"ID":"a2d6e329-7ad8-4fc2-accc-66827f11743d","Type":"ContainerStarted","Data":"860736c555e36eb357d7747028619f7c30730d9978a45e3a5c0a43cdd4bd9ba8"} Feb 17 15:12:51.980192 master-0 kubenswrapper[8018]: I0217 15:12:51.980113 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:12:51.983150 master-0 kubenswrapper[8018]: I0217 15:12:51.983107 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:12:51.983150 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:12:51.983150 
master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:12:51.983150 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:12:51.983422 master-0 kubenswrapper[8018]: I0217 15:12:51.983158 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:12:52.979811 master-0 kubenswrapper[8018]: I0217 15:12:52.979644 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:12:52.983630 master-0 kubenswrapper[8018]: I0217 15:12:52.983546 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:12:52.983630 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:12:52.983630 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:12:52.983630 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:12:52.983937 master-0 kubenswrapper[8018]: I0217 15:12:52.983662 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:12:53.982429 master-0 kubenswrapper[8018]: I0217 15:12:53.982330 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:12:53.982429 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:12:53.982429 master-0 
kubenswrapper[8018]: [+]process-running ok Feb 17 15:12:53.982429 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:12:53.982429 master-0 kubenswrapper[8018]: I0217 15:12:53.982422 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:12:54.982595 master-0 kubenswrapper[8018]: I0217 15:12:54.982490 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:12:54.982595 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:12:54.982595 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:12:54.982595 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:12:54.983102 master-0 kubenswrapper[8018]: I0217 15:12:54.982630 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:12:55.441500 master-0 kubenswrapper[8018]: I0217 15:12:55.441398 8018 scope.go:117] "RemoveContainer" containerID="36d3aa8f19faee69f9ce38df854debd1313449b154725531c3907dad73a2c4a5" Feb 17 15:12:55.982361 master-0 kubenswrapper[8018]: I0217 15:12:55.982165 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:12:55.982361 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:12:55.982361 master-0 kubenswrapper[8018]: 
[+]process-running ok Feb 17 15:12:55.982361 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:12:55.982361 master-0 kubenswrapper[8018]: I0217 15:12:55.982257 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:12:56.049113 master-0 kubenswrapper[8018]: I0217 15:12:56.049024 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/5.log" Feb 17 15:12:56.049845 master-0 kubenswrapper[8018]: I0217 15:12:56.049793 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/4.log" Feb 17 15:12:56.050710 master-0 kubenswrapper[8018]: I0217 15:12:56.050658 8018 generic.go:334] "Generic (PLEG): container finished" podID="14723cb7-2d96-42b7-b559-70386c4c841c" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade" exitCode=1 Feb 17 15:12:56.050833 master-0 kubenswrapper[8018]: I0217 15:12:56.050716 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerDied","Data":"d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade"} Feb 17 15:12:56.050833 master-0 kubenswrapper[8018]: I0217 15:12:56.050776 8018 scope.go:117] "RemoveContainer" containerID="36d3aa8f19faee69f9ce38df854debd1313449b154725531c3907dad73a2c4a5" Feb 17 15:12:56.051526 master-0 kubenswrapper[8018]: I0217 15:12:56.051431 8018 scope.go:117] "RemoveContainer" 
containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade" Feb 17 15:12:56.051827 master-0 kubenswrapper[8018]: E0217 15:12:56.051775 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:12:56.983207 master-0 kubenswrapper[8018]: I0217 15:12:56.983072 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:12:56.983207 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:12:56.983207 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:12:56.983207 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:12:56.983207 master-0 kubenswrapper[8018]: I0217 15:12:56.983153 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:12:57.065213 master-0 kubenswrapper[8018]: I0217 15:12:57.065125 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/5.log" Feb 17 15:12:57.983721 master-0 kubenswrapper[8018]: I0217 15:12:57.983650 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:12:57.983721 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:12:57.983721 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:12:57.983721 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:12:57.984414 master-0 kubenswrapper[8018]: I0217 15:12:57.983729 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:12:58.983444 master-0 kubenswrapper[8018]: I0217 15:12:58.983324 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:12:58.983444 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:12:58.983444 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:12:58.983444 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:12:58.983444 master-0 kubenswrapper[8018]: I0217 15:12:58.983443 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:12:59.102653 master-0 kubenswrapper[8018]: E0217 15:12:59.102555 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-6bhf8" podUID="6d56f334-6c7b-4c92-9665-56300d44f9a3" Feb 17 15:12:59.982700 
master-0 kubenswrapper[8018]: I0217 15:12:59.982608 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:12:59.982700 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:12:59.982700 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:12:59.982700 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:12:59.983661 master-0 kubenswrapper[8018]: I0217 15:12:59.982702 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:00.097849 master-0 kubenswrapper[8018]: I0217 15:13:00.097796 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:13:00.983583 master-0 kubenswrapper[8018]: I0217 15:13:00.983503 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:00.983583 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:00.983583 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:00.983583 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:00.984010 master-0 kubenswrapper[8018]: I0217 15:13:00.983597 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:01.983508 master-0 
kubenswrapper[8018]: I0217 15:13:01.983381 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:01.983508 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:01.983508 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:01.983508 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:01.983508 master-0 kubenswrapper[8018]: I0217 15:13:01.983499 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:02.982696 master-0 kubenswrapper[8018]: I0217 15:13:02.982627 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:02.982696 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:02.982696 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:02.982696 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:02.982982 master-0 kubenswrapper[8018]: I0217 15:13:02.982724 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:03.983897 master-0 kubenswrapper[8018]: I0217 15:13:03.983825 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:03.983897 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:03.983897 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:03.983897 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:03.984985 master-0 kubenswrapper[8018]: I0217 15:13:03.983920 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:04.088396 master-0 kubenswrapper[8018]: I0217 15:13:04.088289 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:13:04.088824 master-0 kubenswrapper[8018]: E0217 15:13:04.088773 8018 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 17 15:13:04.088919 master-0 kubenswrapper[8018]: E0217 15:13:04.088869 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert podName:6d56f334-6c7b-4c92-9665-56300d44f9a3 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:06.08884616 +0000 UTC m=+738.841189250 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert") pod "ingress-canary-6bhf8" (UID: "6d56f334-6c7b-4c92-9665-56300d44f9a3") : secret "canary-serving-cert" not found Feb 17 15:13:04.982393 master-0 kubenswrapper[8018]: I0217 15:13:04.982328 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:04.982393 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:04.982393 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:04.982393 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:04.982793 master-0 kubenswrapper[8018]: I0217 15:13:04.982407 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:05.982423 master-0 kubenswrapper[8018]: I0217 15:13:05.982312 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:05.982423 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:05.982423 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:05.982423 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:05.983815 master-0 kubenswrapper[8018]: I0217 15:13:05.982428 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500"
Feb 17 15:13:06.982315 master-0 kubenswrapper[8018]: I0217 15:13:06.982205 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:06.982315 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:06.982315 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:06.982315 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:06.982315 master-0 kubenswrapper[8018]: I0217 15:13:06.982296 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:07.445297 master-0 kubenswrapper[8018]: I0217 15:13:07.445196 8018 scope.go:117] "RemoveContainer" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade"
Feb 17 15:13:07.445786 master-0 kubenswrapper[8018]: E0217 15:13:07.445720 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c"
Feb 17 15:13:07.983380 master-0 kubenswrapper[8018]: I0217 15:13:07.983277 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:07.983380 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:07.983380 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:07.983380 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:07.983380 master-0 kubenswrapper[8018]: I0217 15:13:07.983388 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:08.983768 master-0 kubenswrapper[8018]: I0217 15:13:08.983682 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:08.983768 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:08.983768 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:08.983768 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:08.984714 master-0 kubenswrapper[8018]: I0217 15:13:08.983794 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:09.167564 master-0 kubenswrapper[8018]: I0217 15:13:09.167502 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/2.log"
Feb 17 15:13:09.168590 master-0 kubenswrapper[8018]: I0217 15:13:09.168540 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/1.log"
Feb 17 15:13:09.169029 master-0 kubenswrapper[8018]: I0217 15:13:09.168992 8018 generic.go:334] "Generic (PLEG): container finished" podID="22a30079-d7fc-49cf-882e-1c5022cb5bf6" containerID="bbb9d291b17c271b0bfc02764b8ad63a5a4d80141787014fe49630e60a725084" exitCode=1
Feb 17 15:13:09.169086 master-0 kubenswrapper[8018]: I0217 15:13:09.169048 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" event={"ID":"22a30079-d7fc-49cf-882e-1c5022cb5bf6","Type":"ContainerDied","Data":"bbb9d291b17c271b0bfc02764b8ad63a5a4d80141787014fe49630e60a725084"}
Feb 17 15:13:09.169167 master-0 kubenswrapper[8018]: I0217 15:13:09.169138 8018 scope.go:117] "RemoveContainer" containerID="4f4889e4fc034bdf89049f32d3bbe8147db247c0bdabc918e6164722403d46c8"
Feb 17 15:13:09.169957 master-0 kubenswrapper[8018]: I0217 15:13:09.169926 8018 scope.go:117] "RemoveContainer" containerID="bbb9d291b17c271b0bfc02764b8ad63a5a4d80141787014fe49630e60a725084"
Feb 17 15:13:09.170259 master-0 kubenswrapper[8018]: E0217 15:13:09.170225 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nclxg_openshift-ingress-operator(22a30079-d7fc-49cf-882e-1c5022cb5bf6)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" podUID="22a30079-d7fc-49cf-882e-1c5022cb5bf6"
Feb 17 15:13:09.982316 master-0 kubenswrapper[8018]: I0217 15:13:09.982240 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:09.982316 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:09.982316 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:09.982316 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:09.982316 master-0 kubenswrapper[8018]: I0217 15:13:09.982301 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:10.177393 master-0 kubenswrapper[8018]: I0217 15:13:10.177337 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/2.log"
Feb 17 15:13:10.982666 master-0 kubenswrapper[8018]: I0217 15:13:10.982570 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:10.982666 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:10.982666 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:10.982666 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:10.983106 master-0 kubenswrapper[8018]: I0217 15:13:10.982678 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:11.982733 master-0 kubenswrapper[8018]: I0217 15:13:11.982648 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:11.982733 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:11.982733 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:11.982733 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:11.984006 master-0 kubenswrapper[8018]: I0217 15:13:11.982750 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:12.983890 master-0 kubenswrapper[8018]: I0217 15:13:12.983794 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:12.983890 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:12.983890 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:12.983890 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:12.983890 master-0 kubenswrapper[8018]: I0217 15:13:12.983876 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:13.983371 master-0 kubenswrapper[8018]: I0217 15:13:13.983316 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:13.983371 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:13.983371 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:13.983371 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:13.983947 master-0 kubenswrapper[8018]: I0217 15:13:13.983904 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:14.982514 master-0 kubenswrapper[8018]: I0217 15:13:14.982399 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:14.982514 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:14.982514 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:14.982514 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:14.982987 master-0 kubenswrapper[8018]: I0217 15:13:14.982578 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:15.983490 master-0 kubenswrapper[8018]: I0217 15:13:15.983332 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:15.983490 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:15.983490 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:15.983490 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:15.983490 master-0 kubenswrapper[8018]: I0217 15:13:15.983481 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:16.982890 master-0 kubenswrapper[8018]: I0217 15:13:16.982804 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:16.982890 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:16.982890 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:16.982890 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:16.983529 master-0 kubenswrapper[8018]: I0217 15:13:16.982903 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:17.982767 master-0 kubenswrapper[8018]: I0217 15:13:17.982678 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:17.982767 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:17.982767 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:17.982767 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:17.982767 master-0 kubenswrapper[8018]: I0217 15:13:17.982763 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:18.440381 master-0 kubenswrapper[8018]: I0217 15:13:18.440267 8018 scope.go:117] "RemoveContainer" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade"
Feb 17 15:13:18.440381 master-0 kubenswrapper[8018]: E0217 15:13:18.440497 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c"
Feb 17 15:13:18.982804 master-0 kubenswrapper[8018]: I0217 15:13:18.982675 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:18.982804 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:18.982804 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:18.982804 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:18.983807 master-0 kubenswrapper[8018]: I0217 15:13:18.982809 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:19.982327 master-0 kubenswrapper[8018]: I0217 15:13:19.982234 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:19.982327 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:19.982327 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:19.982327 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:19.982648 master-0 kubenswrapper[8018]: I0217 15:13:19.982362 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:20.440989 master-0 kubenswrapper[8018]: I0217 15:13:20.440538 8018 scope.go:117] "RemoveContainer" containerID="bbb9d291b17c271b0bfc02764b8ad63a5a4d80141787014fe49630e60a725084"
Feb 17 15:13:20.440989 master-0 kubenswrapper[8018]: E0217 15:13:20.440941 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nclxg_openshift-ingress-operator(22a30079-d7fc-49cf-882e-1c5022cb5bf6)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" podUID="22a30079-d7fc-49cf-882e-1c5022cb5bf6"
Feb 17 15:13:20.983478 master-0 kubenswrapper[8018]: I0217 15:13:20.983332 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:20.983478 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:20.983478 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:20.983478 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:20.983478 master-0 kubenswrapper[8018]: I0217 15:13:20.983438 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:21.983238 master-0 kubenswrapper[8018]: I0217 15:13:21.983136 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:21.983238 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:21.983238 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:21.983238 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:21.984072 master-0 kubenswrapper[8018]: I0217 15:13:21.983279 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:22.983301 master-0 kubenswrapper[8018]: I0217 15:13:22.983171 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:22.983301 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:22.983301 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:22.983301 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:22.984408 master-0 kubenswrapper[8018]: I0217 15:13:22.983297 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:23.982823 master-0 kubenswrapper[8018]: I0217 15:13:23.982721 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:23.982823 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:23.982823 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:23.982823 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:23.983306 master-0 kubenswrapper[8018]: I0217 15:13:23.982821 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:24.983349 master-0 kubenswrapper[8018]: I0217 15:13:24.983272 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:24.983349 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:24.983349 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:24.983349 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:24.984099 master-0 kubenswrapper[8018]: I0217 15:13:24.983346 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:25.982336 master-0 kubenswrapper[8018]: I0217 15:13:25.982250 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:25.982336 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:25.982336 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:25.982336 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:25.982788 master-0 kubenswrapper[8018]: I0217 15:13:25.982339 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:26.982259 master-0 kubenswrapper[8018]: I0217 15:13:26.982146 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:26.982259 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:26.982259 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:26.982259 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:26.982259 master-0 kubenswrapper[8018]: I0217 15:13:26.982241 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:27.981641 master-0 kubenswrapper[8018]: I0217 15:13:27.981547 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:27.981641 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:27.981641 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:27.981641 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:27.982045 master-0 kubenswrapper[8018]: I0217 15:13:27.981655 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:28.983397 master-0 kubenswrapper[8018]: I0217 15:13:28.983327 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:28.983397 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:28.983397 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:28.983397 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:28.984705 master-0 kubenswrapper[8018]: I0217 15:13:28.984636 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:29.982912 master-0 kubenswrapper[8018]: I0217 15:13:29.982854 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:29.982912 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:29.982912 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:29.982912 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:29.984195 master-0 kubenswrapper[8018]: I0217 15:13:29.984151 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:30.983746 master-0 kubenswrapper[8018]: I0217 15:13:30.983658 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:30.983746 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:30.983746 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:30.983746 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:30.984382 master-0 kubenswrapper[8018]: I0217 15:13:30.983744 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:31.440510 master-0 kubenswrapper[8018]: I0217 15:13:31.440427 8018 scope.go:117] "RemoveContainer" containerID="bbb9d291b17c271b0bfc02764b8ad63a5a4d80141787014fe49630e60a725084"
Feb 17 15:13:31.983132 master-0 kubenswrapper[8018]: I0217 15:13:31.983037 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:31.983132 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:31.983132 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:31.983132 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:31.983132 master-0 kubenswrapper[8018]: I0217 15:13:31.983125 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:32.345306 master-0 kubenswrapper[8018]: I0217 15:13:32.345235 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/2.log"
Feb 17 15:13:32.346385 master-0 kubenswrapper[8018]: I0217 15:13:32.346337 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" event={"ID":"22a30079-d7fc-49cf-882e-1c5022cb5bf6","Type":"ContainerStarted","Data":"e6e0c56b68d88e13c98f68fd19514701fbb95e0c18c904b865481a0f5ad00f23"}
Feb 17 15:13:32.443914 master-0 kubenswrapper[8018]: I0217 15:13:32.443855 8018 scope.go:117] "RemoveContainer" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade"
Feb 17 15:13:32.444154 master-0 kubenswrapper[8018]: E0217 15:13:32.444047 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c"
Feb 17 15:13:32.982712 master-0 kubenswrapper[8018]: I0217 15:13:32.982619 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:32.982712 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:32.982712 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:32.982712 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:32.983166 master-0 kubenswrapper[8018]: I0217 15:13:32.982726 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:33.982271 master-0 kubenswrapper[8018]: I0217 15:13:33.982194 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:33.982271 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:33.982271 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:33.982271 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:33.982271 master-0 kubenswrapper[8018]: I0217 15:13:33.982262 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:34.793481 master-0 kubenswrapper[8018]: I0217 15:13:34.793394 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kl9jm"]
Feb 17 15:13:34.794286 master-0 kubenswrapper[8018]: I0217 15:13:34.794229 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:34.796224 master-0 kubenswrapper[8018]: I0217 15:13:34.796180 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Feb 17 15:13:34.797378 master-0 kubenswrapper[8018]: I0217 15:13:34.797335 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-tsxrr"
Feb 17 15:13:34.893202 master-0 kubenswrapper[8018]: I0217 15:13:34.893137 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9501c813-f993-4916-94fc-878138ac027b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-kl9jm\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:34.893438 master-0 kubenswrapper[8018]: I0217 15:13:34.893379 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4kds\" (UniqueName: \"kubernetes.io/projected/9501c813-f993-4916-94fc-878138ac027b-kube-api-access-m4kds\") pod \"cni-sysctl-allowlist-ds-kl9jm\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:34.893646 master-0 kubenswrapper[8018]: I0217 15:13:34.893618 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9501c813-f993-4916-94fc-878138ac027b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-kl9jm\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:34.893769 master-0 kubenswrapper[8018]: I0217 15:13:34.893740 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9501c813-f993-4916-94fc-878138ac027b-ready\") pod \"cni-sysctl-allowlist-ds-kl9jm\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:34.983113 master-0 kubenswrapper[8018]: I0217 15:13:34.982982 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:34.983113 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:34.983113 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:34.983113 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:34.983113 master-0 kubenswrapper[8018]: I0217 15:13:34.983097 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:34.994876 master-0 kubenswrapper[8018]: I0217 15:13:34.994783 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4kds\" (UniqueName: \"kubernetes.io/projected/9501c813-f993-4916-94fc-878138ac027b-kube-api-access-m4kds\") pod \"cni-sysctl-allowlist-ds-kl9jm\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:34.995040 master-0 kubenswrapper[8018]: I0217 15:13:34.994989 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9501c813-f993-4916-94fc-878138ac027b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-kl9jm\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:34.995290 master-0 kubenswrapper[8018]: I0217 15:13:34.995225 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9501c813-f993-4916-94fc-878138ac027b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-kl9jm\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:34.995290 master-0 kubenswrapper[8018]: I0217 15:13:34.995259 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9501c813-f993-4916-94fc-878138ac027b-ready\") pod \"cni-sysctl-allowlist-ds-kl9jm\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:34.995446 master-0 kubenswrapper[8018]: I0217 15:13:34.995431 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9501c813-f993-4916-94fc-878138ac027b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-kl9jm\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:34.996022 master-0 kubenswrapper[8018]: I0217 15:13:34.995965 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9501c813-f993-4916-94fc-878138ac027b-ready\") pod \"cni-sysctl-allowlist-ds-kl9jm\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:34.996477 master-0 kubenswrapper[8018]: I0217 15:13:34.996372 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9501c813-f993-4916-94fc-878138ac027b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-kl9jm\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:35.020610 master-0 kubenswrapper[8018]: I0217 15:13:35.020500 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4kds\" (UniqueName: \"kubernetes.io/projected/9501c813-f993-4916-94fc-878138ac027b-kube-api-access-m4kds\") pod \"cni-sysctl-allowlist-ds-kl9jm\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:35.110606 master-0 kubenswrapper[8018]: I0217 15:13:35.110375 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:35.375209 master-0 kubenswrapper[8018]: I0217 15:13:35.375007 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm" event={"ID":"9501c813-f993-4916-94fc-878138ac027b","Type":"ContainerStarted","Data":"172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c"}
Feb 17 15:13:35.375209 master-0 kubenswrapper[8018]: I0217 15:13:35.375142 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm" event={"ID":"9501c813-f993-4916-94fc-878138ac027b","Type":"ContainerStarted","Data":"55124cc48e2f04c2d0d41148bb59c0218a57f0f39885408f846b0eadf7dec65c"}
Feb 17 15:13:35.375767 master-0 kubenswrapper[8018]: I0217 15:13:35.375358 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:35.399639 master-0 kubenswrapper[8018]: I0217 15:13:35.399528 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm" podStartSLOduration=1.39941374 podStartE2EDuration="1.39941374s" podCreationTimestamp="2026-02-17 15:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:13:35.392943304 +0000 UTC m=+648.145286364" watchObservedRunningTime="2026-02-17 15:13:35.39941374 +0000 UTC m=+648.151756830"
Feb 17 15:13:35.983303 master-0 kubenswrapper[8018]: I0217 15:13:35.983193 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:35.983303 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:35.983303 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:35.983303 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:35.984243 master-0 kubenswrapper[8018]: I0217 15:13:35.983300 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:13:36.413754 master-0 kubenswrapper[8018]: I0217 15:13:36.413690 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:13:36.796857 master-0 kubenswrapper[8018]: I0217 15:13:36.796696 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kl9jm"]
Feb 17 15:13:36.982737 master-0 kubenswrapper[8018]: I0217 15:13:36.982619 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:13:36.982737 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:13:36.982737 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:13:36.982737 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:13:36.982737 master-0 kubenswrapper[8018]: I0217 15:13:36.982717 8018 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:37.983644 master-0 kubenswrapper[8018]: I0217 15:13:37.983517 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:37.983644 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:37.983644 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:37.983644 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:37.984560 master-0 kubenswrapper[8018]: I0217 15:13:37.983646 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:38.397213 master-0 kubenswrapper[8018]: I0217 15:13:38.397141 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm" podUID="9501c813-f993-4916-94fc-878138ac027b" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c" gracePeriod=30 Feb 17 15:13:38.984315 master-0 kubenswrapper[8018]: I0217 15:13:38.984212 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:38.984315 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:38.984315 master-0 kubenswrapper[8018]: [+]process-running ok 
Feb 17 15:13:38.984315 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:38.984315 master-0 kubenswrapper[8018]: I0217 15:13:38.984309 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:39.982912 master-0 kubenswrapper[8018]: I0217 15:13:39.982829 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:39.982912 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:39.982912 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:39.982912 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:39.982912 master-0 kubenswrapper[8018]: I0217 15:13:39.982906 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:40.981934 master-0 kubenswrapper[8018]: I0217 15:13:40.981808 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:40.981934 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:40.981934 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:40.981934 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:40.983244 master-0 kubenswrapper[8018]: I0217 15:13:40.981950 8018 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:41.983581 master-0 kubenswrapper[8018]: I0217 15:13:41.983479 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:41.983581 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:41.983581 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:41.983581 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:41.983581 master-0 kubenswrapper[8018]: I0217 15:13:41.983579 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:42.983855 master-0 kubenswrapper[8018]: I0217 15:13:42.983773 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:42.983855 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:42.983855 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:42.983855 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:42.983855 master-0 kubenswrapper[8018]: I0217 15:13:42.983863 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:43.192348 
master-0 kubenswrapper[8018]: E0217 15:13:43.192228 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cloud-credential-operator-serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" podUID="c97d328c-95b6-4511-aa90-531ab42b9653" Feb 17 15:13:43.432345 master-0 kubenswrapper[8018]: I0217 15:13:43.432304 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:13:43.985689 master-0 kubenswrapper[8018]: I0217 15:13:43.985101 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:43.985689 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:43.985689 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:43.985689 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:43.985689 master-0 kubenswrapper[8018]: I0217 15:13:43.985202 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:44.198226 master-0 kubenswrapper[8018]: E0217 15:13:44.198106 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[samples-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" podUID="6b7d1adb-b23b-4702-be7d-27e818e8fd63" Feb 17 15:13:44.372528 master-0 kubenswrapper[8018]: I0217 15:13:44.372439 8018 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-rzbff"] Feb 17 15:13:44.373685 master-0 kubenswrapper[8018]: I0217 15:13:44.373656 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" Feb 17 15:13:44.375123 master-0 kubenswrapper[8018]: I0217 15:13:44.375090 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-4gx6p" Feb 17 15:13:44.384291 master-0 kubenswrapper[8018]: I0217 15:13:44.384253 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-rzbff"] Feb 17 15:13:44.438512 master-0 kubenswrapper[8018]: I0217 15:13:44.438160 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:13:44.445347 master-0 kubenswrapper[8018]: I0217 15:13:44.443590 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-rzbff\" (UID: \"75486ba2-6fde-456f-8846-2af67e58d585\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" Feb 17 15:13:44.445347 master-0 kubenswrapper[8018]: I0217 15:13:44.443636 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjb95\" (UniqueName: \"kubernetes.io/projected/75486ba2-6fde-456f-8846-2af67e58d585-kube-api-access-wjb95\") pod \"multus-admission-controller-6d678b8d67-rzbff\" (UID: \"75486ba2-6fde-456f-8846-2af67e58d585\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" Feb 17 15:13:44.545487 master-0 kubenswrapper[8018]: I0217 15:13:44.545353 8018 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-rzbff\" (UID: \"75486ba2-6fde-456f-8846-2af67e58d585\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" Feb 17 15:13:44.546092 master-0 kubenswrapper[8018]: I0217 15:13:44.546053 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjb95\" (UniqueName: \"kubernetes.io/projected/75486ba2-6fde-456f-8846-2af67e58d585-kube-api-access-wjb95\") pod \"multus-admission-controller-6d678b8d67-rzbff\" (UID: \"75486ba2-6fde-456f-8846-2af67e58d585\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" Feb 17 15:13:44.549139 master-0 kubenswrapper[8018]: I0217 15:13:44.549103 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-rzbff\" (UID: \"75486ba2-6fde-456f-8846-2af67e58d585\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" Feb 17 15:13:44.562974 master-0 kubenswrapper[8018]: I0217 15:13:44.562914 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjb95\" (UniqueName: \"kubernetes.io/projected/75486ba2-6fde-456f-8846-2af67e58d585-kube-api-access-wjb95\") pod \"multus-admission-controller-6d678b8d67-rzbff\" (UID: \"75486ba2-6fde-456f-8846-2af67e58d585\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" Feb 17 15:13:44.695166 master-0 kubenswrapper[8018]: I0217 15:13:44.694993 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" Feb 17 15:13:44.983256 master-0 kubenswrapper[8018]: I0217 15:13:44.983113 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:44.983256 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:44.983256 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:44.983256 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:44.983256 master-0 kubenswrapper[8018]: I0217 15:13:44.983215 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:45.114742 master-0 kubenswrapper[8018]: E0217 15:13:45.114648 8018 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 15:13:45.117014 master-0 kubenswrapper[8018]: E0217 15:13:45.116917 8018 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 15:13:45.119107 master-0 kubenswrapper[8018]: E0217 15:13:45.119040 8018 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 15:13:45.119214 master-0 kubenswrapper[8018]: E0217 15:13:45.119118 8018 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm" podUID="9501c813-f993-4916-94fc-878138ac027b" containerName="kube-multus-additional-cni-plugins" Feb 17 15:13:45.164942 master-0 kubenswrapper[8018]: I0217 15:13:45.164881 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-rzbff"] Feb 17 15:13:45.445773 master-0 kubenswrapper[8018]: I0217 15:13:45.445704 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" event={"ID":"75486ba2-6fde-456f-8846-2af67e58d585","Type":"ContainerStarted","Data":"c8d059fa01ecdc001c9f81953a0f611eee0abc7b2a9ab48cb6c12f655da8d5ed"} Feb 17 15:13:45.445773 master-0 kubenswrapper[8018]: I0217 15:13:45.445750 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" event={"ID":"75486ba2-6fde-456f-8846-2af67e58d585","Type":"ContainerStarted","Data":"79cd9922eddeda66f86396279d7c2d92bdfdde5d55f7ab9b86712ce128d7d382"} Feb 17 15:13:45.986105 master-0 kubenswrapper[8018]: I0217 15:13:45.985995 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:45.986105 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:45.986105 master-0 kubenswrapper[8018]: [+]process-running ok Feb 
17 15:13:45.986105 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:45.986105 master-0 kubenswrapper[8018]: I0217 15:13:45.986066 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:46.211201 master-0 kubenswrapper[8018]: E0217 15:13:46.211131 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" podUID="c8646e5c-c2ce-48e6-b757-58044769f479" Feb 17 15:13:46.439553 master-0 kubenswrapper[8018]: I0217 15:13:46.439448 8018 scope.go:117] "RemoveContainer" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade" Feb 17 15:13:46.439879 master-0 kubenswrapper[8018]: E0217 15:13:46.439707 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:13:46.469138 master-0 kubenswrapper[8018]: I0217 15:13:46.469063 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" Feb 17 15:13:46.469138 master-0 kubenswrapper[8018]: I0217 15:13:46.469088 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" event={"ID":"75486ba2-6fde-456f-8846-2af67e58d585","Type":"ContainerStarted","Data":"45dbd4ea79e43e686a9c5871ae5c59474bfc1abca00581679dc4b7c55fb07d49"} Feb 17 15:13:46.479321 master-0 kubenswrapper[8018]: I0217 15:13:46.479256 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:13:46.485062 master-0 kubenswrapper[8018]: I0217 15:13:46.484985 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:13:46.497291 master-0 kubenswrapper[8018]: I0217 15:13:46.497184 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" podStartSLOduration=2.497164302 podStartE2EDuration="2.497164302s" podCreationTimestamp="2026-02-17 15:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:13:46.495434488 +0000 UTC m=+659.247777548" watchObservedRunningTime="2026-02-17 
15:13:46.497164302 +0000 UTC m=+659.249507362" Feb 17 15:13:46.530079 master-0 kubenswrapper[8018]: I0217 15:13:46.529997 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"] Feb 17 15:13:46.530303 master-0 kubenswrapper[8018]: I0217 15:13:46.530242 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" podUID="6b25a72d-965f-415c-abc9-09612859e9e0" containerName="multus-admission-controller" containerID="cri-o://d03b5b01eebc01049f52508b9cb6557295a244f02f7925b66faf26d4de1e8764" gracePeriod=30 Feb 17 15:13:46.530412 master-0 kubenswrapper[8018]: I0217 15:13:46.530375 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" podUID="6b25a72d-965f-415c-abc9-09612859e9e0" containerName="kube-rbac-proxy" containerID="cri-o://58400ac8b210abe6d74d057999272a3e2cdb3a6a4ce0fbdbf1173716a460becc" gracePeriod=30 Feb 17 15:13:46.736961 master-0 kubenswrapper[8018]: I0217 15:13:46.736853 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-kcv7p" Feb 17 15:13:46.743290 master-0 kubenswrapper[8018]: I0217 15:13:46.743248 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:13:46.981737 master-0 kubenswrapper[8018]: I0217 15:13:46.981667 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:46.981737 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:46.981737 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:46.981737 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:46.981737 master-0 kubenswrapper[8018]: I0217 15:13:46.981739 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:47.143908 master-0 kubenswrapper[8018]: I0217 15:13:47.143843 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc"] Feb 17 15:13:47.477640 master-0 kubenswrapper[8018]: I0217 15:13:47.477501 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" event={"ID":"c97d328c-95b6-4511-aa90-531ab42b9653","Type":"ContainerStarted","Data":"c388df3d1b10996f4fe92a5802dbdf00a1159ddd9a0ce29347567fcbd8371e4e"} Feb 17 15:13:47.477640 master-0 kubenswrapper[8018]: I0217 15:13:47.477582 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" event={"ID":"c97d328c-95b6-4511-aa90-531ab42b9653","Type":"ContainerStarted","Data":"c15c55254b60eef4e6f082f6ebb85ff7cc6e3f7a7f4e7b7ce280e5a616be4326"} Feb 17 15:13:47.479654 master-0 kubenswrapper[8018]: 
I0217 15:13:47.479607 8018 generic.go:334] "Generic (PLEG): container finished" podID="6b25a72d-965f-415c-abc9-09612859e9e0" containerID="58400ac8b210abe6d74d057999272a3e2cdb3a6a4ce0fbdbf1173716a460becc" exitCode=0 Feb 17 15:13:47.479733 master-0 kubenswrapper[8018]: I0217 15:13:47.479660 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" event={"ID":"6b25a72d-965f-415c-abc9-09612859e9e0","Type":"ContainerDied","Data":"58400ac8b210abe6d74d057999272a3e2cdb3a6a4ce0fbdbf1173716a460becc"} Feb 17 15:13:47.801623 master-0 kubenswrapper[8018]: I0217 15:13:47.801484 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:13:47.804656 master-0 kubenswrapper[8018]: I0217 15:13:47.804586 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:13:47.983859 master-0 kubenswrapper[8018]: I0217 15:13:47.983785 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:47.983859 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:47.983859 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:47.983859 master-0 
kubenswrapper[8018]: healthz check failed Feb 17 15:13:47.983859 master-0 kubenswrapper[8018]: I0217 15:13:47.983849 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:48.041093 master-0 kubenswrapper[8018]: I0217 15:13:48.041037 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-dtqvr" Feb 17 15:13:48.049369 master-0 kubenswrapper[8018]: I0217 15:13:48.049313 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:13:48.225178 master-0 kubenswrapper[8018]: E0217 15:13:48.225123 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-api-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" podUID="655e4000-0ad4-4349-8c31-e0c952e4be30" Feb 17 15:13:48.471831 master-0 kubenswrapper[8018]: I0217 15:13:48.471775 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4"] Feb 17 15:13:48.488247 master-0 kubenswrapper[8018]: I0217 15:13:48.488198 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:13:48.983734 master-0 kubenswrapper[8018]: I0217 15:13:48.983667 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:48.983734 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:48.983734 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:48.983734 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:48.983734 master-0 kubenswrapper[8018]: I0217 15:13:48.983732 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:49.326084 master-0 kubenswrapper[8018]: I0217 15:13:49.325957 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" Feb 17 15:13:49.330514 master-0 kubenswrapper[8018]: I0217 15:13:49.330474 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" Feb 17 15:13:49.472490 master-0 kubenswrapper[8018]: I0217 15:13:49.472432 8018 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-4h7qp" Feb 17 15:13:49.480847 master-0 kubenswrapper[8018]: I0217 15:13:49.480799 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" Feb 17 15:13:49.493243 master-0 kubenswrapper[8018]: I0217 15:13:49.493186 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" event={"ID":"6b7d1adb-b23b-4702-be7d-27e818e8fd63","Type":"ContainerStarted","Data":"0760e00b932363042782ba956e380d806e3d87e24d2f82f4acd8b411bacdc365"} Feb 17 15:13:49.590545 master-0 kubenswrapper[8018]: I0217 15:13:49.590364 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 17 15:13:49.591264 master-0 kubenswrapper[8018]: I0217 15:13:49.591223 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 17 15:13:49.594175 master-0 kubenswrapper[8018]: I0217 15:13:49.594028 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 17 15:13:49.594175 master-0 kubenswrapper[8018]: I0217 15:13:49.594075 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-qt5n5" Feb 17 15:13:49.600643 master-0 kubenswrapper[8018]: I0217 15:13:49.600583 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 17 15:13:49.731180 master-0 kubenswrapper[8018]: I0217 15:13:49.731073 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee3c8b34-0581-45d6-a8ff-3959d5651eba-kube-api-access\") pod \"installer-4-master-0\" (UID: \"ee3c8b34-0581-45d6-a8ff-3959d5651eba\") " 
pod="openshift-kube-scheduler/installer-4-master-0" Feb 17 15:13:49.731180 master-0 kubenswrapper[8018]: I0217 15:13:49.731182 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee3c8b34-0581-45d6-a8ff-3959d5651eba-var-lock\") pod \"installer-4-master-0\" (UID: \"ee3c8b34-0581-45d6-a8ff-3959d5651eba\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 17 15:13:49.731666 master-0 kubenswrapper[8018]: I0217 15:13:49.731289 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee3c8b34-0581-45d6-a8ff-3959d5651eba-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"ee3c8b34-0581-45d6-a8ff-3959d5651eba\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 17 15:13:49.832075 master-0 kubenswrapper[8018]: I0217 15:13:49.832037 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee3c8b34-0581-45d6-a8ff-3959d5651eba-var-lock\") pod \"installer-4-master-0\" (UID: \"ee3c8b34-0581-45d6-a8ff-3959d5651eba\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 17 15:13:49.832270 master-0 kubenswrapper[8018]: I0217 15:13:49.832166 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee3c8b34-0581-45d6-a8ff-3959d5651eba-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"ee3c8b34-0581-45d6-a8ff-3959d5651eba\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 17 15:13:49.832270 master-0 kubenswrapper[8018]: I0217 15:13:49.832209 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee3c8b34-0581-45d6-a8ff-3959d5651eba-kube-api-access\") pod \"installer-4-master-0\" (UID: 
\"ee3c8b34-0581-45d6-a8ff-3959d5651eba\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 17 15:13:49.832418 master-0 kubenswrapper[8018]: I0217 15:13:49.832381 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee3c8b34-0581-45d6-a8ff-3959d5651eba-var-lock\") pod \"installer-4-master-0\" (UID: \"ee3c8b34-0581-45d6-a8ff-3959d5651eba\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 17 15:13:49.832483 master-0 kubenswrapper[8018]: I0217 15:13:49.832414 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee3c8b34-0581-45d6-a8ff-3959d5651eba-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"ee3c8b34-0581-45d6-a8ff-3959d5651eba\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 17 15:13:49.848281 master-0 kubenswrapper[8018]: I0217 15:13:49.847817 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee3c8b34-0581-45d6-a8ff-3959d5651eba-kube-api-access\") pod \"installer-4-master-0\" (UID: \"ee3c8b34-0581-45d6-a8ff-3959d5651eba\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 17 15:13:49.919934 master-0 kubenswrapper[8018]: I0217 15:13:49.919874 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 17 15:13:49.948183 master-0 kubenswrapper[8018]: I0217 15:13:49.948141 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"] Feb 17 15:13:49.962527 master-0 kubenswrapper[8018]: W0217 15:13:49.962483 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8646e5c_c2ce_48e6_b757_58044769f479.slice/crio-afa3f59e2bc7466bd1b06c51e7ed2d9d6a3926c00535b006d8f4a5730c12a974 WatchSource:0}: Error finding container afa3f59e2bc7466bd1b06c51e7ed2d9d6a3926c00535b006d8f4a5730c12a974: Status 404 returned error can't find the container with id afa3f59e2bc7466bd1b06c51e7ed2d9d6a3926c00535b006d8f4a5730c12a974 Feb 17 15:13:49.986377 master-0 kubenswrapper[8018]: I0217 15:13:49.986313 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:49.986377 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:49.986377 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:49.986377 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:49.987068 master-0 kubenswrapper[8018]: I0217 15:13:49.986390 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:50.354698 master-0 kubenswrapper[8018]: I0217 15:13:50.354584 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 17 15:13:50.515848 master-0 kubenswrapper[8018]: I0217 15:13:50.515740 8018 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" event={"ID":"c8646e5c-c2ce-48e6-b757-58044769f479","Type":"ContainerStarted","Data":"c4bf046ec13f4bbdd9fa14d8a1603ffc4d3fec773e58987a2d5dcf3342751600"} Feb 17 15:13:50.516374 master-0 kubenswrapper[8018]: I0217 15:13:50.515857 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" event={"ID":"c8646e5c-c2ce-48e6-b757-58044769f479","Type":"ContainerStarted","Data":"afa3f59e2bc7466bd1b06c51e7ed2d9d6a3926c00535b006d8f4a5730c12a974"} Feb 17 15:13:50.711651 master-0 kubenswrapper[8018]: W0217 15:13:50.711598 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podee3c8b34_0581_45d6_a8ff_3959d5651eba.slice/crio-01b9ebc25f2991ff17b90c43ce2febf6d4d9453e3d9f6e9c0161bb2a2c624c42 WatchSource:0}: Error finding container 01b9ebc25f2991ff17b90c43ce2febf6d4d9453e3d9f6e9c0161bb2a2c624c42: Status 404 returned error can't find the container with id 01b9ebc25f2991ff17b90c43ce2febf6d4d9453e3d9f6e9c0161bb2a2c624c42 Feb 17 15:13:50.982721 master-0 kubenswrapper[8018]: I0217 15:13:50.982643 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:50.982721 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:50.982721 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:50.982721 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:50.982721 master-0 kubenswrapper[8018]: I0217 15:13:50.982709 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 17 15:13:51.526068 master-0 kubenswrapper[8018]: I0217 15:13:51.526011 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" event={"ID":"6b7d1adb-b23b-4702-be7d-27e818e8fd63","Type":"ContainerStarted","Data":"be95c04741ec1cf5113238a3d52d165ba0307c3efaa3d04402cd5e6a8f8b3ae7"} Feb 17 15:13:51.526068 master-0 kubenswrapper[8018]: I0217 15:13:51.526060 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" event={"ID":"6b7d1adb-b23b-4702-be7d-27e818e8fd63","Type":"ContainerStarted","Data":"1244bb714380b134ae9ae976f2f3273ed572e2e36cd2ea116a8750f282c67a0e"} Feb 17 15:13:51.527690 master-0 kubenswrapper[8018]: I0217 15:13:51.527652 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"ee3c8b34-0581-45d6-a8ff-3959d5651eba","Type":"ContainerStarted","Data":"e7703c1f1874a39177afc2874af2b734f2df8f64b07f87c0d3a00c3a8993072f"} Feb 17 15:13:51.527690 master-0 kubenswrapper[8018]: I0217 15:13:51.527684 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"ee3c8b34-0581-45d6-a8ff-3959d5651eba","Type":"ContainerStarted","Data":"01b9ebc25f2991ff17b90c43ce2febf6d4d9453e3d9f6e9c0161bb2a2c624c42"} Feb 17 15:13:51.553148 master-0 kubenswrapper[8018]: I0217 15:13:51.551323 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" podStartSLOduration=252.394469095 podStartE2EDuration="4m14.551294745s" podCreationTimestamp="2026-02-17 15:09:37 +0000 UTC" firstStartedPulling="2026-02-17 15:13:48.597689541 +0000 UTC m=+661.350032601" lastFinishedPulling="2026-02-17 15:13:50.754515201 +0000 UTC m=+663.506858251" observedRunningTime="2026-02-17 
15:13:51.549844668 +0000 UTC m=+664.302187788" watchObservedRunningTime="2026-02-17 15:13:51.551294745 +0000 UTC m=+664.303637795" Feb 17 15:13:51.576069 master-0 kubenswrapper[8018]: I0217 15:13:51.576004 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=2.575988137 podStartE2EDuration="2.575988137s" podCreationTimestamp="2026-02-17 15:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:13:51.57570698 +0000 UTC m=+664.328050020" watchObservedRunningTime="2026-02-17 15:13:51.575988137 +0000 UTC m=+664.328331177" Feb 17 15:13:51.982757 master-0 kubenswrapper[8018]: I0217 15:13:51.982705 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:51.982757 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:51.982757 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:51.982757 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:51.983110 master-0 kubenswrapper[8018]: I0217 15:13:51.982763 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:52.168220 master-0 kubenswrapper[8018]: I0217 15:13:52.168125 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " 
pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:13:52.171314 master-0 kubenswrapper[8018]: I0217 15:13:52.171265 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:13:52.390476 master-0 kubenswrapper[8018]: I0217 15:13:52.390311 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-t9g75" Feb 17 15:13:52.399281 master-0 kubenswrapper[8018]: I0217 15:13:52.399232 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:13:52.989567 master-0 kubenswrapper[8018]: I0217 15:13:52.987039 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:52.989567 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:52.989567 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:52.989567 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:52.989567 master-0 kubenswrapper[8018]: I0217 15:13:52.987101 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:53.981953 master-0 kubenswrapper[8018]: I0217 15:13:53.981727 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:53.981953 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:53.981953 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:53.981953 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:53.982538 master-0 kubenswrapper[8018]: I0217 15:13:53.982295 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:54.551846 master-0 kubenswrapper[8018]: I0217 15:13:54.551696 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" event={"ID":"c8646e5c-c2ce-48e6-b757-58044769f479","Type":"ContainerStarted","Data":"da1858700d4dd348bd1bd6965ebad759d727564f2555dd6372efe783d1762809"} Feb 17 15:13:54.556104 master-0 kubenswrapper[8018]: I0217 15:13:54.555284 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" event={"ID":"c97d328c-95b6-4511-aa90-531ab42b9653","Type":"ContainerStarted","Data":"eac7810e63e39b854e1c16b4c3a8efd314bc8ba25306e76c49cd7325f9e050a2"} Feb 17 15:13:54.574224 master-0 kubenswrapper[8018]: I0217 15:13:54.574131 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" podStartSLOduration=251.465565638 podStartE2EDuration="4m15.574107981s" podCreationTimestamp="2026-02-17 15:09:39 +0000 UTC" firstStartedPulling="2026-02-17 15:13:50.104703896 +0000 UTC m=+662.857046946" lastFinishedPulling="2026-02-17 15:13:54.213246239 +0000 UTC m=+666.965589289" observedRunningTime="2026-02-17 
15:13:54.573098606 +0000 UTC m=+667.325441686" watchObservedRunningTime="2026-02-17 15:13:54.574107981 +0000 UTC m=+667.326451051" Feb 17 15:13:54.609257 master-0 kubenswrapper[8018]: I0217 15:13:54.609167 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" podStartSLOduration=251.668609001 podStartE2EDuration="4m18.609146037s" podCreationTimestamp="2026-02-17 15:09:36 +0000 UTC" firstStartedPulling="2026-02-17 15:13:47.314147683 +0000 UTC m=+660.066490733" lastFinishedPulling="2026-02-17 15:13:54.254684719 +0000 UTC m=+667.007027769" observedRunningTime="2026-02-17 15:13:54.605993867 +0000 UTC m=+667.358336937" watchObservedRunningTime="2026-02-17 15:13:54.609146037 +0000 UTC m=+667.361489097" Feb 17 15:13:54.653476 master-0 kubenswrapper[8018]: I0217 15:13:54.650584 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"] Feb 17 15:13:54.982078 master-0 kubenswrapper[8018]: I0217 15:13:54.981972 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:54.982078 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:54.982078 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:54.982078 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:54.982392 master-0 kubenswrapper[8018]: I0217 15:13:54.982179 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:55.113804 master-0 kubenswrapper[8018]: E0217 15:13:55.113535 8018 log.go:32] "ExecSync cmd 
from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 15:13:55.114934 master-0 kubenswrapper[8018]: E0217 15:13:55.114868 8018 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 15:13:55.116941 master-0 kubenswrapper[8018]: E0217 15:13:55.116854 8018 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 15:13:55.117054 master-0 kubenswrapper[8018]: E0217 15:13:55.116963 8018 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm" podUID="9501c813-f993-4916-94fc-878138ac027b" containerName="kube-multus-additional-cni-plugins" Feb 17 15:13:55.565777 master-0 kubenswrapper[8018]: I0217 15:13:55.565531 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" event={"ID":"655e4000-0ad4-4349-8c31-e0c952e4be30","Type":"ContainerStarted","Data":"218df8f0b52821823d8254d204feb9b95b5a09ccab1492be657b97414660a369"} Feb 17 15:13:55.566419 master-0 kubenswrapper[8018]: I0217 15:13:55.565811 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" event={"ID":"655e4000-0ad4-4349-8c31-e0c952e4be30","Type":"ContainerStarted","Data":"a592584f1d491ed515603e4859ea07fdb301bfabbc222443eff56b510fc57717"} Feb 17 15:13:55.981787 master-0 kubenswrapper[8018]: I0217 15:13:55.981715 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:55.981787 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:55.981787 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:55.981787 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:55.981787 master-0 kubenswrapper[8018]: I0217 15:13:55.981817 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:56.982560 master-0 kubenswrapper[8018]: I0217 15:13:56.982494 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:56.982560 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:56.982560 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:56.982560 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:56.982560 master-0 kubenswrapper[8018]: I0217 15:13:56.982558 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Feb 17 15:13:57.789432 master-0 kubenswrapper[8018]: I0217 15:13:57.789360 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 17 15:13:57.789815 master-0 kubenswrapper[8018]: I0217 15:13:57.789741 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-4-master-0" podUID="ee3c8b34-0581-45d6-a8ff-3959d5651eba" containerName="installer" containerID="cri-o://e7703c1f1874a39177afc2874af2b734f2df8f64b07f87c0d3a00c3a8993072f" gracePeriod=30 Feb 17 15:13:57.982574 master-0 kubenswrapper[8018]: I0217 15:13:57.982490 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:57.982574 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:57.982574 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:57.982574 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:57.982574 master-0 kubenswrapper[8018]: I0217 15:13:57.982551 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:58.983925 master-0 kubenswrapper[8018]: I0217 15:13:58.983800 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:58.983925 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:58.983925 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:13:58.983925 master-0 kubenswrapper[8018]: 
healthz check failed Feb 17 15:13:58.985549 master-0 kubenswrapper[8018]: I0217 15:13:58.983981 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:13:59.440147 master-0 kubenswrapper[8018]: I0217 15:13:59.440050 8018 scope.go:117] "RemoveContainer" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade" Feb 17 15:13:59.440638 master-0 kubenswrapper[8018]: E0217 15:13:59.440569 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:13:59.782973 master-0 kubenswrapper[8018]: I0217 15:13:59.782858 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Feb 17 15:13:59.784098 master-0 kubenswrapper[8018]: I0217 15:13:59.784027 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 17 15:13:59.855995 master-0 kubenswrapper[8018]: I0217 15:13:59.855935 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Feb 17 15:13:59.901880 master-0 kubenswrapper[8018]: I0217 15:13:59.901830 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69b452fc-5e99-4947-a722-e47a602ac144-var-lock\") pod \"installer-5-master-0\" (UID: \"69b452fc-5e99-4947-a722-e47a602ac144\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 17 15:13:59.902077 master-0 kubenswrapper[8018]: I0217 15:13:59.901889 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69b452fc-5e99-4947-a722-e47a602ac144-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"69b452fc-5e99-4947-a722-e47a602ac144\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 17 15:13:59.902077 master-0 kubenswrapper[8018]: I0217 15:13:59.901930 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69b452fc-5e99-4947-a722-e47a602ac144-kube-api-access\") pod \"installer-5-master-0\" (UID: \"69b452fc-5e99-4947-a722-e47a602ac144\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 17 15:13:59.982354 master-0 kubenswrapper[8018]: I0217 15:13:59.982266 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:13:59.982354 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:13:59.982354 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 
15:13:59.982354 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:13:59.982753 master-0 kubenswrapper[8018]: I0217 15:13:59.982373 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:00.003100 master-0 kubenswrapper[8018]: I0217 15:14:00.003049 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69b452fc-5e99-4947-a722-e47a602ac144-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"69b452fc-5e99-4947-a722-e47a602ac144\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 17 15:14:00.003505 master-0 kubenswrapper[8018]: I0217 15:14:00.003130 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69b452fc-5e99-4947-a722-e47a602ac144-kube-api-access\") pod \"installer-5-master-0\" (UID: \"69b452fc-5e99-4947-a722-e47a602ac144\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 17 15:14:00.003505 master-0 kubenswrapper[8018]: I0217 15:14:00.003255 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69b452fc-5e99-4947-a722-e47a602ac144-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"69b452fc-5e99-4947-a722-e47a602ac144\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 17 15:14:00.003691 master-0 kubenswrapper[8018]: I0217 15:14:00.003639 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69b452fc-5e99-4947-a722-e47a602ac144-var-lock\") pod \"installer-5-master-0\" (UID: \"69b452fc-5e99-4947-a722-e47a602ac144\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 17 15:14:00.004014 master-0 
kubenswrapper[8018]: I0217 15:14:00.003967 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69b452fc-5e99-4947-a722-e47a602ac144-var-lock\") pod \"installer-5-master-0\" (UID: \"69b452fc-5e99-4947-a722-e47a602ac144\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 17 15:14:00.019440 master-0 kubenswrapper[8018]: I0217 15:14:00.019393 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69b452fc-5e99-4947-a722-e47a602ac144-kube-api-access\") pod \"installer-5-master-0\" (UID: \"69b452fc-5e99-4947-a722-e47a602ac144\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 17 15:14:00.171723 master-0 kubenswrapper[8018]: I0217 15:14:00.171666 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Feb 17 15:14:00.983212 master-0 kubenswrapper[8018]: I0217 15:14:00.983131 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:00.983212 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:00.983212 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:00.983212 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:00.983551 master-0 kubenswrapper[8018]: I0217 15:14:00.983249 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:01.982489 master-0 kubenswrapper[8018]: I0217 15:14:01.982319 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:01.982489 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:01.982489 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:01.982489 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:01.982489 master-0 kubenswrapper[8018]: I0217 15:14:01.982410 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:02.552690 master-0 kubenswrapper[8018]: I0217 15:14:02.552526 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Feb 17 15:14:02.620562 master-0 kubenswrapper[8018]: I0217 15:14:02.619404 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" event={"ID":"655e4000-0ad4-4349-8c31-e0c952e4be30","Type":"ContainerStarted","Data":"a17a8feb8cde32d9f769f1d063cb256b0434b87c2646d32dfbbaf8c558e68235"}
Feb 17 15:14:02.622438 master-0 kubenswrapper[8018]: I0217 15:14:02.622378 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"69b452fc-5e99-4947-a722-e47a602ac144","Type":"ContainerStarted","Data":"4ee1ada2125277c0b6cce472a26bd7b393be00724a19ccb2e1067f7f0c7cb926"}
Feb 17 15:14:02.627103 master-0 kubenswrapper[8018]: E0217 15:14:02.627038 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" podUID="784b804f-6bcf-4cbd-a19e-9b1fa244354e"
Feb 17 15:14:02.689183 master-0 kubenswrapper[8018]: I0217 15:14:02.688976 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" podStartSLOduration=254.346966103 podStartE2EDuration="4m21.68894867s" podCreationTimestamp="2026-02-17 15:09:41 +0000 UTC" firstStartedPulling="2026-02-17 15:13:54.851364394 +0000 UTC m=+667.603707444" lastFinishedPulling="2026-02-17 15:14:02.193346921 +0000 UTC m=+674.945690011" observedRunningTime="2026-02-17 15:14:02.648664649 +0000 UTC m=+675.401007739" watchObservedRunningTime="2026-02-17 15:14:02.68894867 +0000 UTC m=+675.441291720"
Feb 17 15:14:02.983364 master-0 kubenswrapper[8018]: I0217 15:14:02.983286 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:02.983364 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:02.983364 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:02.983364 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:02.984256 master-0 kubenswrapper[8018]: I0217 15:14:02.984215 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:03.632616 master-0 kubenswrapper[8018]: I0217 15:14:03.632439 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"69b452fc-5e99-4947-a722-e47a602ac144","Type":"ContainerStarted","Data":"6b14f00d7fcb44fb3296b9acab65074a4551627d03279119eef48d40dd8b3ddd"}
Feb 17 15:14:03.632616 master-0 kubenswrapper[8018]: I0217 15:14:03.632522 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j"
Feb 17 15:14:03.659190 master-0 kubenswrapper[8018]: I0217 15:14:03.659097 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=4.65907654 podStartE2EDuration="4.65907654s" podCreationTimestamp="2026-02-17 15:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:14:03.656112554 +0000 UTC m=+676.408455634" watchObservedRunningTime="2026-02-17 15:14:03.65907654 +0000 UTC m=+676.411419620"
Feb 17 15:14:03.982523 master-0 kubenswrapper[8018]: I0217 15:14:03.982300 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:03.982523 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:03.982523 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:03.982523 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:03.982523 master-0 kubenswrapper[8018]: I0217 15:14:03.982397 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:04.983651 master-0 kubenswrapper[8018]: I0217 15:14:04.983578 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:04.983651 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:04.983651 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:04.983651 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:04.983651 master-0 kubenswrapper[8018]: I0217 15:14:04.983679 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:05.114560 master-0 kubenswrapper[8018]: E0217 15:14:05.114362 8018 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 17 15:14:05.116542 master-0 kubenswrapper[8018]: E0217 15:14:05.116474 8018 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 17 15:14:05.118977 master-0 kubenswrapper[8018]: E0217 15:14:05.118873 8018 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 17 15:14:05.119168 master-0 kubenswrapper[8018]: E0217 15:14:05.118990 8018 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm" podUID="9501c813-f993-4916-94fc-878138ac027b" containerName="kube-multus-additional-cni-plugins"
Feb 17 15:14:05.907592 master-0 kubenswrapper[8018]: I0217 15:14:05.907502 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j"
Feb 17 15:14:05.913322 master-0 kubenswrapper[8018]: I0217 15:14:05.913219 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j"
Feb 17 15:14:05.983166 master-0 kubenswrapper[8018]: I0217 15:14:05.983088 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:05.983166 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:05.983166 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:05.983166 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:05.983716 master-0 kubenswrapper[8018]: I0217 15:14:05.983192 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:06.036720 master-0 kubenswrapper[8018]: I0217 15:14:06.036671 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-8gftr"
Feb 17 15:14:06.045283 master-0 kubenswrapper[8018]: I0217 15:14:06.045224 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j"
Feb 17 15:14:06.428922 master-0 kubenswrapper[8018]: I0217 15:14:06.428766 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s"
Feb 17 15:14:06.433902 master-0 kubenswrapper[8018]: I0217 15:14:06.433835 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s"
Feb 17 15:14:06.617265 master-0 kubenswrapper[8018]: I0217 15:14:06.617159 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-7485d645b8-nzz2j"]
Feb 17 15:14:06.628221 master-0 kubenswrapper[8018]: W0217 15:14:06.628100 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod784b804f_6bcf_4cbd_a19e_9b1fa244354e.slice/crio-a576c816a4856d1ffb304e4f810329e8d6608ef0502c0b4373fab4f3b3f5101a WatchSource:0}: Error finding container a576c816a4856d1ffb304e4f810329e8d6608ef0502c0b4373fab4f3b3f5101a: Status 404 returned error can't find the container with id a576c816a4856d1ffb304e4f810329e8d6608ef0502c0b4373fab4f3b3f5101a
Feb 17 15:14:06.637853 master-0 kubenswrapper[8018]: I0217 15:14:06.637706 8018 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 15:14:06.644054 master-0 kubenswrapper[8018]: I0217 15:14:06.643944 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-kjdkm"
Feb 17 15:14:06.651783 master-0 kubenswrapper[8018]: I0217 15:14:06.651714 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s"
Feb 17 15:14:06.659347 master-0 kubenswrapper[8018]: I0217 15:14:06.659287 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" event={"ID":"784b804f-6bcf-4cbd-a19e-9b1fa244354e","Type":"ContainerStarted","Data":"a576c816a4856d1ffb304e4f810329e8d6608ef0502c0b4373fab4f3b3f5101a"}
Feb 17 15:14:06.690492 master-0 kubenswrapper[8018]: W0217 15:14:06.690332 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76d3da23_3347_4a5c_b328_d92671897ecc.slice/crio-aebd0546beb5f26027662152b9f3fbf064714cf96a6113f61f98182131ca4a45 WatchSource:0}: Error finding container aebd0546beb5f26027662152b9f3fbf064714cf96a6113f61f98182131ca4a45: Status 404 returned error can't find the container with id aebd0546beb5f26027662152b9f3fbf064714cf96a6113f61f98182131ca4a45
Feb 17 15:14:06.982843 master-0 kubenswrapper[8018]: I0217 15:14:06.982755 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:06.982843 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:06.982843 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:06.982843 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:06.983167 master-0 kubenswrapper[8018]: I0217 15:14:06.982876 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:07.674754 master-0 kubenswrapper[8018]: I0217 15:14:07.674665 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" event={"ID":"76d3da23-3347-4a5c-b328-d92671897ecc","Type":"ContainerStarted","Data":"9c22c03ba38290b4b67da5589986faeba4e958e5e4647732342d046d26b6522c"}
Feb 17 15:14:07.674754 master-0 kubenswrapper[8018]: I0217 15:14:07.674758 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" event={"ID":"76d3da23-3347-4a5c-b328-d92671897ecc","Type":"ContainerStarted","Data":"aebd0546beb5f26027662152b9f3fbf064714cf96a6113f61f98182131ca4a45"}
Feb 17 15:14:07.983772 master-0 kubenswrapper[8018]: I0217 15:14:07.983638 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:07.983772 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:07.983772 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:07.983772 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:07.983772 master-0 kubenswrapper[8018]: I0217 15:14:07.983720 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:08.532568 master-0 kubenswrapper[8018]: I0217 15:14:08.532528 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-kl9jm_9501c813-f993-4916-94fc-878138ac027b/kube-multus-additional-cni-plugins/0.log"
Feb 17 15:14:08.532738 master-0 kubenswrapper[8018]: I0217 15:14:08.532614 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:14:08.667030 master-0 kubenswrapper[8018]: I0217 15:14:08.666982 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9501c813-f993-4916-94fc-878138ac027b-cni-sysctl-allowlist\") pod \"9501c813-f993-4916-94fc-878138ac027b\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") "
Feb 17 15:14:08.667191 master-0 kubenswrapper[8018]: I0217 15:14:08.667051 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9501c813-f993-4916-94fc-878138ac027b-tuning-conf-dir\") pod \"9501c813-f993-4916-94fc-878138ac027b\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") "
Feb 17 15:14:08.667191 master-0 kubenswrapper[8018]: I0217 15:14:08.667156 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9501c813-f993-4916-94fc-878138ac027b-ready\") pod \"9501c813-f993-4916-94fc-878138ac027b\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") "
Feb 17 15:14:08.667285 master-0 kubenswrapper[8018]: I0217 15:14:08.667198 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4kds\" (UniqueName: \"kubernetes.io/projected/9501c813-f993-4916-94fc-878138ac027b-kube-api-access-m4kds\") pod \"9501c813-f993-4916-94fc-878138ac027b\" (UID: \"9501c813-f993-4916-94fc-878138ac027b\") "
Feb 17 15:14:08.667421 master-0 kubenswrapper[8018]: I0217 15:14:08.667373 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9501c813-f993-4916-94fc-878138ac027b-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "9501c813-f993-4916-94fc-878138ac027b" (UID: "9501c813-f993-4916-94fc-878138ac027b"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:14:08.667665 master-0 kubenswrapper[8018]: I0217 15:14:08.667642 8018 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9501c813-f993-4916-94fc-878138ac027b-tuning-conf-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:14:08.667714 master-0 kubenswrapper[8018]: I0217 15:14:08.667675 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9501c813-f993-4916-94fc-878138ac027b-ready" (OuterVolumeSpecName: "ready") pod "9501c813-f993-4916-94fc-878138ac027b" (UID: "9501c813-f993-4916-94fc-878138ac027b"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:14:08.667865 master-0 kubenswrapper[8018]: I0217 15:14:08.667842 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9501c813-f993-4916-94fc-878138ac027b-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "9501c813-f993-4916-94fc-878138ac027b" (UID: "9501c813-f993-4916-94fc-878138ac027b"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:14:08.670296 master-0 kubenswrapper[8018]: I0217 15:14:08.670270 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9501c813-f993-4916-94fc-878138ac027b-kube-api-access-m4kds" (OuterVolumeSpecName: "kube-api-access-m4kds") pod "9501c813-f993-4916-94fc-878138ac027b" (UID: "9501c813-f993-4916-94fc-878138ac027b"). InnerVolumeSpecName "kube-api-access-m4kds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:14:08.686292 master-0 kubenswrapper[8018]: I0217 15:14:08.686243 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-kl9jm_9501c813-f993-4916-94fc-878138ac027b/kube-multus-additional-cni-plugins/0.log"
Feb 17 15:14:08.686668 master-0 kubenswrapper[8018]: I0217 15:14:08.686303 8018 generic.go:334] "Generic (PLEG): container finished" podID="9501c813-f993-4916-94fc-878138ac027b" containerID="172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c" exitCode=137
Feb 17 15:14:08.686668 master-0 kubenswrapper[8018]: I0217 15:14:08.686371 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm"
Feb 17 15:14:08.686668 master-0 kubenswrapper[8018]: I0217 15:14:08.686378 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm" event={"ID":"9501c813-f993-4916-94fc-878138ac027b","Type":"ContainerDied","Data":"172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c"}
Feb 17 15:14:08.686668 master-0 kubenswrapper[8018]: I0217 15:14:08.686411 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kl9jm" event={"ID":"9501c813-f993-4916-94fc-878138ac027b","Type":"ContainerDied","Data":"55124cc48e2f04c2d0d41148bb59c0218a57f0f39885408f846b0eadf7dec65c"}
Feb 17 15:14:08.686668 master-0 kubenswrapper[8018]: I0217 15:14:08.686431 8018 scope.go:117] "RemoveContainer" containerID="172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c"
Feb 17 15:14:08.689719 master-0 kubenswrapper[8018]: I0217 15:14:08.689637 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" event={"ID":"784b804f-6bcf-4cbd-a19e-9b1fa244354e","Type":"ContainerStarted","Data":"bcd77bf16a25e4a932e6852ad2e4f2e9aa302d4a4fb790b3d162b280f42758af"}
Feb 17 15:14:08.689719 master-0 kubenswrapper[8018]: I0217 15:14:08.689708 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" event={"ID":"784b804f-6bcf-4cbd-a19e-9b1fa244354e","Type":"ContainerStarted","Data":"b1d38382c0d002897aef2c49c1d47d53f6f7fac74bb3b857f307ec96233929b5"}
Feb 17 15:14:08.711833 master-0 kubenswrapper[8018]: I0217 15:14:08.711748 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" podStartSLOduration=252.219728165 podStartE2EDuration="4m13.711723485s" podCreationTimestamp="2026-02-17 15:09:55 +0000 UTC" firstStartedPulling="2026-02-17 15:14:06.636908845 +0000 UTC m=+679.389251925" lastFinishedPulling="2026-02-17 15:14:08.128904195 +0000 UTC m=+680.881247245" observedRunningTime="2026-02-17 15:14:08.705637642 +0000 UTC m=+681.457980692" watchObservedRunningTime="2026-02-17 15:14:08.711723485 +0000 UTC m=+681.464066545"
Feb 17 15:14:08.740848 master-0 kubenswrapper[8018]: I0217 15:14:08.740810 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kl9jm"]
Feb 17 15:14:08.743490 master-0 kubenswrapper[8018]: I0217 15:14:08.743376 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kl9jm"]
Feb 17 15:14:08.769347 master-0 kubenswrapper[8018]: I0217 15:14:08.769290 8018 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9501c813-f993-4916-94fc-878138ac027b-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\""
Feb 17 15:14:08.769347 master-0 kubenswrapper[8018]: I0217 15:14:08.769340 8018 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9501c813-f993-4916-94fc-878138ac027b-ready\") on node \"master-0\" DevicePath \"\""
Feb 17 15:14:08.769347 master-0 kubenswrapper[8018]: I0217 15:14:08.769350 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4kds\" (UniqueName: \"kubernetes.io/projected/9501c813-f993-4916-94fc-878138ac027b-kube-api-access-m4kds\") on node \"master-0\" DevicePath \"\""
Feb 17 15:14:08.983180 master-0 kubenswrapper[8018]: I0217 15:14:08.983126 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:08.983180 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:08.983180 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:08.983180 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:08.983180 master-0 kubenswrapper[8018]: I0217 15:14:08.983190 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:08.992002 master-0 kubenswrapper[8018]: I0217 15:14:08.991955 8018 scope.go:117] "RemoveContainer" containerID="172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c"
Feb 17 15:14:08.992473 master-0 kubenswrapper[8018]: E0217 15:14:08.992412 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c\": container with ID starting with 172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c not found: ID does not exist" containerID="172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c"
Feb 17 15:14:08.992473 master-0 kubenswrapper[8018]: I0217 15:14:08.992443 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c"} err="failed to get container status \"172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c\": rpc error: code = NotFound desc = could not find container \"172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c\": container with ID starting with 172215d71309d64e20a3f0e330edf39f5e0f57d832dd537817f7abdf9953ab7c not found: ID does not exist"
Feb 17 15:14:09.452247 master-0 kubenswrapper[8018]: I0217 15:14:09.452191 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9501c813-f993-4916-94fc-878138ac027b" path="/var/lib/kubelet/pods/9501c813-f993-4916-94fc-878138ac027b/volumes"
Feb 17 15:14:09.698428 master-0 kubenswrapper[8018]: I0217 15:14:09.698364 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" event={"ID":"76d3da23-3347-4a5c-b328-d92671897ecc","Type":"ContainerStarted","Data":"cd41dc79695d9c0bd45ab8f72b3cf6af9d3af76fe51f2138f55c128fc6c09071"}
Feb 17 15:14:09.719288 master-0 kubenswrapper[8018]: I0217 15:14:09.719178 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" podStartSLOduration=251.750560398 podStartE2EDuration="4m13.719156092s" podCreationTimestamp="2026-02-17 15:09:56 +0000 UTC" firstStartedPulling="2026-02-17 15:14:07.059689851 +0000 UTC m=+679.812032941" lastFinishedPulling="2026-02-17 15:14:09.028285585 +0000 UTC m=+681.780628635" observedRunningTime="2026-02-17 15:14:09.716876146 +0000 UTC m=+682.469219236" watchObservedRunningTime="2026-02-17 15:14:09.719156092 +0000 UTC m=+682.471499182"
Feb 17 15:14:09.991263 master-0 kubenswrapper[8018]: I0217 15:14:09.983098 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:09.991263 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:09.991263 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:09.991263 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:09.991263 master-0 kubenswrapper[8018]: I0217 15:14:09.983198 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:10.283701 master-0 kubenswrapper[8018]: I0217 15:14:10.283122 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8"]
Feb 17 15:14:10.285152 master-0 kubenswrapper[8018]: E0217 15:14:10.284552 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9501c813-f993-4916-94fc-878138ac027b" containerName="kube-multus-additional-cni-plugins"
Feb 17 15:14:10.285152 master-0 kubenswrapper[8018]: I0217 15:14:10.284575 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="9501c813-f993-4916-94fc-878138ac027b" containerName="kube-multus-additional-cni-plugins"
Feb 17 15:14:10.285152 master-0 kubenswrapper[8018]: I0217 15:14:10.284773 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="9501c813-f993-4916-94fc-878138ac027b" containerName="kube-multus-additional-cni-plugins"
Feb 17 15:14:10.286943 master-0 kubenswrapper[8018]: I0217 15:14:10.286350 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8"
Feb 17 15:14:10.289841 master-0 kubenswrapper[8018]: I0217 15:14:10.289789 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-dzmf4"
Feb 17 15:14:10.297999 master-0 kubenswrapper[8018]: I0217 15:14:10.297929 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 17 15:14:10.298766 master-0 kubenswrapper[8018]: I0217 15:14:10.298717 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 17 15:14:10.299025 master-0 kubenswrapper[8018]: I0217 15:14:10.298987 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-rttp2"]
Feb 17 15:14:10.300803 master-0 kubenswrapper[8018]: I0217 15:14:10.300773 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:14:10.308852 master-0 kubenswrapper[8018]: I0217 15:14:10.302350 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 17 15:14:10.308852 master-0 kubenswrapper[8018]: I0217 15:14:10.303546 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-kt686"
Feb 17 15:14:10.308852 master-0 kubenswrapper[8018]: I0217 15:14:10.303657 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 17 15:14:10.308852 master-0 kubenswrapper[8018]: I0217 15:14:10.304505 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8"]
Feb 17 15:14:10.308852 master-0 kubenswrapper[8018]: I0217 15:14:10.307441 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"]
Feb 17 15:14:10.309214 master-0 kubenswrapper[8018]: I0217 15:14:10.308953 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"
Feb 17 15:14:10.314326 master-0 kubenswrapper[8018]: I0217 15:14:10.313093 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 17 15:14:10.314326 master-0 kubenswrapper[8018]: I0217 15:14:10.313333 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-jd7jr"
Feb 17 15:14:10.314326 master-0 kubenswrapper[8018]: I0217 15:14:10.313519 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 17 15:14:10.314326 master-0 kubenswrapper[8018]: I0217 15:14:10.313657 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 17 15:14:10.344898 master-0 kubenswrapper[8018]: I0217 15:14:10.344791 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"]
Feb 17 15:14:10.400190 master-0 kubenswrapper[8018]: I0217 15:14:10.400131 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cdbde712-c8dd-4011-adcb-af895abce94c-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8"
Feb 17 15:14:10.400402 master-0 kubenswrapper[8018]: I0217 15:14:10.400206 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"
Feb 17 15:14:10.400402 master-0 kubenswrapper[8018]: I0217 15:14:10.400241 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rcj2\" (UniqueName: \"kubernetes.io/projected/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-api-access-4rcj2\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"
Feb 17 15:14:10.400402 master-0 kubenswrapper[8018]: I0217 15:14:10.400280 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c435347a-ac01-46af-8192-9ef2d632bdfb-metrics-client-ca\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:14:10.400402 master-0 kubenswrapper[8018]: I0217 15:14:10.400305 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fj8w\" (UniqueName: \"kubernetes.io/projected/cdbde712-c8dd-4011-adcb-af895abce94c-kube-api-access-9fj8w\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8"
Feb 17 15:14:10.400402 master-0 kubenswrapper[8018]: I0217 15:14:10.400327 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-sys\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:14:10.400402 master-0 kubenswrapper[8018]: I0217 15:14:10.400351 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-textfile\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:14:10.400402 master-0 kubenswrapper[8018]: I0217 15:14:10.400379 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-wtmp\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:14:10.400402 master-0 kubenswrapper[8018]: I0217 15:14:10.400405 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"
Feb 17 15:14:10.400657 master-0 kubenswrapper[8018]: I0217 15:14:10.400429 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"
Feb 17 15:14:10.400657 master-0 kubenswrapper[8018]: I0217 15:14:10.400470 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-tls\") pod \"node-exporter-rttp2\" (UID:
\"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.400657 master-0 kubenswrapper[8018]: I0217 15:14:10.400540 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5w6f\" (UniqueName: \"kubernetes.io/projected/c435347a-ac01-46af-8192-9ef2d632bdfb-kube-api-access-j5w6f\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.400657 master-0 kubenswrapper[8018]: I0217 15:14:10.400577 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:14:10.400657 master-0 kubenswrapper[8018]: I0217 15:14:10.400599 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-root\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.400797 master-0 kubenswrapper[8018]: I0217 15:14:10.400743 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:14:10.400797 master-0 kubenswrapper[8018]: I0217 15:14:10.400775 8018 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.400859 master-0 kubenswrapper[8018]: I0217 15:14:10.400804 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.400859 master-0 kubenswrapper[8018]: I0217 15:14:10.400835 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.441161 master-0 kubenswrapper[8018]: I0217 15:14:10.441034 8018 scope.go:117] "RemoveContainer" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade" Feb 17 15:14:10.441327 master-0 kubenswrapper[8018]: E0217 15:14:10.441287 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:14:10.502780 master-0 kubenswrapper[8018]: I0217 15:14:10.502713 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:14:10.502780 master-0 kubenswrapper[8018]: I0217 15:14:10.502768 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-root\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.503038 master-0 kubenswrapper[8018]: I0217 15:14:10.502923 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:14:10.503038 master-0 kubenswrapper[8018]: I0217 15:14:10.502961 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.503038 master-0 kubenswrapper[8018]: I0217 15:14:10.502974 
8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-root\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.503038 master-0 kubenswrapper[8018]: I0217 15:14:10.502987 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.503158 master-0 kubenswrapper[8018]: I0217 15:14:10.503083 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.503158 master-0 kubenswrapper[8018]: I0217 15:14:10.503134 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cdbde712-c8dd-4011-adcb-af895abce94c-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:14:10.503214 master-0 kubenswrapper[8018]: I0217 15:14:10.503193 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-custom-resource-state-configmap\") pod 
\"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.503269 master-0 kubenswrapper[8018]: I0217 15:14:10.503241 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rcj2\" (UniqueName: \"kubernetes.io/projected/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-api-access-4rcj2\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.503355 master-0 kubenswrapper[8018]: I0217 15:14:10.503327 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-sys\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.503355 master-0 kubenswrapper[8018]: I0217 15:14:10.503353 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c435347a-ac01-46af-8192-9ef2d632bdfb-metrics-client-ca\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.503434 master-0 kubenswrapper[8018]: I0217 15:14:10.503369 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fj8w\" (UniqueName: \"kubernetes.io/projected/cdbde712-c8dd-4011-adcb-af895abce94c-kube-api-access-9fj8w\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:14:10.503434 master-0 kubenswrapper[8018]: I0217 15:14:10.503398 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" 
(UniqueName: \"kubernetes.io/empty-dir/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-textfile\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.503529 master-0 kubenswrapper[8018]: I0217 15:14:10.503439 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.503529 master-0 kubenswrapper[8018]: I0217 15:14:10.503471 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-wtmp\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.503529 master-0 kubenswrapper[8018]: I0217 15:14:10.503496 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.503529 master-0 kubenswrapper[8018]: I0217 15:14:10.503518 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-tls\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.503655 master-0 kubenswrapper[8018]: I0217 15:14:10.503625 8018 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5w6f\" (UniqueName: \"kubernetes.io/projected/c435347a-ac01-46af-8192-9ef2d632bdfb-kube-api-access-j5w6f\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.503762 master-0 kubenswrapper[8018]: E0217 15:14:10.503725 8018 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Feb 17 15:14:10.503820 master-0 kubenswrapper[8018]: I0217 15:14:10.503776 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-sys\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.503820 master-0 kubenswrapper[8018]: E0217 15:14:10.503807 8018 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls podName:cdbde712-c8dd-4011-adcb-af895abce94c nodeName:}" failed. No retries permitted until 2026-02-17 15:14:11.003774483 +0000 UTC m=+683.756117533 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-b4xl8" (UID: "cdbde712-c8dd-4011-adcb-af895abce94c") : secret "openshift-state-metrics-tls" not found Feb 17 15:14:10.504173 master-0 kubenswrapper[8018]: I0217 15:14:10.504141 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-wtmp\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.504243 master-0 kubenswrapper[8018]: I0217 15:14:10.504174 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-textfile\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.504424 master-0 kubenswrapper[8018]: I0217 15:14:10.504385 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cdbde712-c8dd-4011-adcb-af895abce94c-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:14:10.504912 master-0 kubenswrapper[8018]: I0217 15:14:10.504876 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c435347a-ac01-46af-8192-9ef2d632bdfb-metrics-client-ca\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.505519 master-0 kubenswrapper[8018]: I0217 
15:14:10.505480 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.505593 master-0 kubenswrapper[8018]: I0217 15:14:10.505561 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.505641 master-0 kubenswrapper[8018]: I0217 15:14:10.505587 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.505815 master-0 kubenswrapper[8018]: I0217 15:14:10.505781 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:14:10.506021 master-0 kubenswrapper[8018]: I0217 15:14:10.505989 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.507079 master-0 kubenswrapper[8018]: I0217 15:14:10.507039 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.508767 master-0 kubenswrapper[8018]: I0217 15:14:10.508728 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.509799 master-0 kubenswrapper[8018]: I0217 15:14:10.509752 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-tls\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.525608 master-0 kubenswrapper[8018]: I0217 15:14:10.525533 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rcj2\" (UniqueName: \"kubernetes.io/projected/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-api-access-4rcj2\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.528902 master-0 kubenswrapper[8018]: 
I0217 15:14:10.528861 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5w6f\" (UniqueName: \"kubernetes.io/projected/c435347a-ac01-46af-8192-9ef2d632bdfb-kube-api-access-j5w6f\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.537949 master-0 kubenswrapper[8018]: I0217 15:14:10.537853 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fj8w\" (UniqueName: \"kubernetes.io/projected/cdbde712-c8dd-4011-adcb-af895abce94c-kube-api-access-9fj8w\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:14:10.660639 master-0 kubenswrapper[8018]: I0217 15:14:10.660570 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:14:10.675190 master-0 kubenswrapper[8018]: I0217 15:14:10.675154 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:14:10.712199 master-0 kubenswrapper[8018]: I0217 15:14:10.712145 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rttp2" event={"ID":"c435347a-ac01-46af-8192-9ef2d632bdfb","Type":"ContainerStarted","Data":"30157c99e347dac95082456d5e90aaa231761068887f6a65d5089463dbf44226"} Feb 17 15:14:10.982993 master-0 kubenswrapper[8018]: I0217 15:14:10.982890 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:10.982993 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:10.982993 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:14:10.982993 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:14:10.983313 master-0 kubenswrapper[8018]: I0217 15:14:10.983006 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:11.011005 master-0 kubenswrapper[8018]: I0217 15:14:11.010914 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:14:11.014761 master-0 kubenswrapper[8018]: I0217 15:14:11.014698 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:14:11.232637 master-0 kubenswrapper[8018]: I0217 15:14:11.232524 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:14:11.647225 master-0 kubenswrapper[8018]: I0217 15:14:11.647076 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"] Feb 17 15:14:11.720847 master-0 kubenswrapper[8018]: I0217 15:14:11.720752 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" event={"ID":"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040","Type":"ContainerStarted","Data":"026610117c01997654c9e952b5a30927858c6efbfd458d75332f24ab296e1898"} Feb 17 15:14:11.983097 master-0 kubenswrapper[8018]: I0217 15:14:11.982911 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:11.983097 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:11.983097 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:14:11.983097 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:14:11.983097 master-0 kubenswrapper[8018]: I0217 15:14:11.983036 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:12.012168 master-0 kubenswrapper[8018]: I0217 15:14:12.005728 8018 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Feb 17 15:14:12.012168 master-0 kubenswrapper[8018]: I0217 15:14:12.010216 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 17 15:14:12.018333 master-0 kubenswrapper[8018]: I0217 15:14:12.015799 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 17 15:14:12.018333 master-0 kubenswrapper[8018]: I0217 15:14:12.016181 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-87grw"
Feb 17 15:14:12.140896 master-0 kubenswrapper[8018]: I0217 15:14:12.140853 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d356564-2127-4da8-9074-13dd40019e26-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"2d356564-2127-4da8-9074-13dd40019e26\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 17 15:14:12.141134 master-0 kubenswrapper[8018]: I0217 15:14:12.141117 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d356564-2127-4da8-9074-13dd40019e26-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"2d356564-2127-4da8-9074-13dd40019e26\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 17 15:14:12.141296 master-0 kubenswrapper[8018]: I0217 15:14:12.141277 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d356564-2127-4da8-9074-13dd40019e26-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"2d356564-2127-4da8-9074-13dd40019e26\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 17 15:14:12.243385 master-0 kubenswrapper[8018]: I0217 15:14:12.243233 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d356564-2127-4da8-9074-13dd40019e26-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"2d356564-2127-4da8-9074-13dd40019e26\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 17 15:14:12.244042 master-0 kubenswrapper[8018]: I0217 15:14:12.243996 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d356564-2127-4da8-9074-13dd40019e26-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"2d356564-2127-4da8-9074-13dd40019e26\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 17 15:14:12.244183 master-0 kubenswrapper[8018]: I0217 15:14:12.244084 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d356564-2127-4da8-9074-13dd40019e26-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"2d356564-2127-4da8-9074-13dd40019e26\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 17 15:14:12.244265 master-0 kubenswrapper[8018]: I0217 15:14:12.244189 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d356564-2127-4da8-9074-13dd40019e26-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"2d356564-2127-4da8-9074-13dd40019e26\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 17 15:14:12.244348 master-0 kubenswrapper[8018]: I0217 15:14:12.244274 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d356564-2127-4da8-9074-13dd40019e26-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"2d356564-2127-4da8-9074-13dd40019e26\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 17 15:14:12.463626 master-0 kubenswrapper[8018]: I0217 15:14:12.461729 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Feb 17 15:14:12.472109 master-0 kubenswrapper[8018]: I0217 15:14:12.472039 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8"]
Feb 17 15:14:12.736510 master-0 kubenswrapper[8018]: I0217 15:14:12.736367 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" event={"ID":"cdbde712-c8dd-4011-adcb-af895abce94c","Type":"ContainerStarted","Data":"2f38747bdec24188d4ffe8cfb159d9a08ab099ae4fe10c6fb530c6bc6745fe0f"}
Feb 17 15:14:12.849762 master-0 kubenswrapper[8018]: I0217 15:14:12.849658 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d356564-2127-4da8-9074-13dd40019e26-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"2d356564-2127-4da8-9074-13dd40019e26\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 17 15:14:12.936435 master-0 kubenswrapper[8018]: I0217 15:14:12.936339 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 17 15:14:12.982720 master-0 kubenswrapper[8018]: I0217 15:14:12.982654 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:12.982720 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:12.982720 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:12.982720 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:12.983068 master-0 kubenswrapper[8018]: I0217 15:14:12.982727 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:13.747734 master-0 kubenswrapper[8018]: I0217 15:14:13.747606 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" event={"ID":"cdbde712-c8dd-4011-adcb-af895abce94c","Type":"ContainerStarted","Data":"938a47af96c68e71d400d544dbfb3ecedf25ac6d3fdd60c653eb01b37f21885c"}
Feb 17 15:14:13.982622 master-0 kubenswrapper[8018]: I0217 15:14:13.982563 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:13.982622 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:13.982622 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:13.982622 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:13.982851 master-0 kubenswrapper[8018]: I0217 15:14:13.982635 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:14.757221 master-0 kubenswrapper[8018]: I0217 15:14:14.757169 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" event={"ID":"cdbde712-c8dd-4011-adcb-af895abce94c","Type":"ContainerStarted","Data":"af24ad3253a2b786610c20e5fcf43833b49dde4a8c60af16a1bcdbd2dd87abb8"}
Feb 17 15:14:14.794524 master-0 kubenswrapper[8018]: I0217 15:14:14.794438 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Feb 17 15:14:14.982924 master-0 kubenswrapper[8018]: I0217 15:14:14.982853 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:14.982924 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:14.982924 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:14.982924 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:14.983276 master-0 kubenswrapper[8018]: I0217 15:14:14.982953 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:15.551038 master-0 kubenswrapper[8018]: I0217 15:14:15.550978 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"]
Feb 17 15:14:15.552168 master-0 kubenswrapper[8018]: I0217 15:14:15.552133 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.555430 master-0 kubenswrapper[8018]: I0217 15:14:15.555375 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Feb 17 15:14:15.556007 master-0 kubenswrapper[8018]: I0217 15:14:15.555946 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-wc6mx"
Feb 17 15:14:15.556845 master-0 kubenswrapper[8018]: I0217 15:14:15.556213 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Feb 17 15:14:15.556845 master-0 kubenswrapper[8018]: I0217 15:14:15.556436 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Feb 17 15:14:15.556845 master-0 kubenswrapper[8018]: I0217 15:14:15.556661 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Feb 17 15:14:15.556845 master-0 kubenswrapper[8018]: I0217 15:14:15.556722 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Feb 17 15:14:15.560926 master-0 kubenswrapper[8018]: I0217 15:14:15.560883 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Feb 17 15:14:15.582121 master-0 kubenswrapper[8018]: I0217 15:14:15.580289 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"]
Feb 17 15:14:15.709167 master-0 kubenswrapper[8018]: I0217 15:14:15.709121 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.709167 master-0 kubenswrapper[8018]: I0217 15:14:15.709165 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.709328 master-0 kubenswrapper[8018]: I0217 15:14:15.709199 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.709328 master-0 kubenswrapper[8018]: I0217 15:14:15.709256 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.709328 master-0 kubenswrapper[8018]: I0217 15:14:15.709276 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.709328 master-0 kubenswrapper[8018]: I0217 15:14:15.709317 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-metrics-client-ca\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.709465 master-0 kubenswrapper[8018]: I0217 15:14:15.709350 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj92w\" (UniqueName: \"kubernetes.io/projected/8379aee6-f810-4e5f-b209-8f6cb5f87df0-kube-api-access-sj92w\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.709465 master-0 kubenswrapper[8018]: I0217 15:14:15.709376 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.766061 master-0 kubenswrapper[8018]: I0217 15:14:15.765990 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rttp2" event={"ID":"c435347a-ac01-46af-8192-9ef2d632bdfb","Type":"ContainerStarted","Data":"ad81a3d8018f32fa460ffaba8c0d9ddd5cc3830a37ff5ffabe629586df64d1c4"}
Feb 17 15:14:15.770212 master-0 kubenswrapper[8018]: I0217 15:14:15.769377 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"2d356564-2127-4da8-9074-13dd40019e26","Type":"ContainerStarted","Data":"55f2fdee336abd079fe6cac03e4ec155db892d11423020e530a6211947e92857"}
Feb 17 15:14:15.770212 master-0 kubenswrapper[8018]: I0217 15:14:15.769413 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"2d356564-2127-4da8-9074-13dd40019e26","Type":"ContainerStarted","Data":"1316976afbb94d8ce0d2bba9ea4633fbdd18ee35524ef1d44e0aa9fda6ea6d1d"}
Feb 17 15:14:15.802696 master-0 kubenswrapper[8018]: I0217 15:14:15.799875 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=4.7998545759999995 podStartE2EDuration="4.799854576s" podCreationTimestamp="2026-02-17 15:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:14:15.798246476 +0000 UTC m=+688.550589536" watchObservedRunningTime="2026-02-17 15:14:15.799854576 +0000 UTC m=+688.552197626"
Feb 17 15:14:15.815493 master-0 kubenswrapper[8018]: I0217 15:14:15.811297 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.815493 master-0 kubenswrapper[8018]: I0217 15:14:15.811353 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.815493 master-0 kubenswrapper[8018]: I0217 15:14:15.811381 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.815493 master-0 kubenswrapper[8018]: I0217 15:14:15.811434 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.815493 master-0 kubenswrapper[8018]: I0217 15:14:15.811633 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.815493 master-0 kubenswrapper[8018]: I0217 15:14:15.811768 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-metrics-client-ca\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.815493 master-0 kubenswrapper[8018]: I0217 15:14:15.811795 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj92w\" (UniqueName: \"kubernetes.io/projected/8379aee6-f810-4e5f-b209-8f6cb5f87df0-kube-api-access-sj92w\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.815493 master-0 kubenswrapper[8018]: I0217 15:14:15.811842 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.815493 master-0 kubenswrapper[8018]: I0217 15:14:15.812495 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.815493 master-0 kubenswrapper[8018]: I0217 15:14:15.813321 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-metrics-client-ca\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.815493 master-0 kubenswrapper[8018]: I0217 15:14:15.813624 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.815493 master-0 kubenswrapper[8018]: I0217 15:14:15.815253 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.816177 master-0 kubenswrapper[8018]: I0217 15:14:15.816047 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.818518 master-0 kubenswrapper[8018]: I0217 15:14:15.816479 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.818518 master-0 kubenswrapper[8018]: I0217 15:14:15.816560 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.830840 master-0 kubenswrapper[8018]: I0217 15:14:15.830785 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj92w\" (UniqueName: \"kubernetes.io/projected/8379aee6-f810-4e5f-b209-8f6cb5f87df0-kube-api-access-sj92w\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.979681 master-0 kubenswrapper[8018]: I0217 15:14:15.979579 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:14:15.981496 master-0 kubenswrapper[8018]: I0217 15:14:15.981464 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:15.981496 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:15.981496 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:15.981496 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:15.981669 master-0 kubenswrapper[8018]: I0217 15:14:15.981513 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:16.449416 master-0 kubenswrapper[8018]: I0217 15:14:16.448834 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"]
Feb 17 15:14:16.470907 master-0 kubenswrapper[8018]: W0217 15:14:16.470856 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8379aee6_f810_4e5f_b209_8f6cb5f87df0.slice/crio-c73742e20a24cd489609b6484bb7dd86a6b3725d2919288b5ca15357b170f83e WatchSource:0}: Error finding container c73742e20a24cd489609b6484bb7dd86a6b3725d2919288b5ca15357b170f83e: Status 404 returned error can't find the container with id c73742e20a24cd489609b6484bb7dd86a6b3725d2919288b5ca15357b170f83e
Feb 17 15:14:16.779122 master-0 kubenswrapper[8018]: I0217 15:14:16.778990 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" event={"ID":"8379aee6-f810-4e5f-b209-8f6cb5f87df0","Type":"ContainerStarted","Data":"c73742e20a24cd489609b6484bb7dd86a6b3725d2919288b5ca15357b170f83e"}
Feb 17 15:14:16.781319 master-0 kubenswrapper[8018]: I0217 15:14:16.781263 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7c64d55f8-fzfsp_6b25a72d-965f-415c-abc9-09612859e9e0/multus-admission-controller/0.log"
Feb 17 15:14:16.781398 master-0 kubenswrapper[8018]: I0217 15:14:16.781374 8018 generic.go:334] "Generic (PLEG): container finished" podID="6b25a72d-965f-415c-abc9-09612859e9e0" containerID="d03b5b01eebc01049f52508b9cb6557295a244f02f7925b66faf26d4de1e8764" exitCode=137
Feb 17 15:14:16.781632 master-0 kubenswrapper[8018]: I0217 15:14:16.781528 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" event={"ID":"6b25a72d-965f-415c-abc9-09612859e9e0","Type":"ContainerDied","Data":"d03b5b01eebc01049f52508b9cb6557295a244f02f7925b66faf26d4de1e8764"}
Feb 17 15:14:16.783523 master-0 kubenswrapper[8018]: I0217 15:14:16.783480 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" event={"ID":"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040","Type":"ContainerStarted","Data":"fc4eee118a51f1e27736e9a0cbfa7f021e469090214775b8f7f70a19f3a48ea1"}
Feb 17 15:14:16.783523 master-0 kubenswrapper[8018]: I0217 15:14:16.783523 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" event={"ID":"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040","Type":"ContainerStarted","Data":"7f974650844428538fc93d41ba4683bc5f18e30ccfdd07b3cfdd59b4995527af"}
Feb 17 15:14:16.783653 master-0 kubenswrapper[8018]: I0217 15:14:16.783534 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" event={"ID":"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040","Type":"ContainerStarted","Data":"a6189ac9a6b12830828937bb61a57fc59e733e6949dfea3fac92735c92e34dd2"}
Feb 17 15:14:16.785326 master-0 kubenswrapper[8018]: I0217 15:14:16.785268 8018 generic.go:334] "Generic (PLEG): container finished" podID="c435347a-ac01-46af-8192-9ef2d632bdfb" containerID="ad81a3d8018f32fa460ffaba8c0d9ddd5cc3830a37ff5ffabe629586df64d1c4" exitCode=0
Feb 17 15:14:16.785407 master-0 kubenswrapper[8018]: I0217 15:14:16.785330 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rttp2" event={"ID":"c435347a-ac01-46af-8192-9ef2d632bdfb","Type":"ContainerDied","Data":"ad81a3d8018f32fa460ffaba8c0d9ddd5cc3830a37ff5ffabe629586df64d1c4"}
Feb 17 15:14:16.803181 master-0 kubenswrapper[8018]: I0217 15:14:16.803098 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" podStartSLOduration=2.383186811 podStartE2EDuration="6.803071919s" podCreationTimestamp="2026-02-17 15:14:10 +0000 UTC" firstStartedPulling="2026-02-17 15:14:11.649338815 +0000 UTC m=+684.401681905" lastFinishedPulling="2026-02-17 15:14:16.069223963 +0000 UTC m=+688.821567013" observedRunningTime="2026-02-17 15:14:16.802940486 +0000 UTC m=+689.555283536" watchObservedRunningTime="2026-02-17 15:14:16.803071919 +0000 UTC m=+689.555414969"
Feb 17 15:14:16.983295 master-0 kubenswrapper[8018]: I0217 15:14:16.983238 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:16.983295 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:16.983295 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:16.983295 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:16.983673 master-0 kubenswrapper[8018]: I0217 15:14:16.983299 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:17.020083 master-0 kubenswrapper[8018]: I0217 15:14:17.020036 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7c64d55f8-fzfsp_6b25a72d-965f-415c-abc9-09612859e9e0/multus-admission-controller/0.log"
Feb 17 15:14:17.020283 master-0 kubenswrapper[8018]: I0217 15:14:17.020102 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"
Feb 17 15:14:17.132530 master-0 kubenswrapper[8018]: I0217 15:14:17.132484 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") pod \"6b25a72d-965f-415c-abc9-09612859e9e0\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") "
Feb 17 15:14:17.132769 master-0 kubenswrapper[8018]: I0217 15:14:17.132694 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv46m\" (UniqueName: \"kubernetes.io/projected/6b25a72d-965f-415c-abc9-09612859e9e0-kube-api-access-fv46m\") pod \"6b25a72d-965f-415c-abc9-09612859e9e0\" (UID: \"6b25a72d-965f-415c-abc9-09612859e9e0\") "
Feb 17 15:14:17.136773 master-0 kubenswrapper[8018]: I0217 15:14:17.136712 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "6b25a72d-965f-415c-abc9-09612859e9e0" (UID: "6b25a72d-965f-415c-abc9-09612859e9e0"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:14:17.139310 master-0 kubenswrapper[8018]: I0217 15:14:17.139197 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b25a72d-965f-415c-abc9-09612859e9e0-kube-api-access-fv46m" (OuterVolumeSpecName: "kube-api-access-fv46m") pod "6b25a72d-965f-415c-abc9-09612859e9e0" (UID: "6b25a72d-965f-415c-abc9-09612859e9e0"). InnerVolumeSpecName "kube-api-access-fv46m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:14:17.234172 master-0 kubenswrapper[8018]: I0217 15:14:17.234068 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv46m\" (UniqueName: \"kubernetes.io/projected/6b25a72d-965f-415c-abc9-09612859e9e0-kube-api-access-fv46m\") on node \"master-0\" DevicePath \"\""
Feb 17 15:14:17.234172 master-0 kubenswrapper[8018]: I0217 15:14:17.234109 8018 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b25a72d-965f-415c-abc9-09612859e9e0-webhook-certs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:14:17.352833 master-0 kubenswrapper[8018]: I0217 15:14:17.352622 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-f94977f65-sgf5z"]
Feb 17 15:14:17.353412 master-0 kubenswrapper[8018]: E0217 15:14:17.353392 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b25a72d-965f-415c-abc9-09612859e9e0" containerName="multus-admission-controller"
Feb 17 15:14:17.353522 master-0 kubenswrapper[8018]: I0217 15:14:17.353509 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b25a72d-965f-415c-abc9-09612859e9e0" containerName="multus-admission-controller"
Feb 17 15:14:17.356268 master-0 kubenswrapper[8018]: E0217 15:14:17.353736 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b25a72d-965f-415c-abc9-09612859e9e0" containerName="kube-rbac-proxy"
Feb 17 15:14:17.356443 master-0 kubenswrapper[8018]: I0217 15:14:17.356430 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b25a72d-965f-415c-abc9-09612859e9e0" containerName="kube-rbac-proxy"
Feb 17 15:14:17.356735 master-0 kubenswrapper[8018]: I0217 15:14:17.356721 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b25a72d-965f-415c-abc9-09612859e9e0" containerName="multus-admission-controller"
Feb 17 15:14:17.356838 master-0 kubenswrapper[8018]: I0217 15:14:17.356827 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b25a72d-965f-415c-abc9-09612859e9e0" containerName="kube-rbac-proxy"
Feb 17 15:14:17.357500 master-0 kubenswrapper[8018]: I0217 15:14:17.357484 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:17.361376 master-0 kubenswrapper[8018]: I0217 15:14:17.361308 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 17 15:14:17.361376 master-0 kubenswrapper[8018]: I0217 15:14:17.361352 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Feb 17 15:14:17.361555 master-0 kubenswrapper[8018]: I0217 15:14:17.361533 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 17 15:14:17.361797 master-0 kubenswrapper[8018]: I0217 15:14:17.361745 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-gbdz4"
Feb 17 15:14:17.361898 master-0 kubenswrapper[8018]: I0217 15:14:17.361874 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-aaauri1gstf68"
Feb 17 15:14:17.361961 master-0 kubenswrapper[8018]: I0217 15:14:17.361766 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 17 15:14:17.390846 master-0 kubenswrapper[8018]: I0217 15:14:17.390801 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-f94977f65-sgf5z"]
Feb 17 15:14:17.540798 master-0 kubenswrapper[8018]: I0217 15:14:17.539388 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:17.540798 master-0 kubenswrapper[8018]: I0217 15:14:17.539555 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:17.540798 master-0 kubenswrapper[8018]: I0217 15:14:17.539814 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:17.540798 master-0 kubenswrapper[8018]: I0217 15:14:17.539870 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:17.540798 master-0 kubenswrapper[8018]: I0217 15:14:17.539897 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f54vt\" (UniqueName: \"kubernetes.io/projected/7c393109-8c98-4a73-be1a-608038e5d094-kube-api-access-f54vt\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:17.540798 master-0 kubenswrapper[8018]: I0217 15:14:17.540018 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:17.540798 master-0 kubenswrapper[8018]: I0217 15:14:17.540084 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7c393109-8c98-4a73-be1a-608038e5d094-audit-log\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:17.641348 master-0 kubenswrapper[8018]: I0217 15:14:17.641236 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:17.641348 master-0 kubenswrapper[8018]: I0217 15:14:17.641298 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:17.641653 master-0 kubenswrapper[8018]: I0217 15:14:17.641443 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:17.641947 master-0 kubenswrapper[8018]: I0217 15:14:17.641878 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f54vt\" (UniqueName: \"kubernetes.io/projected/7c393109-8c98-4a73-be1a-608038e5d094-kube-api-access-f54vt\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:17.641947 master-0 kubenswrapper[8018]: I0217 15:14:17.641921 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:17.642047 master-0 kubenswrapper[8018]: I0217 15:14:17.641953 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7c393109-8c98-4a73-be1a-608038e5d094-audit-log\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:17.642330 master-0 kubenswrapper[8018]: I0217 15:14:17.642290 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") "
pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:14:17.642611 master-0 kubenswrapper[8018]: I0217 15:14:17.642539 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:14:17.642926 master-0 kubenswrapper[8018]: I0217 15:14:17.642879 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7c393109-8c98-4a73-be1a-608038e5d094-audit-log\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:14:17.644035 master-0 kubenswrapper[8018]: I0217 15:14:17.643942 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:14:17.646328 master-0 kubenswrapper[8018]: I0217 15:14:17.646290 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:14:17.647192 master-0 kubenswrapper[8018]: I0217 15:14:17.647136 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:14:17.651106 master-0 kubenswrapper[8018]: I0217 15:14:17.651069 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:14:17.668916 master-0 kubenswrapper[8018]: I0217 15:14:17.668860 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f54vt\" (UniqueName: \"kubernetes.io/projected/7c393109-8c98-4a73-be1a-608038e5d094-kube-api-access-f54vt\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:14:17.717814 master-0 kubenswrapper[8018]: I0217 15:14:17.717769 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:14:17.794249 master-0 kubenswrapper[8018]: I0217 15:14:17.794183 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rttp2" event={"ID":"c435347a-ac01-46af-8192-9ef2d632bdfb","Type":"ContainerStarted","Data":"b639ef9b07c0956ea1c1a9c50ca14119093c199b116c34d4e71cdc6a1348cb51"} Feb 17 15:14:17.794249 master-0 kubenswrapper[8018]: I0217 15:14:17.794244 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rttp2" event={"ID":"c435347a-ac01-46af-8192-9ef2d632bdfb","Type":"ContainerStarted","Data":"a320c17bd7bad55a422b89f7f75752e70f061faad309627d0394fbfe17842f41"} Feb 17 15:14:17.798069 master-0 kubenswrapper[8018]: I0217 15:14:17.798006 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7c64d55f8-fzfsp_6b25a72d-965f-415c-abc9-09612859e9e0/multus-admission-controller/0.log" Feb 17 15:14:17.798283 master-0 kubenswrapper[8018]: I0217 15:14:17.798150 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" event={"ID":"6b25a72d-965f-415c-abc9-09612859e9e0","Type":"ContainerDied","Data":"cf71d0b2feed9834ef8b72ab6dd9daecd0f98a4f5152569a06e215023a03601e"} Feb 17 15:14:17.798426 master-0 kubenswrapper[8018]: I0217 15:14:17.798345 8018 scope.go:117] "RemoveContainer" containerID="58400ac8b210abe6d74d057999272a3e2cdb3a6a4ce0fbdbf1173716a460becc" Feb 17 15:14:17.798626 master-0 kubenswrapper[8018]: I0217 15:14:17.798550 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-fzfsp" Feb 17 15:14:17.813668 master-0 kubenswrapper[8018]: I0217 15:14:17.808585 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" event={"ID":"cdbde712-c8dd-4011-adcb-af895abce94c","Type":"ContainerStarted","Data":"adc399560c9bd5a64d5ac0ce379697c36fc11dd6f80434148e7917f01877bc55"} Feb 17 15:14:17.839860 master-0 kubenswrapper[8018]: I0217 15:14:17.839800 8018 scope.go:117] "RemoveContainer" containerID="d03b5b01eebc01049f52508b9cb6557295a244f02f7925b66faf26d4de1e8764" Feb 17 15:14:17.850866 master-0 kubenswrapper[8018]: I0217 15:14:17.849950 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-rttp2" podStartSLOduration=3.050631822 podStartE2EDuration="7.849933796s" podCreationTimestamp="2026-02-17 15:14:10 +0000 UTC" firstStartedPulling="2026-02-17 15:14:10.694035594 +0000 UTC m=+683.446378654" lastFinishedPulling="2026-02-17 15:14:15.493337588 +0000 UTC m=+688.245680628" observedRunningTime="2026-02-17 15:14:17.819701818 +0000 UTC m=+690.572044898" watchObservedRunningTime="2026-02-17 15:14:17.849933796 +0000 UTC m=+690.602276846" Feb 17 15:14:17.862061 master-0 kubenswrapper[8018]: I0217 15:14:17.862013 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"] Feb 17 15:14:17.878300 master-0 kubenswrapper[8018]: I0217 15:14:17.878207 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-fzfsp"] Feb 17 15:14:17.884907 master-0 kubenswrapper[8018]: I0217 15:14:17.884723 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" podStartSLOduration=5.909154138 podStartE2EDuration="7.884704568s" podCreationTimestamp="2026-02-17 15:14:10 +0000 
UTC" firstStartedPulling="2026-02-17 15:14:14.985384168 +0000 UTC m=+687.737727218" lastFinishedPulling="2026-02-17 15:14:16.960934598 +0000 UTC m=+689.713277648" observedRunningTime="2026-02-17 15:14:17.882322508 +0000 UTC m=+690.634665558" watchObservedRunningTime="2026-02-17 15:14:17.884704568 +0000 UTC m=+690.637047618" Feb 17 15:14:17.983334 master-0 kubenswrapper[8018]: I0217 15:14:17.982443 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:17.983334 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:17.983334 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:14:17.983334 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:14:17.983334 master-0 kubenswrapper[8018]: I0217 15:14:17.982548 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:18.164270 master-0 kubenswrapper[8018]: I0217 15:14:18.164210 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-f94977f65-sgf5z"] Feb 17 15:14:18.285616 master-0 kubenswrapper[8018]: I0217 15:14:18.285496 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 17 15:14:18.285807 master-0 kubenswrapper[8018]: I0217 15:14:18.285712 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="2d356564-2127-4da8-9074-13dd40019e26" containerName="installer" containerID="cri-o://55f2fdee336abd079fe6cac03e4ec155db892d11423020e530a6211947e92857" gracePeriod=30 Feb 17 
15:14:18.603757 master-0 kubenswrapper[8018]: W0217 15:14:18.603565 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c393109_8c98_4a73_be1a_608038e5d094.slice/crio-80a35c92c437f32b29f410d19a1ce0763e9f007a6c4df0b00fdf0704012a2c09 WatchSource:0}: Error finding container 80a35c92c437f32b29f410d19a1ce0763e9f007a6c4df0b00fdf0704012a2c09: Status 404 returned error can't find the container with id 80a35c92c437f32b29f410d19a1ce0763e9f007a6c4df0b00fdf0704012a2c09 Feb 17 15:14:18.815577 master-0 kubenswrapper[8018]: I0217 15:14:18.815529 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" event={"ID":"7c393109-8c98-4a73-be1a-608038e5d094","Type":"ContainerStarted","Data":"80a35c92c437f32b29f410d19a1ce0763e9f007a6c4df0b00fdf0704012a2c09"} Feb 17 15:14:18.983679 master-0 kubenswrapper[8018]: I0217 15:14:18.983592 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:18.983679 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:18.983679 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:14:18.983679 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:14:18.983679 master-0 kubenswrapper[8018]: I0217 15:14:18.983672 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:19.453107 master-0 kubenswrapper[8018]: I0217 15:14:19.453028 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b25a72d-965f-415c-abc9-09612859e9e0" 
path="/var/lib/kubelet/pods/6b25a72d-965f-415c-abc9-09612859e9e0/volumes" Feb 17 15:14:19.823477 master-0 kubenswrapper[8018]: I0217 15:14:19.823341 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" event={"ID":"8379aee6-f810-4e5f-b209-8f6cb5f87df0","Type":"ContainerStarted","Data":"d8f5d8e5601a1e0de83d6a922182ed26b2fc744ebae08cdcc7739ae26257bd02"} Feb 17 15:14:19.982047 master-0 kubenswrapper[8018]: I0217 15:14:19.981996 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:19.982047 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:19.982047 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:14:19.982047 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:14:19.982293 master-0 kubenswrapper[8018]: I0217 15:14:19.982067 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:20.983174 master-0 kubenswrapper[8018]: I0217 15:14:20.983110 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:20.983174 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:20.983174 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:14:20.983174 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:14:20.984100 master-0 kubenswrapper[8018]: I0217 15:14:20.983193 8018 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:21.836779 master-0 kubenswrapper[8018]: I0217 15:14:21.836713 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_ee3c8b34-0581-45d6-a8ff-3959d5651eba/installer/0.log" Feb 17 15:14:21.836779 master-0 kubenswrapper[8018]: I0217 15:14:21.836776 8018 generic.go:334] "Generic (PLEG): container finished" podID="ee3c8b34-0581-45d6-a8ff-3959d5651eba" containerID="e7703c1f1874a39177afc2874af2b734f2df8f64b07f87c0d3a00c3a8993072f" exitCode=1 Feb 17 15:14:21.837053 master-0 kubenswrapper[8018]: I0217 15:14:21.836826 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"ee3c8b34-0581-45d6-a8ff-3959d5651eba","Type":"ContainerDied","Data":"e7703c1f1874a39177afc2874af2b734f2df8f64b07f87c0d3a00c3a8993072f"} Feb 17 15:14:21.838806 master-0 kubenswrapper[8018]: I0217 15:14:21.838765 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" event={"ID":"8379aee6-f810-4e5f-b209-8f6cb5f87df0","Type":"ContainerStarted","Data":"17ed6cea7264bf0a4aee500a4d88ade7ea2777ab27aa21f615eafa009fe91ae7"} Feb 17 15:14:21.838882 master-0 kubenswrapper[8018]: I0217 15:14:21.838809 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" event={"ID":"8379aee6-f810-4e5f-b209-8f6cb5f87df0","Type":"ContainerStarted","Data":"814f18394f5d77f7d7fe55ef10f2d92ca387fc05357af1309dc48dc0fb7b66a7"} Feb 17 15:14:21.842117 master-0 kubenswrapper[8018]: I0217 15:14:21.842062 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" 
event={"ID":"7c393109-8c98-4a73-be1a-608038e5d094","Type":"ContainerStarted","Data":"f7aa6f291b153a21a0df697b856ff7c2ab858d591159344f0d74c325321910e3"} Feb 17 15:14:21.872966 master-0 kubenswrapper[8018]: I0217 15:14:21.872507 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" podStartSLOduration=2.556195418 podStartE2EDuration="6.87249118s" podCreationTimestamp="2026-02-17 15:14:15 +0000 UTC" firstStartedPulling="2026-02-17 15:14:16.476033616 +0000 UTC m=+689.228376686" lastFinishedPulling="2026-02-17 15:14:20.792329398 +0000 UTC m=+693.544672448" observedRunningTime="2026-02-17 15:14:21.865923667 +0000 UTC m=+694.618266727" watchObservedRunningTime="2026-02-17 15:14:21.87249118 +0000 UTC m=+694.624834230" Feb 17 15:14:21.894717 master-0 kubenswrapper[8018]: I0217 15:14:21.894133 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" podStartSLOduration=2.7110587969999997 podStartE2EDuration="4.894116204s" podCreationTimestamp="2026-02-17 15:14:17 +0000 UTC" firstStartedPulling="2026-02-17 15:14:18.607935748 +0000 UTC m=+691.360278828" lastFinishedPulling="2026-02-17 15:14:20.790993145 +0000 UTC m=+693.543336235" observedRunningTime="2026-02-17 15:14:21.890688427 +0000 UTC m=+694.643031487" watchObservedRunningTime="2026-02-17 15:14:21.894116204 +0000 UTC m=+694.646459254" Feb 17 15:14:21.982197 master-0 kubenswrapper[8018]: I0217 15:14:21.982006 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:21.982197 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:21.982197 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:14:21.982197 master-0 kubenswrapper[8018]: 
healthz check failed Feb 17 15:14:21.982197 master-0 kubenswrapper[8018]: I0217 15:14:21.982084 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:22.123006 master-0 kubenswrapper[8018]: I0217 15:14:22.122933 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_ee3c8b34-0581-45d6-a8ff-3959d5651eba/installer/0.log" Feb 17 15:14:22.123533 master-0 kubenswrapper[8018]: I0217 15:14:22.123041 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 17 15:14:22.212944 master-0 kubenswrapper[8018]: I0217 15:14:22.212876 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee3c8b34-0581-45d6-a8ff-3959d5651eba-kube-api-access\") pod \"ee3c8b34-0581-45d6-a8ff-3959d5651eba\" (UID: \"ee3c8b34-0581-45d6-a8ff-3959d5651eba\") " Feb 17 15:14:22.213163 master-0 kubenswrapper[8018]: I0217 15:14:22.212968 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee3c8b34-0581-45d6-a8ff-3959d5651eba-var-lock\") pod \"ee3c8b34-0581-45d6-a8ff-3959d5651eba\" (UID: \"ee3c8b34-0581-45d6-a8ff-3959d5651eba\") " Feb 17 15:14:22.213163 master-0 kubenswrapper[8018]: I0217 15:14:22.213036 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee3c8b34-0581-45d6-a8ff-3959d5651eba-kubelet-dir\") pod \"ee3c8b34-0581-45d6-a8ff-3959d5651eba\" (UID: \"ee3c8b34-0581-45d6-a8ff-3959d5651eba\") " Feb 17 15:14:22.213310 master-0 kubenswrapper[8018]: I0217 15:14:22.213201 8018 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee3c8b34-0581-45d6-a8ff-3959d5651eba-var-lock" (OuterVolumeSpecName: "var-lock") pod "ee3c8b34-0581-45d6-a8ff-3959d5651eba" (UID: "ee3c8b34-0581-45d6-a8ff-3959d5651eba"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:14:22.213364 master-0 kubenswrapper[8018]: I0217 15:14:22.213246 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee3c8b34-0581-45d6-a8ff-3959d5651eba-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ee3c8b34-0581-45d6-a8ff-3959d5651eba" (UID: "ee3c8b34-0581-45d6-a8ff-3959d5651eba"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:14:22.218477 master-0 kubenswrapper[8018]: I0217 15:14:22.218398 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee3c8b34-0581-45d6-a8ff-3959d5651eba-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ee3c8b34-0581-45d6-a8ff-3959d5651eba" (UID: "ee3c8b34-0581-45d6-a8ff-3959d5651eba"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:14:22.314998 master-0 kubenswrapper[8018]: I0217 15:14:22.314917 8018 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee3c8b34-0581-45d6-a8ff-3959d5651eba-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:14:22.314998 master-0 kubenswrapper[8018]: I0217 15:14:22.314968 8018 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee3c8b34-0581-45d6-a8ff-3959d5651eba-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:14:22.314998 master-0 kubenswrapper[8018]: I0217 15:14:22.314989 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee3c8b34-0581-45d6-a8ff-3959d5651eba-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 17 15:14:22.440218 master-0 kubenswrapper[8018]: I0217 15:14:22.440033 8018 scope.go:117] "RemoveContainer" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade" Feb 17 15:14:22.440522 master-0 kubenswrapper[8018]: E0217 15:14:22.440356 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:14:22.851614 master-0 kubenswrapper[8018]: I0217 15:14:22.851553 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_ee3c8b34-0581-45d6-a8ff-3959d5651eba/installer/0.log" Feb 17 15:14:22.851816 master-0 kubenswrapper[8018]: I0217 15:14:22.851680 8018 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"ee3c8b34-0581-45d6-a8ff-3959d5651eba","Type":"ContainerDied","Data":"01b9ebc25f2991ff17b90c43ce2febf6d4d9453e3d9f6e9c0161bb2a2c624c42"} Feb 17 15:14:22.851816 master-0 kubenswrapper[8018]: I0217 15:14:22.851758 8018 scope.go:117] "RemoveContainer" containerID="e7703c1f1874a39177afc2874af2b734f2df8f64b07f87c0d3a00c3a8993072f" Feb 17 15:14:22.851984 master-0 kubenswrapper[8018]: I0217 15:14:22.851952 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 17 15:14:22.888173 master-0 kubenswrapper[8018]: I0217 15:14:22.888116 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 17 15:14:22.888507 master-0 kubenswrapper[8018]: E0217 15:14:22.888485 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3c8b34-0581-45d6-a8ff-3959d5651eba" containerName="installer" Feb 17 15:14:22.888566 master-0 kubenswrapper[8018]: I0217 15:14:22.888509 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3c8b34-0581-45d6-a8ff-3959d5651eba" containerName="installer" Feb 17 15:14:22.888697 master-0 kubenswrapper[8018]: I0217 15:14:22.888669 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3c8b34-0581-45d6-a8ff-3959d5651eba" containerName="installer" Feb 17 15:14:22.889185 master-0 kubenswrapper[8018]: I0217 15:14:22.889160 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 17 15:14:22.897437 master-0 kubenswrapper[8018]: I0217 15:14:22.897393 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 17 15:14:22.903028 master-0 kubenswrapper[8018]: I0217 15:14:22.902972 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 17 15:14:22.941497 master-0 kubenswrapper[8018]: I0217 15:14:22.941433 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 17 15:14:22.982478 master-0 kubenswrapper[8018]: I0217 15:14:22.982388 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:22.982478 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:22.982478 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:14:22.982478 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:14:22.982729 master-0 kubenswrapper[8018]: I0217 15:14:22.982491 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:23.030421 master-0 kubenswrapper[8018]: I0217 15:14:23.030334 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c360914-72ac-4400-83b1-2cfb3ad10f3b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 17 15:14:23.031589 master-0 kubenswrapper[8018]: I0217 
15:14:23.030861 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c360914-72ac-4400-83b1-2cfb3ad10f3b-var-lock\") pod \"installer-2-master-0\" (UID: \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 17 15:14:23.032144 master-0 kubenswrapper[8018]: I0217 15:14:23.032102 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c360914-72ac-4400-83b1-2cfb3ad10f3b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 17 15:14:23.134104 master-0 kubenswrapper[8018]: I0217 15:14:23.134002 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c360914-72ac-4400-83b1-2cfb3ad10f3b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 17 15:14:23.134964 master-0 kubenswrapper[8018]: I0217 15:14:23.134147 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c360914-72ac-4400-83b1-2cfb3ad10f3b-var-lock\") pod \"installer-2-master-0\" (UID: \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 17 15:14:23.134964 master-0 kubenswrapper[8018]: I0217 15:14:23.134222 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c360914-72ac-4400-83b1-2cfb3ad10f3b-var-lock\") pod \"installer-2-master-0\" (UID: \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 17 15:14:23.134964 master-0 kubenswrapper[8018]: I0217 15:14:23.134260 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c360914-72ac-4400-83b1-2cfb3ad10f3b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 17 15:14:23.134964 master-0 kubenswrapper[8018]: I0217 15:14:23.134141 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c360914-72ac-4400-83b1-2cfb3ad10f3b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 17 15:14:23.169711 master-0 kubenswrapper[8018]: I0217 15:14:23.169586 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c360914-72ac-4400-83b1-2cfb3ad10f3b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 17 15:14:23.220760 master-0 kubenswrapper[8018]: I0217 15:14:23.219715 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Feb 17 15:14:23.452337 master-0 kubenswrapper[8018]: I0217 15:14:23.452214 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee3c8b34-0581-45d6-a8ff-3959d5651eba" path="/var/lib/kubelet/pods/ee3c8b34-0581-45d6-a8ff-3959d5651eba/volumes"
Feb 17 15:14:23.675341 master-0 kubenswrapper[8018]: I0217 15:14:23.674296 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Feb 17 15:14:23.716936 master-0 kubenswrapper[8018]: W0217 15:14:23.716832 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0c360914_72ac_4400_83b1_2cfb3ad10f3b.slice/crio-847d0b3ef43c26f428b6f89391efb84649c6a5572ca012c4ce344eaebd768167 WatchSource:0}: Error finding container 847d0b3ef43c26f428b6f89391efb84649c6a5572ca012c4ce344eaebd768167: Status 404 returned error can't find the container with id 847d0b3ef43c26f428b6f89391efb84649c6a5572ca012c4ce344eaebd768167
Feb 17 15:14:23.864913 master-0 kubenswrapper[8018]: I0217 15:14:23.864847 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"0c360914-72ac-4400-83b1-2cfb3ad10f3b","Type":"ContainerStarted","Data":"847d0b3ef43c26f428b6f89391efb84649c6a5572ca012c4ce344eaebd768167"}
Feb 17 15:14:23.982123 master-0 kubenswrapper[8018]: I0217 15:14:23.982054 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:23.982123 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:23.982123 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:23.982123 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:23.982511 master-0 kubenswrapper[8018]: I0217 15:14:23.982140 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:24.873258 master-0 kubenswrapper[8018]: I0217 15:14:24.873203 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"0c360914-72ac-4400-83b1-2cfb3ad10f3b","Type":"ContainerStarted","Data":"36b549abcfeff24571cfa5a9f9adc7c9fb7611813e7951cc95b9bc105fb7185e"}
Feb 17 15:14:24.895767 master-0 kubenswrapper[8018]: I0217 15:14:24.895672 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.895647997 podStartE2EDuration="2.895647997s" podCreationTimestamp="2026-02-17 15:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:14:24.893058872 +0000 UTC m=+697.645401943" watchObservedRunningTime="2026-02-17 15:14:24.895647997 +0000 UTC m=+697.647991067"
Feb 17 15:14:24.982595 master-0 kubenswrapper[8018]: I0217 15:14:24.982481 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:24.982595 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:24.982595 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:24.982595 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:24.983082 master-0 kubenswrapper[8018]: I0217 15:14:24.982603 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:25.982744 master-0 kubenswrapper[8018]: I0217 15:14:25.982648 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:25.982744 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:25.982744 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:25.982744 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:25.983716 master-0 kubenswrapper[8018]: I0217 15:14:25.982745 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:26.983083 master-0 kubenswrapper[8018]: I0217 15:14:26.982989 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:26.983083 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:26.983083 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:26.983083 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:26.984363 master-0 kubenswrapper[8018]: I0217 15:14:26.983081 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:27.984578 master-0 kubenswrapper[8018]: I0217 15:14:27.984487 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:27.984578 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:27.984578 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:27.984578 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:27.984578 master-0 kubenswrapper[8018]: I0217 15:14:27.984572 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:28.984069 master-0 kubenswrapper[8018]: I0217 15:14:28.983976 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:28.984069 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:28.984069 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:28.984069 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:28.984635 master-0 kubenswrapper[8018]: I0217 15:14:28.984087 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:29.984866 master-0 kubenswrapper[8018]: I0217 15:14:29.984088 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:29.984866 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:29.984866 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:29.984866 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:29.984866 master-0 kubenswrapper[8018]: I0217 15:14:29.984243 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:30.983825 master-0 kubenswrapper[8018]: I0217 15:14:30.983732 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:30.983825 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:30.983825 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:30.983825 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:30.984282 master-0 kubenswrapper[8018]: I0217 15:14:30.983857 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:31.983880 master-0 kubenswrapper[8018]: I0217 15:14:31.983786 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:31.983880 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:31.983880 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:31.983880 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:31.984630 master-0 kubenswrapper[8018]: I0217 15:14:31.983928 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:32.983825 master-0 kubenswrapper[8018]: I0217 15:14:32.983748 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:32.983825 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:32.983825 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:32.983825 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:32.985139 master-0 kubenswrapper[8018]: I0217 15:14:32.983851 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:33.982580 master-0 kubenswrapper[8018]: I0217 15:14:33.982433 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:33.982580 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:33.982580 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:33.982580 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:33.983208 master-0 kubenswrapper[8018]: I0217 15:14:33.982595 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:34.982772 master-0 kubenswrapper[8018]: I0217 15:14:34.982637 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:34.982772 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:34.982772 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:34.982772 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:34.983914 master-0 kubenswrapper[8018]: I0217 15:14:34.982815 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:35.981777 master-0 kubenswrapper[8018]: I0217 15:14:35.981689 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:35.981777 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:35.981777 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:35.981777 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:35.981777 master-0 kubenswrapper[8018]: I0217 15:14:35.981770 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:36.982185 master-0 kubenswrapper[8018]: I0217 15:14:36.982111 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:36.982185 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:36.982185 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:36.982185 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:36.983371 master-0 kubenswrapper[8018]: I0217 15:14:36.982201 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:37.301072 master-0 kubenswrapper[8018]: I0217 15:14:37.300863 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Feb 17 15:14:37.301363 master-0 kubenswrapper[8018]: I0217 15:14:37.301198 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="0c360914-72ac-4400-83b1-2cfb3ad10f3b" containerName="installer" containerID="cri-o://36b549abcfeff24571cfa5a9f9adc7c9fb7611813e7951cc95b9bc105fb7185e" gracePeriod=30
Feb 17 15:14:37.447378 master-0 kubenswrapper[8018]: I0217 15:14:37.447293 8018 scope.go:117] "RemoveContainer" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade"
Feb 17 15:14:37.447709 master-0 kubenswrapper[8018]: E0217 15:14:37.447663 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c"
Feb 17 15:14:37.719548 master-0 kubenswrapper[8018]: I0217 15:14:37.718811 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:37.719548 master-0 kubenswrapper[8018]: I0217 15:14:37.718892 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:14:37.846601 master-0 kubenswrapper[8018]: I0217 15:14:37.846522 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_0c360914-72ac-4400-83b1-2cfb3ad10f3b/installer/0.log"
Feb 17 15:14:37.846842 master-0 kubenswrapper[8018]: I0217 15:14:37.846636 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Feb 17 15:14:37.979774 master-0 kubenswrapper[8018]: I0217 15:14:37.979663 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c360914-72ac-4400-83b1-2cfb3ad10f3b-kube-api-access\") pod \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\" (UID: \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\") "
Feb 17 15:14:37.979774 master-0 kubenswrapper[8018]: I0217 15:14:37.979768 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c360914-72ac-4400-83b1-2cfb3ad10f3b-kubelet-dir\") pod \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\" (UID: \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\") "
Feb 17 15:14:37.980176 master-0 kubenswrapper[8018]: I0217 15:14:37.979835 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c360914-72ac-4400-83b1-2cfb3ad10f3b-var-lock\") pod \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\" (UID: \"0c360914-72ac-4400-83b1-2cfb3ad10f3b\") "
Feb 17 15:14:37.980176 master-0 kubenswrapper[8018]: I0217 15:14:37.979937 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c360914-72ac-4400-83b1-2cfb3ad10f3b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0c360914-72ac-4400-83b1-2cfb3ad10f3b" (UID: "0c360914-72ac-4400-83b1-2cfb3ad10f3b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:14:37.980176 master-0 kubenswrapper[8018]: I0217 15:14:37.980087 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c360914-72ac-4400-83b1-2cfb3ad10f3b-var-lock" (OuterVolumeSpecName: "var-lock") pod "0c360914-72ac-4400-83b1-2cfb3ad10f3b" (UID: "0c360914-72ac-4400-83b1-2cfb3ad10f3b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:14:37.980767 master-0 kubenswrapper[8018]: I0217 15:14:37.980712 8018 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c360914-72ac-4400-83b1-2cfb3ad10f3b-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:14:37.980854 master-0 kubenswrapper[8018]: I0217 15:14:37.980778 8018 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c360914-72ac-4400-83b1-2cfb3ad10f3b-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 17 15:14:37.981746 master-0 kubenswrapper[8018]: I0217 15:14:37.981694 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_0c360914-72ac-4400-83b1-2cfb3ad10f3b/installer/0.log"
Feb 17 15:14:37.981855 master-0 kubenswrapper[8018]: I0217 15:14:37.981767 8018 generic.go:334] "Generic (PLEG): container finished" podID="0c360914-72ac-4400-83b1-2cfb3ad10f3b" containerID="36b549abcfeff24571cfa5a9f9adc7c9fb7611813e7951cc95b9bc105fb7185e" exitCode=1
Feb 17 15:14:37.981855 master-0 kubenswrapper[8018]: I0217 15:14:37.981812 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"0c360914-72ac-4400-83b1-2cfb3ad10f3b","Type":"ContainerDied","Data":"36b549abcfeff24571cfa5a9f9adc7c9fb7611813e7951cc95b9bc105fb7185e"}
Feb 17 15:14:37.981975 master-0 kubenswrapper[8018]: I0217 15:14:37.981884 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"0c360914-72ac-4400-83b1-2cfb3ad10f3b","Type":"ContainerDied","Data":"847d0b3ef43c26f428b6f89391efb84649c6a5572ca012c4ce344eaebd768167"}
Feb 17 15:14:37.981975 master-0 kubenswrapper[8018]: I0217 15:14:37.981943 8018 scope.go:117] "RemoveContainer" containerID="36b549abcfeff24571cfa5a9f9adc7c9fb7611813e7951cc95b9bc105fb7185e"
Feb 17 15:14:37.982271 master-0 kubenswrapper[8018]: I0217 15:14:37.982227 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Feb 17 15:14:37.983445 master-0 kubenswrapper[8018]: I0217 15:14:37.983385 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:37.983445 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:37.983445 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:37.983445 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:37.983713 master-0 kubenswrapper[8018]: I0217 15:14:37.983500 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:37.984162 master-0 kubenswrapper[8018]: I0217 15:14:37.984108 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c360914-72ac-4400-83b1-2cfb3ad10f3b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0c360914-72ac-4400-83b1-2cfb3ad10f3b" (UID: "0c360914-72ac-4400-83b1-2cfb3ad10f3b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:14:38.035817 master-0 kubenswrapper[8018]: I0217 15:14:38.034687 8018 scope.go:117] "RemoveContainer" containerID="36b549abcfeff24571cfa5a9f9adc7c9fb7611813e7951cc95b9bc105fb7185e"
Feb 17 15:14:38.035817 master-0 kubenswrapper[8018]: E0217 15:14:38.035515 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b549abcfeff24571cfa5a9f9adc7c9fb7611813e7951cc95b9bc105fb7185e\": container with ID starting with 36b549abcfeff24571cfa5a9f9adc7c9fb7611813e7951cc95b9bc105fb7185e not found: ID does not exist" containerID="36b549abcfeff24571cfa5a9f9adc7c9fb7611813e7951cc95b9bc105fb7185e"
Feb 17 15:14:38.035817 master-0 kubenswrapper[8018]: I0217 15:14:38.035590 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b549abcfeff24571cfa5a9f9adc7c9fb7611813e7951cc95b9bc105fb7185e"} err="failed to get container status \"36b549abcfeff24571cfa5a9f9adc7c9fb7611813e7951cc95b9bc105fb7185e\": rpc error: code = NotFound desc = could not find container \"36b549abcfeff24571cfa5a9f9adc7c9fb7611813e7951cc95b9bc105fb7185e\": container with ID starting with 36b549abcfeff24571cfa5a9f9adc7c9fb7611813e7951cc95b9bc105fb7185e not found: ID does not exist"
Feb 17 15:14:38.082519 master-0 kubenswrapper[8018]: I0217 15:14:38.082284 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c360914-72ac-4400-83b1-2cfb3ad10f3b-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 17 15:14:38.344775 master-0 kubenswrapper[8018]: I0217 15:14:38.344579 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Feb 17 15:14:38.351347 master-0 kubenswrapper[8018]: I0217 15:14:38.351281 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Feb 17 15:14:38.983122 master-0 kubenswrapper[8018]: I0217 15:14:38.983021 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:38.983122 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:38.983122 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:38.983122 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:38.983122 master-0 kubenswrapper[8018]: I0217 15:14:38.983100 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:39.456018 master-0 kubenswrapper[8018]: I0217 15:14:39.455946 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c360914-72ac-4400-83b1-2cfb3ad10f3b" path="/var/lib/kubelet/pods/0c360914-72ac-4400-83b1-2cfb3ad10f3b/volumes"
Feb 17 15:14:39.982860 master-0 kubenswrapper[8018]: I0217 15:14:39.982786 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:39.982860 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:39.982860 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:39.982860 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:39.983594 master-0 kubenswrapper[8018]: I0217 15:14:39.982861 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:40.988753 master-0 kubenswrapper[8018]: I0217 15:14:40.988640 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:40.988753 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:40.988753 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:40.988753 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:40.988753 master-0 kubenswrapper[8018]: I0217 15:14:40.988733 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:41.690015 master-0 kubenswrapper[8018]: I0217 15:14:41.689951 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 17 15:14:41.690407 master-0 kubenswrapper[8018]: E0217 15:14:41.690372 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c360914-72ac-4400-83b1-2cfb3ad10f3b" containerName="installer"
Feb 17 15:14:41.690407 master-0 kubenswrapper[8018]: I0217 15:14:41.690405 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c360914-72ac-4400-83b1-2cfb3ad10f3b" containerName="installer"
Feb 17 15:14:41.690713 master-0 kubenswrapper[8018]: I0217 15:14:41.690685 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c360914-72ac-4400-83b1-2cfb3ad10f3b" containerName="installer"
Feb 17 15:14:41.691304 master-0 kubenswrapper[8018]: I0217 15:14:41.691277 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:14:41.703844 master-0 kubenswrapper[8018]: I0217 15:14:41.703780 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 17 15:14:41.845829 master-0 kubenswrapper[8018]: I0217 15:14:41.845753 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-var-lock\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:14:41.845829 master-0 kubenswrapper[8018]: I0217 15:14:41.845830 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:14:41.846108 master-0 kubenswrapper[8018]: I0217 15:14:41.845919 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:14:41.947376 master-0 kubenswrapper[8018]: I0217 15:14:41.947245 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-var-lock\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:14:41.947376 master-0 kubenswrapper[8018]: I0217 15:14:41.947295 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:14:41.947376 master-0 kubenswrapper[8018]: I0217 15:14:41.947357 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:14:41.947777 master-0 kubenswrapper[8018]: I0217 15:14:41.947504 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:14:41.948018 master-0 kubenswrapper[8018]: I0217 15:14:41.947969 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-var-lock\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:14:41.977402 master-0 kubenswrapper[8018]: I0217 15:14:41.976523 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:14:41.982573 master-0 kubenswrapper[8018]: I0217 15:14:41.982490 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:41.982573 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:41.982573 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:41.982573 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:41.982972 master-0 kubenswrapper[8018]: I0217 15:14:41.982597 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:42.008108 master-0 kubenswrapper[8018]: I0217 15:14:42.008013 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:14:42.488493 master-0 kubenswrapper[8018]: I0217 15:14:42.481427 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 17 15:14:42.503055 master-0 kubenswrapper[8018]: W0217 15:14:42.502999 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd3daf534_9a77_49c6_964f_d402c5d5a2ac.slice/crio-82581365f6f274c239792085af3cda355d57d00d3bb74c93451eabd859e47a2b WatchSource:0}: Error finding container 82581365f6f274c239792085af3cda355d57d00d3bb74c93451eabd859e47a2b: Status 404 returned error can't find the container with id 82581365f6f274c239792085af3cda355d57d00d3bb74c93451eabd859e47a2b
Feb 17 15:14:42.983128 master-0 kubenswrapper[8018]: I0217 15:14:42.983042 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:42.983128 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:42.983128 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:42.983128 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:42.983508 master-0 kubenswrapper[8018]: I0217 15:14:42.983132 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:43.027296 master-0 kubenswrapper[8018]: I0217 15:14:43.027249 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"d3daf534-9a77-49c6-964f-d402c5d5a2ac","Type":"ContainerStarted","Data":"82581365f6f274c239792085af3cda355d57d00d3bb74c93451eabd859e47a2b"}
Feb 17 15:14:43.982818 master-0 kubenswrapper[8018]: I0217 15:14:43.982747 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:14:43.982818 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld
Feb 17 15:14:43.982818 master-0 kubenswrapper[8018]: [+]process-running ok
Feb 17 15:14:43.982818 master-0 kubenswrapper[8018]: healthz check failed
Feb 17 15:14:43.982818 master-0 kubenswrapper[8018]: I0217 15:14:43.982817 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:14:44.040291 master-0 kubenswrapper[8018]: I0217 15:14:44.040119 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0"
event={"ID":"d3daf534-9a77-49c6-964f-d402c5d5a2ac","Type":"ContainerStarted","Data":"30149bc76c51652722af3b42f468490ae630728bcc0813cbee77856ab297e313"} Feb 17 15:14:44.983255 master-0 kubenswrapper[8018]: I0217 15:14:44.983122 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:44.983255 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:44.983255 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:14:44.983255 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:14:44.983255 master-0 kubenswrapper[8018]: I0217 15:14:44.983212 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:45.983829 master-0 kubenswrapper[8018]: I0217 15:14:45.983720 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:45.983829 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:45.983829 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:14:45.983829 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:14:45.985166 master-0 kubenswrapper[8018]: I0217 15:14:45.984634 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:46.538427 master-0 kubenswrapper[8018]: I0217 
15:14:46.538366 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_2d356564-2127-4da8-9074-13dd40019e26/installer/0.log" Feb 17 15:14:46.538679 master-0 kubenswrapper[8018]: I0217 15:14:46.538491 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 17 15:14:46.569954 master-0 kubenswrapper[8018]: I0217 15:14:46.569837 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=5.569816682 podStartE2EDuration="5.569816682s" podCreationTimestamp="2026-02-17 15:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:14:44.073213933 +0000 UTC m=+716.825557023" watchObservedRunningTime="2026-02-17 15:14:46.569816682 +0000 UTC m=+719.322159742" Feb 17 15:14:46.726168 master-0 kubenswrapper[8018]: I0217 15:14:46.726042 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d356564-2127-4da8-9074-13dd40019e26-kube-api-access\") pod \"2d356564-2127-4da8-9074-13dd40019e26\" (UID: \"2d356564-2127-4da8-9074-13dd40019e26\") " Feb 17 15:14:46.726392 master-0 kubenswrapper[8018]: I0217 15:14:46.726204 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d356564-2127-4da8-9074-13dd40019e26-kubelet-dir\") pod \"2d356564-2127-4da8-9074-13dd40019e26\" (UID: \"2d356564-2127-4da8-9074-13dd40019e26\") " Feb 17 15:14:46.726392 master-0 kubenswrapper[8018]: I0217 15:14:46.726315 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d356564-2127-4da8-9074-13dd40019e26-var-lock\") pod 
\"2d356564-2127-4da8-9074-13dd40019e26\" (UID: \"2d356564-2127-4da8-9074-13dd40019e26\") " Feb 17 15:14:46.726950 master-0 kubenswrapper[8018]: I0217 15:14:46.726585 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d356564-2127-4da8-9074-13dd40019e26-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2d356564-2127-4da8-9074-13dd40019e26" (UID: "2d356564-2127-4da8-9074-13dd40019e26"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:14:46.726950 master-0 kubenswrapper[8018]: I0217 15:14:46.726688 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d356564-2127-4da8-9074-13dd40019e26-var-lock" (OuterVolumeSpecName: "var-lock") pod "2d356564-2127-4da8-9074-13dd40019e26" (UID: "2d356564-2127-4da8-9074-13dd40019e26"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:14:46.726950 master-0 kubenswrapper[8018]: I0217 15:14:46.726905 8018 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d356564-2127-4da8-9074-13dd40019e26-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:14:46.726950 master-0 kubenswrapper[8018]: I0217 15:14:46.726926 8018 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d356564-2127-4da8-9074-13dd40019e26-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:14:46.731106 master-0 kubenswrapper[8018]: I0217 15:14:46.731037 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d356564-2127-4da8-9074-13dd40019e26-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2d356564-2127-4da8-9074-13dd40019e26" (UID: "2d356564-2127-4da8-9074-13dd40019e26"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:14:46.828728 master-0 kubenswrapper[8018]: I0217 15:14:46.828661 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d356564-2127-4da8-9074-13dd40019e26-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 17 15:14:46.982479 master-0 kubenswrapper[8018]: I0217 15:14:46.982335 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:46.982479 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:46.982479 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:14:46.982479 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:14:46.982479 master-0 kubenswrapper[8018]: I0217 15:14:46.982388 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:47.067629 master-0 kubenswrapper[8018]: I0217 15:14:47.067568 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_2d356564-2127-4da8-9074-13dd40019e26/installer/0.log" Feb 17 15:14:47.068224 master-0 kubenswrapper[8018]: I0217 15:14:47.067663 8018 generic.go:334] "Generic (PLEG): container finished" podID="2d356564-2127-4da8-9074-13dd40019e26" containerID="55f2fdee336abd079fe6cac03e4ec155db892d11423020e530a6211947e92857" exitCode=1 Feb 17 15:14:47.068224 master-0 kubenswrapper[8018]: I0217 15:14:47.067709 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" 
event={"ID":"2d356564-2127-4da8-9074-13dd40019e26","Type":"ContainerDied","Data":"55f2fdee336abd079fe6cac03e4ec155db892d11423020e530a6211947e92857"} Feb 17 15:14:47.068224 master-0 kubenswrapper[8018]: I0217 15:14:47.067765 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"2d356564-2127-4da8-9074-13dd40019e26","Type":"ContainerDied","Data":"1316976afbb94d8ce0d2bba9ea4633fbdd18ee35524ef1d44e0aa9fda6ea6d1d"} Feb 17 15:14:47.068224 master-0 kubenswrapper[8018]: I0217 15:14:47.067789 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 17 15:14:47.068224 master-0 kubenswrapper[8018]: I0217 15:14:47.067797 8018 scope.go:117] "RemoveContainer" containerID="55f2fdee336abd079fe6cac03e4ec155db892d11423020e530a6211947e92857" Feb 17 15:14:47.093611 master-0 kubenswrapper[8018]: I0217 15:14:47.093575 8018 scope.go:117] "RemoveContainer" containerID="55f2fdee336abd079fe6cac03e4ec155db892d11423020e530a6211947e92857" Feb 17 15:14:47.094620 master-0 kubenswrapper[8018]: E0217 15:14:47.094559 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f2fdee336abd079fe6cac03e4ec155db892d11423020e530a6211947e92857\": container with ID starting with 55f2fdee336abd079fe6cac03e4ec155db892d11423020e530a6211947e92857 not found: ID does not exist" containerID="55f2fdee336abd079fe6cac03e4ec155db892d11423020e530a6211947e92857" Feb 17 15:14:47.094686 master-0 kubenswrapper[8018]: I0217 15:14:47.094630 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f2fdee336abd079fe6cac03e4ec155db892d11423020e530a6211947e92857"} err="failed to get container status \"55f2fdee336abd079fe6cac03e4ec155db892d11423020e530a6211947e92857\": rpc error: code = NotFound desc = could not find container 
\"55f2fdee336abd079fe6cac03e4ec155db892d11423020e530a6211947e92857\": container with ID starting with 55f2fdee336abd079fe6cac03e4ec155db892d11423020e530a6211947e92857 not found: ID does not exist" Feb 17 15:14:47.124519 master-0 kubenswrapper[8018]: I0217 15:14:47.124415 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 17 15:14:47.128658 master-0 kubenswrapper[8018]: I0217 15:14:47.128620 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 17 15:14:47.449967 master-0 kubenswrapper[8018]: I0217 15:14:47.449909 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d356564-2127-4da8-9074-13dd40019e26" path="/var/lib/kubelet/pods/2d356564-2127-4da8-9074-13dd40019e26/volumes" Feb 17 15:14:47.982859 master-0 kubenswrapper[8018]: I0217 15:14:47.982756 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:47.982859 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:47.982859 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:14:47.982859 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:14:47.983802 master-0 kubenswrapper[8018]: I0217 15:14:47.982998 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:48.984258 master-0 kubenswrapper[8018]: I0217 15:14:48.984181 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:48.984258 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:48.984258 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:14:48.984258 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:14:48.985577 master-0 kubenswrapper[8018]: I0217 15:14:48.985517 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:49.984779 master-0 kubenswrapper[8018]: I0217 15:14:49.984681 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:49.984779 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:49.984779 master-0 kubenswrapper[8018]: [+]process-running ok Feb 17 15:14:49.984779 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:14:49.986005 master-0 kubenswrapper[8018]: I0217 15:14:49.984790 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:50.983207 master-0 kubenswrapper[8018]: I0217 15:14:50.983096 8018 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:14:50.983207 master-0 kubenswrapper[8018]: [-]has-synced failed: reason withheld Feb 17 15:14:50.983207 master-0 kubenswrapper[8018]: 
[+]process-running ok Feb 17 15:14:50.983207 master-0 kubenswrapper[8018]: healthz check failed Feb 17 15:14:50.983708 master-0 kubenswrapper[8018]: I0217 15:14:50.983232 8018 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:14:50.983708 master-0 kubenswrapper[8018]: I0217 15:14:50.983335 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:14:50.984327 master-0 kubenswrapper[8018]: I0217 15:14:50.984269 8018 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"860736c555e36eb357d7747028619f7c30730d9978a45e3a5c0a43cdd4bd9ba8"} pod="openshift-ingress/router-default-864ddd5f56-g8w2f" containerMessage="Container router failed startup probe, will be restarted" Feb 17 15:14:50.984412 master-0 kubenswrapper[8018]: I0217 15:14:50.984344 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" containerID="cri-o://860736c555e36eb357d7747028619f7c30730d9978a45e3a5c0a43cdd4bd9ba8" gracePeriod=3600 Feb 17 15:14:52.440330 master-0 kubenswrapper[8018]: I0217 15:14:52.440261 8018 scope.go:117] "RemoveContainer" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade" Feb 17 15:14:52.441229 master-0 kubenswrapper[8018]: E0217 15:14:52.440654 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy 
pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:14:54.596271 master-0 kubenswrapper[8018]: I0217 15:14:54.596174 8018 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 17 15:14:54.597193 master-0 kubenswrapper[8018]: I0217 15:14:54.596578 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" containerID="cri-o://2a42298516500c9bfa084c410231d2a27dee7fceed15779f0b27fd9d1349b2b0" gracePeriod=30 Feb 17 15:14:54.599324 master-0 kubenswrapper[8018]: I0217 15:14:54.599246 8018 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 17 15:14:54.599898 master-0 kubenswrapper[8018]: E0217 15:14:54.599847 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d356564-2127-4da8-9074-13dd40019e26" containerName="installer" Feb 17 15:14:54.599898 master-0 kubenswrapper[8018]: I0217 15:14:54.599886 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d356564-2127-4da8-9074-13dd40019e26" containerName="installer" Feb 17 15:14:54.600081 master-0 kubenswrapper[8018]: E0217 15:14:54.599919 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 17 15:14:54.600081 master-0 kubenswrapper[8018]: I0217 15:14:54.599934 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 17 15:14:54.600081 master-0 kubenswrapper[8018]: E0217 15:14:54.599982 8018 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 17 15:14:54.600081 master-0 kubenswrapper[8018]: I0217 15:14:54.599999 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 17 15:14:54.600568 master-0 kubenswrapper[8018]: I0217 15:14:54.600519 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 17 15:14:54.600568 master-0 kubenswrapper[8018]: I0217 15:14:54.600561 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d356564-2127-4da8-9074-13dd40019e26" containerName="installer" Feb 17 15:14:54.601064 master-0 kubenswrapper[8018]: I0217 15:14:54.601020 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 17 15:14:54.604450 master-0 kubenswrapper[8018]: I0217 15:14:54.604387 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:14:54.656951 master-0 kubenswrapper[8018]: I0217 15:14:54.656865 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:14:54.657166 master-0 kubenswrapper[8018]: I0217 15:14:54.656979 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:14:54.758190 master-0 kubenswrapper[8018]: I0217 15:14:54.758108 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:14:54.758337 master-0 kubenswrapper[8018]: I0217 15:14:54.758248 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:14:54.758493 master-0 kubenswrapper[8018]: I0217 15:14:54.758416 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:14:54.758594 master-0 kubenswrapper[8018]: I0217 15:14:54.758533 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:14:54.772888 master-0 kubenswrapper[8018]: I0217 15:14:54.772833 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:14:54.777610 master-0 kubenswrapper[8018]: I0217 15:14:54.777551 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 17 15:14:54.779862 master-0 kubenswrapper[8018]: I0217 15:14:54.779802 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 17 15:14:54.801875 master-0 kubenswrapper[8018]: I0217 15:14:54.801760 8018 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="34550cc8-e37c-417a-98ae-78b1477771db" Feb 17 15:14:54.813568 master-0 kubenswrapper[8018]: W0217 15:14:54.813506 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod952766c3a88fd12345a552f1277199f9.slice/crio-c5835c841de8851cc594c071b21f8e95885283a9272de7eff7fcffb6067e8c9a WatchSource:0}: Error finding container c5835c841de8851cc594c071b21f8e95885283a9272de7eff7fcffb6067e8c9a: Status 404 returned error can't find the container with id 
c5835c841de8851cc594c071b21f8e95885283a9272de7eff7fcffb6067e8c9a Feb 17 15:14:54.860743 master-0 kubenswrapper[8018]: I0217 15:14:54.860576 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"9460ca0802075a8a6a10d7b3e6052c4d\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " Feb 17 15:14:54.860743 master-0 kubenswrapper[8018]: I0217 15:14:54.860672 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"9460ca0802075a8a6a10d7b3e6052c4d\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " Feb 17 15:14:54.861324 master-0 kubenswrapper[8018]: I0217 15:14:54.861262 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets" (OuterVolumeSpecName: "secrets") pod "9460ca0802075a8a6a10d7b3e6052c4d" (UID: "9460ca0802075a8a6a10d7b3e6052c4d"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:14:54.861411 master-0 kubenswrapper[8018]: I0217 15:14:54.861345 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs" (OuterVolumeSpecName: "logs") pod "9460ca0802075a8a6a10d7b3e6052c4d" (UID: "9460ca0802075a8a6a10d7b3e6052c4d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:14:54.962401 master-0 kubenswrapper[8018]: I0217 15:14:54.962348 8018 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:14:54.962401 master-0 kubenswrapper[8018]: I0217 15:14:54.962389 8018 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") on node \"master-0\" DevicePath \"\"" Feb 17 15:14:55.136976 master-0 kubenswrapper[8018]: I0217 15:14:55.136915 8018 generic.go:334] "Generic (PLEG): container finished" podID="9460ca0802075a8a6a10d7b3e6052c4d" containerID="2a42298516500c9bfa084c410231d2a27dee7fceed15779f0b27fd9d1349b2b0" exitCode=0 Feb 17 15:14:55.137139 master-0 kubenswrapper[8018]: I0217 15:14:55.137057 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 17 15:14:55.137208 master-0 kubenswrapper[8018]: I0217 15:14:55.137134 8018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bb1dadfa9fa746e498f74fe7c1710620a7f822dde2a54f2002cb48a072a2427" Feb 17 15:14:55.137208 master-0 kubenswrapper[8018]: I0217 15:14:55.137168 8018 scope.go:117] "RemoveContainer" containerID="4944adde3c461c436bd108e43bf28aecebbade517fd0bca757eeee8a5f2db7dc" Feb 17 15:14:55.139877 master-0 kubenswrapper[8018]: I0217 15:14:55.139825 8018 generic.go:334] "Generic (PLEG): container finished" podID="69b452fc-5e99-4947-a722-e47a602ac144" containerID="6b14f00d7fcb44fb3296b9acab65074a4551627d03279119eef48d40dd8b3ddd" exitCode=0 Feb 17 15:14:55.140796 master-0 kubenswrapper[8018]: I0217 15:14:55.139941 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" 
event={"ID":"69b452fc-5e99-4947-a722-e47a602ac144","Type":"ContainerDied","Data":"6b14f00d7fcb44fb3296b9acab65074a4551627d03279119eef48d40dd8b3ddd"} Feb 17 15:14:55.143066 master-0 kubenswrapper[8018]: I0217 15:14:55.143007 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"091e8f02d5aa015a7796a6787006d66729863d826124745811b4e05f467eb821"} Feb 17 15:14:55.143066 master-0 kubenswrapper[8018]: I0217 15:14:55.143059 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"c5835c841de8851cc594c071b21f8e95885283a9272de7eff7fcffb6067e8c9a"} Feb 17 15:14:55.460048 master-0 kubenswrapper[8018]: I0217 15:14:55.459952 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9460ca0802075a8a6a10d7b3e6052c4d" path="/var/lib/kubelet/pods/9460ca0802075a8a6a10d7b3e6052c4d/volumes" Feb 17 15:14:55.460671 master-0 kubenswrapper[8018]: I0217 15:14:55.460619 8018 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 17 15:14:55.474380 master-0 kubenswrapper[8018]: I0217 15:14:55.474318 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 17 15:14:55.474662 master-0 kubenswrapper[8018]: I0217 15:14:55.474389 8018 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="34550cc8-e37c-417a-98ae-78b1477771db" Feb 17 15:14:55.477840 master-0 kubenswrapper[8018]: I0217 15:14:55.477780 8018 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 17 15:14:55.477840 master-0 kubenswrapper[8018]: I0217 15:14:55.477823 8018 kubelet.go:2673] 
"Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="34550cc8-e37c-417a-98ae-78b1477771db" Feb 17 15:14:56.152250 master-0 kubenswrapper[8018]: I0217 15:14:56.152166 8018 generic.go:334] "Generic (PLEG): container finished" podID="952766c3a88fd12345a552f1277199f9" containerID="091e8f02d5aa015a7796a6787006d66729863d826124745811b4e05f467eb821" exitCode=0 Feb 17 15:14:56.152831 master-0 kubenswrapper[8018]: I0217 15:14:56.152263 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerDied","Data":"091e8f02d5aa015a7796a6787006d66729863d826124745811b4e05f467eb821"} Feb 17 15:14:56.152831 master-0 kubenswrapper[8018]: I0217 15:14:56.152304 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"5591dc378b699313a005026d26c38a2b4e16d14b25114eea56b910683dfe3933"} Feb 17 15:14:56.152831 master-0 kubenswrapper[8018]: I0217 15:14:56.152327 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"f916d77fcaa30da997b385ef7ac42b673154c0b050a34bbee0b669498d494e0d"} Feb 17 15:14:56.152831 master-0 kubenswrapper[8018]: I0217 15:14:56.152346 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"21c7989a4696fed50634740602b415534cf6eda5f4caedd9c5df524bd3173387"} Feb 17 15:14:56.153721 master-0 kubenswrapper[8018]: I0217 15:14:56.153680 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
Feb 17 15:14:56.191489 master-0 kubenswrapper[8018]: I0217 15:14:56.191259 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.191234493 podStartE2EDuration="2.191234493s" podCreationTimestamp="2026-02-17 15:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:14:56.185844308 +0000 UTC m=+728.938187388" watchObservedRunningTime="2026-02-17 15:14:56.191234493 +0000 UTC m=+728.943577543" Feb 17 15:14:56.544191 master-0 kubenswrapper[8018]: I0217 15:14:56.544042 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 17 15:14:56.695158 master-0 kubenswrapper[8018]: I0217 15:14:56.695065 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69b452fc-5e99-4947-a722-e47a602ac144-kubelet-dir\") pod \"69b452fc-5e99-4947-a722-e47a602ac144\" (UID: \"69b452fc-5e99-4947-a722-e47a602ac144\") " Feb 17 15:14:56.695158 master-0 kubenswrapper[8018]: I0217 15:14:56.695147 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69b452fc-5e99-4947-a722-e47a602ac144-var-lock\") pod \"69b452fc-5e99-4947-a722-e47a602ac144\" (UID: \"69b452fc-5e99-4947-a722-e47a602ac144\") " Feb 17 15:14:56.695528 master-0 kubenswrapper[8018]: I0217 15:14:56.695286 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69b452fc-5e99-4947-a722-e47a602ac144-kube-api-access\") pod \"69b452fc-5e99-4947-a722-e47a602ac144\" (UID: \"69b452fc-5e99-4947-a722-e47a602ac144\") " Feb 17 15:14:56.695528 master-0 kubenswrapper[8018]: I0217 15:14:56.695287 8018 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69b452fc-5e99-4947-a722-e47a602ac144-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "69b452fc-5e99-4947-a722-e47a602ac144" (UID: "69b452fc-5e99-4947-a722-e47a602ac144"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:14:56.695528 master-0 kubenswrapper[8018]: I0217 15:14:56.695353 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69b452fc-5e99-4947-a722-e47a602ac144-var-lock" (OuterVolumeSpecName: "var-lock") pod "69b452fc-5e99-4947-a722-e47a602ac144" (UID: "69b452fc-5e99-4947-a722-e47a602ac144"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:14:56.695964 master-0 kubenswrapper[8018]: I0217 15:14:56.695916 8018 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69b452fc-5e99-4947-a722-e47a602ac144-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:14:56.695964 master-0 kubenswrapper[8018]: I0217 15:14:56.695956 8018 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69b452fc-5e99-4947-a722-e47a602ac144-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:14:56.700807 master-0 kubenswrapper[8018]: I0217 15:14:56.700739 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b452fc-5e99-4947-a722-e47a602ac144-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "69b452fc-5e99-4947-a722-e47a602ac144" (UID: "69b452fc-5e99-4947-a722-e47a602ac144"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:14:56.798151 master-0 kubenswrapper[8018]: I0217 15:14:56.797972 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69b452fc-5e99-4947-a722-e47a602ac144-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 17 15:14:57.173077 master-0 kubenswrapper[8018]: I0217 15:14:57.172990 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 17 15:14:57.175132 master-0 kubenswrapper[8018]: I0217 15:14:57.172985 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"69b452fc-5e99-4947-a722-e47a602ac144","Type":"ContainerDied","Data":"4ee1ada2125277c0b6cce472a26bd7b393be00724a19ccb2e1067f7f0c7cb926"} Feb 17 15:14:57.175132 master-0 kubenswrapper[8018]: I0217 15:14:57.173189 8018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ee1ada2125277c0b6cce472a26bd7b393be00724a19ccb2e1067f7f0c7cb926" Feb 17 15:14:57.726865 master-0 kubenswrapper[8018]: I0217 15:14:57.726762 8018 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:14:57.733611 master-0 kubenswrapper[8018]: I0217 15:14:57.733515 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:14:58.187608 master-0 kubenswrapper[8018]: I0217 15:14:58.187515 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 17 15:14:58.188632 master-0 kubenswrapper[8018]: E0217 15:14:58.187898 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b452fc-5e99-4947-a722-e47a602ac144" containerName="installer" Feb 17 15:14:58.188632 master-0 kubenswrapper[8018]: I0217 
15:14:58.187919 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b452fc-5e99-4947-a722-e47a602ac144" containerName="installer" Feb 17 15:14:58.188632 master-0 kubenswrapper[8018]: I0217 15:14:58.188120 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b452fc-5e99-4947-a722-e47a602ac144" containerName="installer" Feb 17 15:14:58.189224 master-0 kubenswrapper[8018]: I0217 15:14:58.189166 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 17 15:14:58.192163 master-0 kubenswrapper[8018]: I0217 15:14:58.192073 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-crrn4" Feb 17 15:14:58.192437 master-0 kubenswrapper[8018]: I0217 15:14:58.192379 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 15:14:58.212979 master-0 kubenswrapper[8018]: I0217 15:14:58.208801 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 17 15:14:58.324008 master-0 kubenswrapper[8018]: I0217 15:14:58.323932 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0f668c36-2d45-4b5d-89df-b8ed9bf97640-var-lock\") pod \"installer-3-master-0\" (UID: \"0f668c36-2d45-4b5d-89df-b8ed9bf97640\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 17 15:14:58.324259 master-0 kubenswrapper[8018]: I0217 15:14:58.324084 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f668c36-2d45-4b5d-89df-b8ed9bf97640-kube-api-access\") pod \"installer-3-master-0\" (UID: \"0f668c36-2d45-4b5d-89df-b8ed9bf97640\") " 
pod="openshift-kube-controller-manager/installer-3-master-0" Feb 17 15:14:58.324259 master-0 kubenswrapper[8018]: I0217 15:14:58.324203 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f668c36-2d45-4b5d-89df-b8ed9bf97640-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"0f668c36-2d45-4b5d-89df-b8ed9bf97640\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 17 15:14:58.425008 master-0 kubenswrapper[8018]: I0217 15:14:58.424928 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0f668c36-2d45-4b5d-89df-b8ed9bf97640-var-lock\") pod \"installer-3-master-0\" (UID: \"0f668c36-2d45-4b5d-89df-b8ed9bf97640\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 17 15:14:58.425269 master-0 kubenswrapper[8018]: I0217 15:14:58.425037 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f668c36-2d45-4b5d-89df-b8ed9bf97640-kube-api-access\") pod \"installer-3-master-0\" (UID: \"0f668c36-2d45-4b5d-89df-b8ed9bf97640\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 17 15:14:58.425269 master-0 kubenswrapper[8018]: I0217 15:14:58.425122 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f668c36-2d45-4b5d-89df-b8ed9bf97640-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"0f668c36-2d45-4b5d-89df-b8ed9bf97640\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 17 15:14:58.425410 master-0 kubenswrapper[8018]: I0217 15:14:58.425285 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f668c36-2d45-4b5d-89df-b8ed9bf97640-kubelet-dir\") pod \"installer-3-master-0\" (UID: 
\"0f668c36-2d45-4b5d-89df-b8ed9bf97640\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 17 15:14:58.425410 master-0 kubenswrapper[8018]: I0217 15:14:58.425351 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0f668c36-2d45-4b5d-89df-b8ed9bf97640-var-lock\") pod \"installer-3-master-0\" (UID: \"0f668c36-2d45-4b5d-89df-b8ed9bf97640\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 17 15:14:58.462092 master-0 kubenswrapper[8018]: I0217 15:14:58.461952 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f668c36-2d45-4b5d-89df-b8ed9bf97640-kube-api-access\") pod \"installer-3-master-0\" (UID: \"0f668c36-2d45-4b5d-89df-b8ed9bf97640\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 17 15:14:58.533577 master-0 kubenswrapper[8018]: I0217 15:14:58.533513 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 17 15:14:59.013049 master-0 kubenswrapper[8018]: I0217 15:14:59.012985 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 17 15:14:59.018474 master-0 kubenswrapper[8018]: W0217 15:14:59.018384 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0f668c36_2d45_4b5d_89df_b8ed9bf97640.slice/crio-d438363c001bf717835978e9fb2dcc240d924c535bb18d220d0dd81ba4eceb10 WatchSource:0}: Error finding container d438363c001bf717835978e9fb2dcc240d924c535bb18d220d0dd81ba4eceb10: Status 404 returned error can't find the container with id d438363c001bf717835978e9fb2dcc240d924c535bb18d220d0dd81ba4eceb10 Feb 17 15:14:59.210805 master-0 kubenswrapper[8018]: I0217 15:14:59.210739 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"0f668c36-2d45-4b5d-89df-b8ed9bf97640","Type":"ContainerStarted","Data":"d438363c001bf717835978e9fb2dcc240d924c535bb18d220d0dd81ba4eceb10"} Feb 17 15:15:00.153810 master-0 kubenswrapper[8018]: I0217 15:15:00.153731 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq"] Feb 17 15:15:00.154972 master-0 kubenswrapper[8018]: I0217 15:15:00.154920 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" Feb 17 15:15:00.156755 master-0 kubenswrapper[8018]: I0217 15:15:00.156704 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 15:15:00.158082 master-0 kubenswrapper[8018]: I0217 15:15:00.158042 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-fqc4f" Feb 17 15:15:00.174314 master-0 kubenswrapper[8018]: I0217 15:15:00.174264 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq"] Feb 17 15:15:00.222307 master-0 kubenswrapper[8018]: I0217 15:15:00.222252 8018 generic.go:334] "Generic (PLEG): container finished" podID="8385a176-0e12-47ef-862e-8331e6734b9c" containerID="4adf8d0f12db14b67c44e524b550b78d1fa8f334eecf810d58480ad559d615cc" exitCode=0 Feb 17 15:15:00.222983 master-0 kubenswrapper[8018]: I0217 15:15:00.222374 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq" event={"ID":"8385a176-0e12-47ef-862e-8331e6734b9c","Type":"ContainerDied","Data":"4adf8d0f12db14b67c44e524b550b78d1fa8f334eecf810d58480ad559d615cc"} Feb 17 15:15:00.223200 master-0 kubenswrapper[8018]: I0217 15:15:00.223161 8018 scope.go:117] "RemoveContainer" containerID="4adf8d0f12db14b67c44e524b550b78d1fa8f334eecf810d58480ad559d615cc" Feb 17 15:15:00.225266 master-0 kubenswrapper[8018]: I0217 15:15:00.225213 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"0f668c36-2d45-4b5d-89df-b8ed9bf97640","Type":"ContainerStarted","Data":"519d1203af8804e92975790999e67e332c40c53fef9042a5717966b5713e6e0d"} Feb 17 15:15:00.255295 master-0 kubenswrapper[8018]: I0217 15:15:00.255244 8018 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee5899ff-327d-4944-b3ae-84d82973d0a5-config-volume\") pod \"collect-profiles-29522355-rfrsq\" (UID: \"ee5899ff-327d-4944-b3ae-84d82973d0a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" Feb 17 15:15:00.255740 master-0 kubenswrapper[8018]: I0217 15:15:00.255712 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6nxn\" (UniqueName: \"kubernetes.io/projected/ee5899ff-327d-4944-b3ae-84d82973d0a5-kube-api-access-d6nxn\") pod \"collect-profiles-29522355-rfrsq\" (UID: \"ee5899ff-327d-4944-b3ae-84d82973d0a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" Feb 17 15:15:00.255973 master-0 kubenswrapper[8018]: I0217 15:15:00.255947 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee5899ff-327d-4944-b3ae-84d82973d0a5-secret-volume\") pod \"collect-profiles-29522355-rfrsq\" (UID: \"ee5899ff-327d-4944-b3ae-84d82973d0a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" Feb 17 15:15:00.270480 master-0 kubenswrapper[8018]: I0217 15:15:00.270355 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.270332185 podStartE2EDuration="2.270332185s" podCreationTimestamp="2026-02-17 15:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:15:00.267067522 +0000 UTC m=+733.019410662" watchObservedRunningTime="2026-02-17 15:15:00.270332185 +0000 UTC m=+733.022675245" Feb 17 15:15:00.357887 master-0 kubenswrapper[8018]: I0217 15:15:00.357832 8018 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee5899ff-327d-4944-b3ae-84d82973d0a5-config-volume\") pod \"collect-profiles-29522355-rfrsq\" (UID: \"ee5899ff-327d-4944-b3ae-84d82973d0a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" Feb 17 15:15:00.358066 master-0 kubenswrapper[8018]: I0217 15:15:00.357902 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6nxn\" (UniqueName: \"kubernetes.io/projected/ee5899ff-327d-4944-b3ae-84d82973d0a5-kube-api-access-d6nxn\") pod \"collect-profiles-29522355-rfrsq\" (UID: \"ee5899ff-327d-4944-b3ae-84d82973d0a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" Feb 17 15:15:00.358241 master-0 kubenswrapper[8018]: I0217 15:15:00.358180 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee5899ff-327d-4944-b3ae-84d82973d0a5-secret-volume\") pod \"collect-profiles-29522355-rfrsq\" (UID: \"ee5899ff-327d-4944-b3ae-84d82973d0a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" Feb 17 15:15:00.359151 master-0 kubenswrapper[8018]: I0217 15:15:00.359087 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee5899ff-327d-4944-b3ae-84d82973d0a5-config-volume\") pod \"collect-profiles-29522355-rfrsq\" (UID: \"ee5899ff-327d-4944-b3ae-84d82973d0a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" Feb 17 15:15:00.367675 master-0 kubenswrapper[8018]: I0217 15:15:00.367622 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee5899ff-327d-4944-b3ae-84d82973d0a5-secret-volume\") pod \"collect-profiles-29522355-rfrsq\" (UID: \"ee5899ff-327d-4944-b3ae-84d82973d0a5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" Feb 17 15:15:00.377341 master-0 kubenswrapper[8018]: I0217 15:15:00.377273 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6nxn\" (UniqueName: \"kubernetes.io/projected/ee5899ff-327d-4944-b3ae-84d82973d0a5-kube-api-access-d6nxn\") pod \"collect-profiles-29522355-rfrsq\" (UID: \"ee5899ff-327d-4944-b3ae-84d82973d0a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" Feb 17 15:15:00.476519 master-0 kubenswrapper[8018]: I0217 15:15:00.476324 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" Feb 17 15:15:00.901217 master-0 kubenswrapper[8018]: I0217 15:15:00.901135 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq"] Feb 17 15:15:00.911299 master-0 kubenswrapper[8018]: W0217 15:15:00.911228 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee5899ff_327d_4944_b3ae_84d82973d0a5.slice/crio-c288b739b6f5a5ed27ebb0ee29250c354834beafa88e6c2215d397b878664c43 WatchSource:0}: Error finding container c288b739b6f5a5ed27ebb0ee29250c354834beafa88e6c2215d397b878664c43: Status 404 returned error can't find the container with id c288b739b6f5a5ed27ebb0ee29250c354834beafa88e6c2215d397b878664c43 Feb 17 15:15:01.232483 master-0 kubenswrapper[8018]: I0217 15:15:01.232394 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq" event={"ID":"8385a176-0e12-47ef-862e-8331e6734b9c","Type":"ContainerStarted","Data":"63b575a804e72655ced16bcc941e2b9177cdd09599ac63c753ec63de5fa8b0bf"} Feb 17 15:15:01.233849 master-0 kubenswrapper[8018]: I0217 15:15:01.233794 8018 generic.go:334] "Generic (PLEG): container finished" 
podID="ee5899ff-327d-4944-b3ae-84d82973d0a5" containerID="3c59779e2c3acceff9a6741b9ce7f2f36e0bae77e413da5b192e5056ce1e9f29" exitCode=0 Feb 17 15:15:01.234223 master-0 kubenswrapper[8018]: I0217 15:15:01.234192 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" event={"ID":"ee5899ff-327d-4944-b3ae-84d82973d0a5","Type":"ContainerDied","Data":"3c59779e2c3acceff9a6741b9ce7f2f36e0bae77e413da5b192e5056ce1e9f29"} Feb 17 15:15:01.234312 master-0 kubenswrapper[8018]: I0217 15:15:01.234226 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" event={"ID":"ee5899ff-327d-4944-b3ae-84d82973d0a5","Type":"ContainerStarted","Data":"c288b739b6f5a5ed27ebb0ee29250c354834beafa88e6c2215d397b878664c43"} Feb 17 15:15:02.669216 master-0 kubenswrapper[8018]: I0217 15:15:02.669142 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" Feb 17 15:15:02.702368 master-0 kubenswrapper[8018]: I0217 15:15:02.702289 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee5899ff-327d-4944-b3ae-84d82973d0a5-secret-volume\") pod \"ee5899ff-327d-4944-b3ae-84d82973d0a5\" (UID: \"ee5899ff-327d-4944-b3ae-84d82973d0a5\") " Feb 17 15:15:02.702655 master-0 kubenswrapper[8018]: I0217 15:15:02.702427 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6nxn\" (UniqueName: \"kubernetes.io/projected/ee5899ff-327d-4944-b3ae-84d82973d0a5-kube-api-access-d6nxn\") pod \"ee5899ff-327d-4944-b3ae-84d82973d0a5\" (UID: \"ee5899ff-327d-4944-b3ae-84d82973d0a5\") " Feb 17 15:15:02.702655 master-0 kubenswrapper[8018]: I0217 15:15:02.702572 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/ee5899ff-327d-4944-b3ae-84d82973d0a5-config-volume\") pod \"ee5899ff-327d-4944-b3ae-84d82973d0a5\" (UID: \"ee5899ff-327d-4944-b3ae-84d82973d0a5\") " Feb 17 15:15:02.704558 master-0 kubenswrapper[8018]: I0217 15:15:02.703325 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee5899ff-327d-4944-b3ae-84d82973d0a5-config-volume" (OuterVolumeSpecName: "config-volume") pod "ee5899ff-327d-4944-b3ae-84d82973d0a5" (UID: "ee5899ff-327d-4944-b3ae-84d82973d0a5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:15:02.708642 master-0 kubenswrapper[8018]: I0217 15:15:02.708563 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee5899ff-327d-4944-b3ae-84d82973d0a5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ee5899ff-327d-4944-b3ae-84d82973d0a5" (UID: "ee5899ff-327d-4944-b3ae-84d82973d0a5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:15:02.722594 master-0 kubenswrapper[8018]: I0217 15:15:02.722190 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee5899ff-327d-4944-b3ae-84d82973d0a5-kube-api-access-d6nxn" (OuterVolumeSpecName: "kube-api-access-d6nxn") pod "ee5899ff-327d-4944-b3ae-84d82973d0a5" (UID: "ee5899ff-327d-4944-b3ae-84d82973d0a5"). InnerVolumeSpecName "kube-api-access-d6nxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:15:02.804093 master-0 kubenswrapper[8018]: I0217 15:15:02.804024 8018 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee5899ff-327d-4944-b3ae-84d82973d0a5-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:02.804093 master-0 kubenswrapper[8018]: I0217 15:15:02.804083 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6nxn\" (UniqueName: \"kubernetes.io/projected/ee5899ff-327d-4944-b3ae-84d82973d0a5-kube-api-access-d6nxn\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:02.804093 master-0 kubenswrapper[8018]: I0217 15:15:02.804101 8018 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee5899ff-327d-4944-b3ae-84d82973d0a5-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:03.100540 master-0 kubenswrapper[8018]: E0217 15:15:03.100406 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-6bhf8" podUID="6d56f334-6c7b-4c92-9665-56300d44f9a3" Feb 17 15:15:03.260338 master-0 kubenswrapper[8018]: I0217 15:15:03.260086 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:15:03.260338 master-0 kubenswrapper[8018]: I0217 15:15:03.260148 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" Feb 17 15:15:03.260338 master-0 kubenswrapper[8018]: I0217 15:15:03.260147 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq" event={"ID":"ee5899ff-327d-4944-b3ae-84d82973d0a5","Type":"ContainerDied","Data":"c288b739b6f5a5ed27ebb0ee29250c354834beafa88e6c2215d397b878664c43"} Feb 17 15:15:03.260338 master-0 kubenswrapper[8018]: I0217 15:15:03.260239 8018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c288b739b6f5a5ed27ebb0ee29250c354834beafa88e6c2215d397b878664c43" Feb 17 15:15:06.165928 master-0 kubenswrapper[8018]: I0217 15:15:06.165781 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:15:06.171713 master-0 kubenswrapper[8018]: I0217 15:15:06.171647 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:15:06.264298 master-0 kubenswrapper[8018]: I0217 15:15:06.264230 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-4zhjq" Feb 17 15:15:06.276144 master-0 kubenswrapper[8018]: I0217 15:15:06.276089 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6bhf8"
Feb 17 15:15:06.441512 master-0 kubenswrapper[8018]: I0217 15:15:06.441248 8018 scope.go:117] "RemoveContainer" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade"
Feb 17 15:15:06.442074 master-0 kubenswrapper[8018]: E0217 15:15:06.441953 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c"
Feb 17 15:15:06.830854 master-0 kubenswrapper[8018]: I0217 15:15:06.830660 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6bhf8"]
Feb 17 15:15:06.849243 master-0 kubenswrapper[8018]: W0217 15:15:06.849051 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d56f334_6c7b_4c92_9665_56300d44f9a3.slice/crio-c5029165f3acbba6c500e380aa4ddf091a7ab8015a5fcfab4cef7dd1e1f0cbff WatchSource:0}: Error finding container c5029165f3acbba6c500e380aa4ddf091a7ab8015a5fcfab4cef7dd1e1f0cbff: Status 404 returned error can't find the container with id c5029165f3acbba6c500e380aa4ddf091a7ab8015a5fcfab4cef7dd1e1f0cbff
Feb 17 15:15:06.857026 master-0 kubenswrapper[8018]: I0217 15:15:06.856891 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/2.log"
Feb 17 15:15:07.057564 master-0 kubenswrapper[8018]: I0217 15:15:07.057498 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/3.log"
Feb 17 15:15:07.255220 master-0 kubenswrapper[8018]: I0217 15:15:07.255123 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-864ddd5f56-g8w2f_a2d6e329-7ad8-4fc2-accc-66827f11743d/router/0.log"
Feb 17 15:15:07.296128 master-0 kubenswrapper[8018]: I0217 15:15:07.296006 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6bhf8" event={"ID":"6d56f334-6c7b-4c92-9665-56300d44f9a3","Type":"ContainerStarted","Data":"698aee78dc1a9f1a308c2e9decbe95207cf5d8388f5bb3a8a5063daefc391e92"}
Feb 17 15:15:07.296128 master-0 kubenswrapper[8018]: I0217 15:15:07.296066 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6bhf8" event={"ID":"6d56f334-6c7b-4c92-9665-56300d44f9a3","Type":"ContainerStarted","Data":"c5029165f3acbba6c500e380aa4ddf091a7ab8015a5fcfab4cef7dd1e1f0cbff"}
Feb 17 15:15:07.320093 master-0 kubenswrapper[8018]: I0217 15:15:07.319970 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6bhf8" podStartSLOduration=251.319945522 podStartE2EDuration="4m11.319945522s" podCreationTimestamp="2026-02-17 15:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:15:07.317285096 +0000 UTC m=+740.069628186" watchObservedRunningTime="2026-02-17 15:15:07.319945522 +0000 UTC m=+740.072288602"
Feb 17 15:15:07.463137 master-0 kubenswrapper[8018]: I0217 15:15:07.463080 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-864ddd5f56-g8w2f_a2d6e329-7ad8-4fc2-accc-66827f11743d/router/1.log"
Feb 17 15:15:07.651180 master-0 kubenswrapper[8018]: I0217 15:15:07.651086 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-865765995-c58rq_124ba199-b79a-4e5c-8512-cc0ae50f73c8/fix-audit-permissions/0.log"
Feb 17 15:15:07.859865 master-0 kubenswrapper[8018]: I0217 15:15:07.859786 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-865765995-c58rq_124ba199-b79a-4e5c-8512-cc0ae50f73c8/oauth-apiserver/0.log"
Feb 17 15:15:08.051057 master-0 kubenswrapper[8018]: I0217 15:15:08.050957 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/2.log"
Feb 17 15:15:08.259124 master-0 kubenswrapper[8018]: I0217 15:15:08.257713 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/3.log"
Feb 17 15:15:08.451251 master-0 kubenswrapper[8018]: I0217 15:15:08.451194 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/setup/0.log"
Feb 17 15:15:08.651043 master-0 kubenswrapper[8018]: I0217 15:15:08.650972 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-ensure-env-vars/0.log"
Feb 17 15:15:08.860071 master-0 kubenswrapper[8018]: I0217 15:15:08.859999 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-resources-copy/0.log"
Feb 17 15:15:09.048986 master-0 kubenswrapper[8018]: I0217 15:15:09.048870 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcdctl/0.log"
Feb 17 15:15:09.257614 master-0 kubenswrapper[8018]: I0217 15:15:09.257531 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd/0.log"
Feb 17 15:15:09.456067 master-0 kubenswrapper[8018]: I0217 15:15:09.455988 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-metrics/0.log"
Feb 17 15:15:09.649861 master-0 kubenswrapper[8018]: I0217 15:15:09.649774 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-readyz/0.log"
Feb 17 15:15:09.851306 master-0 kubenswrapper[8018]: I0217 15:15:09.851088 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-rev/0.log"
Feb 17 15:15:10.060043 master-0 kubenswrapper[8018]: I0217 15:15:10.059979 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_5de71cc1-08c3-4295-ac86-745c9d4fbb46/installer/0.log"
Feb 17 15:15:10.251022 master-0 kubenswrapper[8018]: I0217 15:15:10.250913 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/2.log"
Feb 17 15:15:10.458713 master-0 kubenswrapper[8018]: I0217 15:15:10.458588 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/3.log"
Feb 17 15:15:10.650712 master-0 kubenswrapper[8018]: I0217 15:15:10.650627 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5d1e91e5a1fed5cf7076a92d2830d36f/setup/0.log"
Feb 17 15:15:10.861103 master-0 kubenswrapper[8018]: I0217 15:15:10.861033 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5d1e91e5a1fed5cf7076a92d2830d36f/kube-apiserver/0.log"
Feb 17 15:15:11.051029 master-0 kubenswrapper[8018]: I0217 15:15:11.050832 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5d1e91e5a1fed5cf7076a92d2830d36f/kube-apiserver-insecure-readyz/0.log"
Feb 17 15:15:11.254425 master-0 kubenswrapper[8018]: I0217 15:15:11.254359 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_580b240a-a806-454d-ab19-8f193a8d9ca2/installer/0.log"
Feb 17 15:15:11.454869 master-0 kubenswrapper[8018]: I0217 15:15:11.454782 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_d3daf534-9a77-49c6-964f-d402c5d5a2ac/installer/0.log"
Feb 17 15:15:11.651997 master-0 kubenswrapper[8018]: I0217 15:15:11.651944 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_03da22e3-956d-4c8a-bfd6-c1778e5d627c/installer/0.log"
Feb 17 15:15:11.855115 master-0 kubenswrapper[8018]: I0217 15:15:11.854969 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_69b452fc-5e99-4947-a722-e47a602ac144/installer/0.log"
Feb 17 15:15:12.050388 master-0 kubenswrapper[8018]: I0217 15:15:12.050276 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_952766c3a88fd12345a552f1277199f9/wait-for-host-port/0.log"
Feb 17 15:15:12.259215 master-0 kubenswrapper[8018]: I0217 15:15:12.259049 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_952766c3a88fd12345a552f1277199f9/kube-scheduler/0.log"
Feb 17 15:15:12.456304 master-0 kubenswrapper[8018]: I0217 15:15:12.456129 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_952766c3a88fd12345a552f1277199f9/kube-scheduler-cert-syncer/0.log"
Feb 17 15:15:12.652186 master-0 kubenswrapper[8018]: I0217 15:15:12.652109 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_952766c3a88fd12345a552f1277199f9/kube-scheduler-recovery-controller/0.log"
Feb 17 15:15:13.079316 master-0 kubenswrapper[8018]: I0217 15:15:13.079167 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-wcpf8_2b167b7b-2280-4c82-ac78-71c57aebe503/kube-scheduler-operator-container/1.log"
Feb 17 15:15:13.095557 master-0 kubenswrapper[8018]: I0217 15:15:13.095486 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-wcpf8_2b167b7b-2280-4c82-ac78-71c57aebe503/kube-scheduler-operator-container/2.log"
Feb 17 15:15:13.251255 master-0 kubenswrapper[8018]: I0217 15:15:13.251177 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/1.log"
Feb 17 15:15:13.463926 master-0 kubenswrapper[8018]: I0217 15:15:13.463848 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/2.log"
Feb 17 15:15:13.651530 master-0 kubenswrapper[8018]: I0217 15:15:13.651452 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6bd884947c-tdlbn_1d481a79-f565-4c7f-84cc-207fc3117c23/fix-audit-permissions/0.log"
Feb 17 15:15:13.861620 master-0 kubenswrapper[8018]: I0217 15:15:13.861563 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6bd884947c-tdlbn_1d481a79-f565-4c7f-84cc-207fc3117c23/openshift-apiserver/0.log"
Feb 17 15:15:14.054669 master-0 kubenswrapper[8018]: I0217 15:15:14.054578 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6bd884947c-tdlbn_1d481a79-f565-4c7f-84cc-207fc3117c23/openshift-apiserver-check-endpoints/0.log"
Feb 17 15:15:14.184302 master-0 kubenswrapper[8018]: I0217 15:15:14.184140 8018 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Feb 17 15:15:14.184595 master-0 kubenswrapper[8018]: I0217 15:15:14.184452 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-3-master-0" podUID="0f668c36-2d45-4b5d-89df-b8ed9bf97640" containerName="installer" containerID="cri-o://519d1203af8804e92975790999e67e332c40c53fef9042a5717966b5713e6e0d" gracePeriod=30
Feb 17 15:15:14.259381 master-0 kubenswrapper[8018]: I0217 15:15:14.259318 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/2.log"
Feb 17 15:15:14.456677 master-0 kubenswrapper[8018]: I0217 15:15:14.456530 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/3.log"
Feb 17 15:15:14.659927 master-0 kubenswrapper[8018]: I0217 15:15:14.659780 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-588944557d-kjh2v_08e27254-e906-484a-b346-036f898be3ae/catalog-operator/0.log"
Feb 17 15:15:14.850366 master-0 kubenswrapper[8018]: I0217 15:15:14.850210 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29522340-8cp6h_2a162205-f111-49b4-9f46-0b40b6184336/collect-profiles/0.log"
Feb 17 15:15:15.049993 master-0 kubenswrapper[8018]: I0217 15:15:15.049900 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29522355-rfrsq_ee5899ff-327d-4944-b3ae-84d82973d0a5/collect-profiles/0.log"
Feb 17 15:15:15.258051 master-0 kubenswrapper[8018]: I0217 15:15:15.257961 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b56bd877c-tk8xm_257db04b-7203-4a1d-b3d4-bd4db258a3cc/olm-operator/0.log"
Feb 17 15:15:15.656896 master-0 kubenswrapper[8018]: I0217 15:15:15.656814 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-t7n5b_33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/package-server-manager/0.log"
Feb 17 15:15:15.849984 master-0 kubenswrapper[8018]: I0217 15:15:15.849907 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-t7n5b_33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/kube-rbac-proxy/0.log"
Feb 17 15:15:16.056119 master-0 kubenswrapper[8018]: I0217 15:15:16.055945 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-t7n5b_33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/package-server-manager/1.log"
Feb 17 15:15:16.259126 master-0 kubenswrapper[8018]: I0217 15:15:16.259055 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-67d4dbd88b-szr25_b58e9d93-7683-440d-a603-9543e5455490/packageserver/0.log"
Feb 17 15:15:16.586638 master-0 kubenswrapper[8018]: I0217 15:15:16.586546 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Feb 17 15:15:16.587109 master-0 kubenswrapper[8018]: E0217 15:15:16.587054 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5899ff-327d-4944-b3ae-84d82973d0a5" containerName="collect-profiles"
Feb 17 15:15:16.587109 master-0 kubenswrapper[8018]: I0217 15:15:16.587093 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5899ff-327d-4944-b3ae-84d82973d0a5" containerName="collect-profiles"
Feb 17 15:15:16.587512 master-0 kubenswrapper[8018]: I0217 15:15:16.587447 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee5899ff-327d-4944-b3ae-84d82973d0a5" containerName="collect-profiles"
Feb 17 15:15:16.588313 master-0 kubenswrapper[8018]: I0217 15:15:16.588276 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:16.606867 master-0 kubenswrapper[8018]: I0217 15:15:16.606774 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Feb 17 15:15:16.651676 master-0 kubenswrapper[8018]: I0217 15:15:16.651603 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:16.651676 master-0 kubenswrapper[8018]: I0217 15:15:16.651673 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3b6a099-f52a-428a-af09-d1842ce66891-kube-api-access\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:16.652029 master-0 kubenswrapper[8018]: I0217 15:15:16.651813 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-var-lock\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:16.753716 master-0 kubenswrapper[8018]: I0217 15:15:16.753582 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-var-lock\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:16.753716 master-0 kubenswrapper[8018]: I0217 15:15:16.753690 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-var-lock\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:16.753716 master-0 kubenswrapper[8018]: I0217 15:15:16.753707 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:16.754136 master-0 kubenswrapper[8018]: I0217 15:15:16.753792 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3b6a099-f52a-428a-af09-d1842ce66891-kube-api-access\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:16.754136 master-0 kubenswrapper[8018]: I0217 15:15:16.753892 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:16.797802 master-0 kubenswrapper[8018]: I0217 15:15:16.797722 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3b6a099-f52a-428a-af09-d1842ce66891-kube-api-access\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:16.922983 master-0 kubenswrapper[8018]: I0217 15:15:16.922922 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:17.365368 master-0 kubenswrapper[8018]: I0217 15:15:17.365327 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Feb 17 15:15:17.369862 master-0 kubenswrapper[8018]: W0217 15:15:17.369804 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda3b6a099_f52a_428a_af09_d1842ce66891.slice/crio-b65552bcab35fe164881e8ac001f1baa5fa85be7a3b6063a3edbe790f67bf18a WatchSource:0}: Error finding container b65552bcab35fe164881e8ac001f1baa5fa85be7a3b6063a3edbe790f67bf18a: Status 404 returned error can't find the container with id b65552bcab35fe164881e8ac001f1baa5fa85be7a3b6063a3edbe790f67bf18a
Feb 17 15:15:17.388968 master-0 kubenswrapper[8018]: I0217 15:15:17.388923 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"a3b6a099-f52a-428a-af09-d1842ce66891","Type":"ContainerStarted","Data":"b65552bcab35fe164881e8ac001f1baa5fa85be7a3b6063a3edbe790f67bf18a"}
Feb 17 15:15:17.505946 master-0 kubenswrapper[8018]: I0217 15:15:17.505852 8018 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"]
Feb 17 15:15:17.507356 master-0 kubenswrapper[8018]: I0217 15:15:17.507306 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:17.509847 master-0 kubenswrapper[8018]: I0217 15:15:17.509809 8018 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-tmw8w"
Feb 17 15:15:17.512017 master-0 kubenswrapper[8018]: I0217 15:15:17.511972 8018 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Feb 17 15:15:17.517039 master-0 kubenswrapper[8018]: I0217 15:15:17.516970 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"]
Feb 17 15:15:17.569678 master-0 kubenswrapper[8018]: I0217 15:15:17.569597 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:17.569948 master-0 kubenswrapper[8018]: I0217 15:15:17.569908 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-var-lock\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:17.570277 master-0 kubenswrapper[8018]: I0217 15:15:17.570178 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70e43034-56d0-4fb2-8886-deb00b625686-kube-api-access\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:17.672424 master-0 kubenswrapper[8018]: I0217 15:15:17.672366 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-var-lock\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:17.672981 master-0 kubenswrapper[8018]: I0217 15:15:17.672480 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70e43034-56d0-4fb2-8886-deb00b625686-kube-api-access\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:17.672981 master-0 kubenswrapper[8018]: I0217 15:15:17.672528 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:17.673446 master-0 kubenswrapper[8018]: I0217 15:15:17.673402 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:17.673539 master-0 kubenswrapper[8018]: I0217 15:15:17.673508 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-var-lock\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:17.698332 master-0 kubenswrapper[8018]: I0217 15:15:17.698281 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70e43034-56d0-4fb2-8886-deb00b625686-kube-api-access\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:17.887723 master-0 kubenswrapper[8018]: I0217 15:15:17.887636 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:18.368982 master-0 kubenswrapper[8018]: I0217 15:15:18.367803 8018 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"]
Feb 17 15:15:18.372770 master-0 kubenswrapper[8018]: W0217 15:15:18.371632 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod70e43034_56d0_4fb2_8886_deb00b625686.slice/crio-5922fb8c007ad599e40a5354516760730a0cba79810d4b9259cefea52493ddb5 WatchSource:0}: Error finding container 5922fb8c007ad599e40a5354516760730a0cba79810d4b9259cefea52493ddb5: Status 404 returned error can't find the container with id 5922fb8c007ad599e40a5354516760730a0cba79810d4b9259cefea52493ddb5
Feb 17 15:15:18.405546 master-0 kubenswrapper[8018]: I0217 15:15:18.405414 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"70e43034-56d0-4fb2-8886-deb00b625686","Type":"ContainerStarted","Data":"5922fb8c007ad599e40a5354516760730a0cba79810d4b9259cefea52493ddb5"}
Feb 17 15:15:18.408828 master-0 kubenswrapper[8018]: I0217 15:15:18.408727 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"a3b6a099-f52a-428a-af09-d1842ce66891","Type":"ContainerStarted","Data":"ceb525f1242f942ba65ca3fefc2acf99f57e68a8145b1bffbd29b61c0bf59b29"}
Feb 17 15:15:18.433581 master-0 kubenswrapper[8018]: I0217 15:15:18.433432 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.4334027320000002 podStartE2EDuration="2.433402732s" podCreationTimestamp="2026-02-17 15:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:15:18.432296724 +0000 UTC m=+751.184639824" watchObservedRunningTime="2026-02-17 15:15:18.433402732 +0000 UTC m=+751.185745812"
Feb 17 15:15:18.441033 master-0 kubenswrapper[8018]: I0217 15:15:18.440959 8018 scope.go:117] "RemoveContainer" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade"
Feb 17 15:15:18.441407 master-0 kubenswrapper[8018]: E0217 15:15:18.441350 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c"
Feb 17 15:15:19.420686 master-0 kubenswrapper[8018]: I0217 15:15:19.420556 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"70e43034-56d0-4fb2-8886-deb00b625686","Type":"ContainerStarted","Data":"762936faf720fbf8fc66c224dfa462878affad1249ed16705950254bc5043c3c"}
Feb 17 15:15:19.446995 master-0 kubenswrapper[8018]: I0217 15:15:19.446817 8018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.446784815 podStartE2EDuration="2.446784815s" podCreationTimestamp="2026-02-17 15:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:15:19.444441377 +0000 UTC m=+752.196784467" watchObservedRunningTime="2026-02-17 15:15:19.446784815 +0000 UTC m=+752.199127895"
Feb 17 15:15:30.524041 master-0 kubenswrapper[8018]: I0217 15:15:30.523859 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_0f668c36-2d45-4b5d-89df-b8ed9bf97640/installer/0.log"
Feb 17 15:15:30.524041 master-0 kubenswrapper[8018]: I0217 15:15:30.523949 8018 generic.go:334] "Generic (PLEG): container finished" podID="0f668c36-2d45-4b5d-89df-b8ed9bf97640" containerID="519d1203af8804e92975790999e67e332c40c53fef9042a5717966b5713e6e0d" exitCode=1
Feb 17 15:15:30.524041 master-0 kubenswrapper[8018]: I0217 15:15:30.523995 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"0f668c36-2d45-4b5d-89df-b8ed9bf97640","Type":"ContainerDied","Data":"519d1203af8804e92975790999e67e332c40c53fef9042a5717966b5713e6e0d"}
Feb 17 15:15:30.825014 master-0 kubenswrapper[8018]: I0217 15:15:30.824941 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_0f668c36-2d45-4b5d-89df-b8ed9bf97640/installer/0.log"
Feb 17 15:15:30.825291 master-0 kubenswrapper[8018]: I0217 15:15:30.825049 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 17 15:15:30.903570 master-0 kubenswrapper[8018]: I0217 15:15:30.903329 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f668c36-2d45-4b5d-89df-b8ed9bf97640-kube-api-access\") pod \"0f668c36-2d45-4b5d-89df-b8ed9bf97640\" (UID: \"0f668c36-2d45-4b5d-89df-b8ed9bf97640\") "
Feb 17 15:15:30.903570 master-0 kubenswrapper[8018]: I0217 15:15:30.903474 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f668c36-2d45-4b5d-89df-b8ed9bf97640-kubelet-dir\") pod \"0f668c36-2d45-4b5d-89df-b8ed9bf97640\" (UID: \"0f668c36-2d45-4b5d-89df-b8ed9bf97640\") "
Feb 17 15:15:30.903570 master-0 kubenswrapper[8018]: I0217 15:15:30.903567 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0f668c36-2d45-4b5d-89df-b8ed9bf97640-var-lock\") pod \"0f668c36-2d45-4b5d-89df-b8ed9bf97640\" (UID: \"0f668c36-2d45-4b5d-89df-b8ed9bf97640\") "
Feb 17 15:15:30.904722 master-0 kubenswrapper[8018]: I0217 15:15:30.903964 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f668c36-2d45-4b5d-89df-b8ed9bf97640-var-lock" (OuterVolumeSpecName: "var-lock") pod "0f668c36-2d45-4b5d-89df-b8ed9bf97640" (UID: "0f668c36-2d45-4b5d-89df-b8ed9bf97640"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:15:30.904722 master-0 kubenswrapper[8018]: I0217 15:15:30.904012 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f668c36-2d45-4b5d-89df-b8ed9bf97640-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0f668c36-2d45-4b5d-89df-b8ed9bf97640" (UID: "0f668c36-2d45-4b5d-89df-b8ed9bf97640"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:15:30.907649 master-0 kubenswrapper[8018]: I0217 15:15:30.907577 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f668c36-2d45-4b5d-89df-b8ed9bf97640-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0f668c36-2d45-4b5d-89df-b8ed9bf97640" (UID: "0f668c36-2d45-4b5d-89df-b8ed9bf97640"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:15:31.006028 master-0 kubenswrapper[8018]: I0217 15:15:31.005942 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f668c36-2d45-4b5d-89df-b8ed9bf97640-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 17 15:15:31.006340 master-0 kubenswrapper[8018]: I0217 15:15:31.006063 8018 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f668c36-2d45-4b5d-89df-b8ed9bf97640-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:15:31.006340 master-0 kubenswrapper[8018]: I0217 15:15:31.006131 8018 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0f668c36-2d45-4b5d-89df-b8ed9bf97640-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 17 15:15:31.212587 master-0 kubenswrapper[8018]: I0217 15:15:31.212433 8018 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 17 15:15:31.213319 master-0 kubenswrapper[8018]: E0217 15:15:31.213022 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f668c36-2d45-4b5d-89df-b8ed9bf97640" containerName="installer"
Feb 17 15:15:31.213319 master-0 kubenswrapper[8018]: I0217 15:15:31.213073 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f668c36-2d45-4b5d-89df-b8ed9bf97640" containerName="installer"
Feb 17 15:15:31.214001 master-0 kubenswrapper[8018]: I0217 15:15:31.213951 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f668c36-2d45-4b5d-89df-b8ed9bf97640" containerName="installer"
Feb 17 15:15:31.214682 master-0 kubenswrapper[8018]: I0217 15:15:31.214641 8018 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 17 15:15:31.214973 master-0 kubenswrapper[8018]: I0217 15:15:31.214883 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:15:31.215201 master-0 kubenswrapper[8018]: I0217 15:15:31.215099 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver" containerID="cri-o://2609f5414599cc846c5bc59d12f88634dafa03f2f1a0b4805e5779131227e7b6" gracePeriod=15
Feb 17 15:15:31.215318 master-0 kubenswrapper[8018]: I0217 15:15:31.215211 8018 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e54ca6ceeabac12699eb8a3fc41f19416c7ec8d207ac963a337daa3c35a8bc0b" gracePeriod=15
Feb 17 15:15:31.219598 master-0 kubenswrapper[8018]: I0217 15:15:31.216329 8018 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 17 15:15:31.219598 master-0 kubenswrapper[8018]: E0217 15:15:31.216646 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver"
Feb 17 15:15:31.219598 master-0 kubenswrapper[8018]: I0217 15:15:31.216666 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver"
Feb 17 15:15:31.219598 master-0 kubenswrapper[8018]: E0217 15:15:31.216694 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup"
Feb 17 15:15:31.219598 master-0 kubenswrapper[8018]: I0217 15:15:31.216709 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup"
Feb 17 15:15:31.219598 master-0 kubenswrapper[8018]: E0217 15:15:31.216746 8018 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz"
Feb 17 15:15:31.219598 master-0 kubenswrapper[8018]: I0217 15:15:31.216760 8018 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz"
Feb 17 15:15:31.219598 master-0 kubenswrapper[8018]: I0217 15:15:31.216972 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz"
Feb 17 15:15:31.219598 master-0 kubenswrapper[8018]: I0217 15:15:31.216996 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup"
Feb 17 15:15:31.219598 master-0 kubenswrapper[8018]: I0217 15:15:31.217020 8018 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver"
Feb 17 15:15:31.221213 master-0 kubenswrapper[8018]: I0217 15:15:31.220932 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:15:31.296842 master-0 kubenswrapper[8018]: E0217 15:15:31.296773 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:15:31.300572 master-0 kubenswrapper[8018]: E0217 15:15:31.300521 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:15:31.311624 master-0 kubenswrapper[8018]: I0217 15:15:31.311532 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:15:31.311736 master-0 kubenswrapper[8018]: I0217 15:15:31.311700 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:15:31.311941 master-0 kubenswrapper[8018]: I0217 15:15:31.311823 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") pod \"kube-apiserver-master-0\" (UID: 
\"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:31.312139 master-0 kubenswrapper[8018]: I0217 15:15:31.312071 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:31.312276 master-0 kubenswrapper[8018]: I0217 15:15:31.312223 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:31.312520 master-0 kubenswrapper[8018]: I0217 15:15:31.312442 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:31.312611 master-0 kubenswrapper[8018]: I0217 15:15:31.312582 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:31.312689 master-0 kubenswrapper[8018]: I0217 15:15:31.312665 8018 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:31.413564 master-0 kubenswrapper[8018]: I0217 15:15:31.413505 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:31.413564 master-0 kubenswrapper[8018]: I0217 15:15:31.413569 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:31.413863 master-0 kubenswrapper[8018]: I0217 15:15:31.413592 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:31.413863 master-0 kubenswrapper[8018]: I0217 15:15:31.413629 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:31.413863 master-0 kubenswrapper[8018]: I0217 15:15:31.413740 8018 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:31.414187 master-0 kubenswrapper[8018]: I0217 15:15:31.413909 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:31.414187 master-0 kubenswrapper[8018]: I0217 15:15:31.414048 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:31.414187 master-0 kubenswrapper[8018]: I0217 15:15:31.414066 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:31.414187 master-0 kubenswrapper[8018]: I0217 15:15:31.414136 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:31.414611 master-0 kubenswrapper[8018]: I0217 15:15:31.414202 8018 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:31.414611 master-0 kubenswrapper[8018]: I0217 15:15:31.414328 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:31.414611 master-0 kubenswrapper[8018]: I0217 15:15:31.414340 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:31.414611 master-0 kubenswrapper[8018]: I0217 15:15:31.414384 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:31.414611 master-0 kubenswrapper[8018]: I0217 15:15:31.414413 8018 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 
15:15:31.414611 master-0 kubenswrapper[8018]: I0217 15:15:31.414438 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:31.414611 master-0 kubenswrapper[8018]: I0217 15:15:31.414517 8018 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:31.535608 master-0 kubenswrapper[8018]: I0217 15:15:31.535520 8018 generic.go:334] "Generic (PLEG): container finished" podID="d3daf534-9a77-49c6-964f-d402c5d5a2ac" containerID="30149bc76c51652722af3b42f468490ae630728bcc0813cbee77856ab297e313" exitCode=0 Feb 17 15:15:31.536326 master-0 kubenswrapper[8018]: I0217 15:15:31.535638 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"d3daf534-9a77-49c6-964f-d402c5d5a2ac","Type":"ContainerDied","Data":"30149bc76c51652722af3b42f468490ae630728bcc0813cbee77856ab297e313"} Feb 17 15:15:31.537229 master-0 kubenswrapper[8018]: I0217 15:15:31.537157 8018 status_manager.go:851] "Failed to get status for pod" podUID="d3daf534-9a77-49c6-964f-d402c5d5a2ac" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:15:31.538733 master-0 kubenswrapper[8018]: I0217 15:15:31.538695 8018 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_0f668c36-2d45-4b5d-89df-b8ed9bf97640/installer/0.log" Feb 17 15:15:31.538902 master-0 kubenswrapper[8018]: I0217 15:15:31.538859 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 17 15:15:31.539053 master-0 kubenswrapper[8018]: I0217 15:15:31.538863 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"0f668c36-2d45-4b5d-89df-b8ed9bf97640","Type":"ContainerDied","Data":"d438363c001bf717835978e9fb2dcc240d924c535bb18d220d0dd81ba4eceb10"} Feb 17 15:15:31.539161 master-0 kubenswrapper[8018]: I0217 15:15:31.539117 8018 scope.go:117] "RemoveContainer" containerID="519d1203af8804e92975790999e67e332c40c53fef9042a5717966b5713e6e0d" Feb 17 15:15:31.540256 master-0 kubenswrapper[8018]: I0217 15:15:31.540179 8018 status_manager.go:851] "Failed to get status for pod" podUID="d3daf534-9a77-49c6-964f-d402c5d5a2ac" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:15:31.541065 master-0 kubenswrapper[8018]: I0217 15:15:31.540993 8018 status_manager.go:851] "Failed to get status for pod" podUID="0f668c36-2d45-4b5d-89df-b8ed9bf97640" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:15:31.544638 master-0 kubenswrapper[8018]: I0217 15:15:31.544563 8018 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="e54ca6ceeabac12699eb8a3fc41f19416c7ec8d207ac963a337daa3c35a8bc0b" exitCode=0 Feb 17 
15:15:31.545604 master-0 kubenswrapper[8018]: I0217 15:15:31.545519 8018 status_manager.go:851] "Failed to get status for pod" podUID="0f668c36-2d45-4b5d-89df-b8ed9bf97640" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:15:31.546617 master-0 kubenswrapper[8018]: I0217 15:15:31.546549 8018 status_manager.go:851] "Failed to get status for pod" podUID="d3daf534-9a77-49c6-964f-d402c5d5a2ac" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:15:31.597839 master-0 kubenswrapper[8018]: I0217 15:15:31.597700 8018 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:31.601493 master-0 kubenswrapper[8018]: I0217 15:15:31.601393 8018 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:31.673603 master-0 kubenswrapper[8018]: E0217 15:15:31.673373 8018 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1895118e624868c2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:ebf941eaba3a97825b1c8002f4b27a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:15:31.67209901 +0000 UTC m=+764.424442070,LastTimestamp:2026-02-17 15:15:31.67209901 +0000 UTC m=+764.424442070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:15:31.676790 master-0 kubenswrapper[8018]: W0217 15:15:31.676728 8018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod619e637b8575311b72d43b7b782d610a.slice/crio-037eeb0eb6e9db7c0c16d981af4599e4cf0a6c4e36b47a40589e4b6308c2db61 WatchSource:0}: Error finding container 037eeb0eb6e9db7c0c16d981af4599e4cf0a6c4e36b47a40589e4b6308c2db61: Status 404 returned error can't find the container with id 037eeb0eb6e9db7c0c16d981af4599e4cf0a6c4e36b47a40589e4b6308c2db61 Feb 17 15:15:31.956404 master-0 kubenswrapper[8018]: E0217 15:15:31.956292 8018 event.go:368] "Unable to write event (may retry after 
sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1895118e624868c2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:ebf941eaba3a97825b1c8002f4b27a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:15:31.67209901 +0000 UTC m=+764.424442070,LastTimestamp:2026-02-17 15:15:31.67209901 +0000 UTC m=+764.424442070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:15:32.441730 master-0 kubenswrapper[8018]: I0217 15:15:32.441669 8018 scope.go:117] "RemoveContainer" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade" Feb 17 15:15:32.442084 master-0 kubenswrapper[8018]: E0217 15:15:32.442040 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" podUID="14723cb7-2d96-42b7-b559-70386c4c841c" Feb 17 15:15:32.555693 master-0 kubenswrapper[8018]: I0217 15:15:32.555429 8018 generic.go:334] "Generic (PLEG): container finished" 
podID="619e637b8575311b72d43b7b782d610a" containerID="2128d8d38323586ed6d9716f5c0be6569fe807cb8c9948bb819a8f728039d87d" exitCode=0 Feb 17 15:15:32.555693 master-0 kubenswrapper[8018]: I0217 15:15:32.555572 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerDied","Data":"2128d8d38323586ed6d9716f5c0be6569fe807cb8c9948bb819a8f728039d87d"} Feb 17 15:15:32.555693 master-0 kubenswrapper[8018]: I0217 15:15:32.555650 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"037eeb0eb6e9db7c0c16d981af4599e4cf0a6c4e36b47a40589e4b6308c2db61"} Feb 17 15:15:32.556894 master-0 kubenswrapper[8018]: E0217 15:15:32.556845 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:32.557228 master-0 kubenswrapper[8018]: I0217 15:15:32.557149 8018 status_manager.go:851] "Failed to get status for pod" podUID="d3daf534-9a77-49c6-964f-d402c5d5a2ac" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:15:32.558689 master-0 kubenswrapper[8018]: I0217 15:15:32.558085 8018 status_manager.go:851] "Failed to get status for pod" podUID="0f668c36-2d45-4b5d-89df-b8ed9bf97640" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 
15:15:32.558689 master-0 kubenswrapper[8018]: I0217 15:15:32.558631 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/3.log" Feb 17 15:15:32.559680 master-0 kubenswrapper[8018]: I0217 15:15:32.559636 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/2.log" Feb 17 15:15:32.560298 master-0 kubenswrapper[8018]: I0217 15:15:32.560259 8018 generic.go:334] "Generic (PLEG): container finished" podID="22a30079-d7fc-49cf-882e-1c5022cb5bf6" containerID="e6e0c56b68d88e13c98f68fd19514701fbb95e0c18c904b865481a0f5ad00f23" exitCode=1 Feb 17 15:15:32.560421 master-0 kubenswrapper[8018]: I0217 15:15:32.560338 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" event={"ID":"22a30079-d7fc-49cf-882e-1c5022cb5bf6","Type":"ContainerDied","Data":"e6e0c56b68d88e13c98f68fd19514701fbb95e0c18c904b865481a0f5ad00f23"} Feb 17 15:15:32.560421 master-0 kubenswrapper[8018]: I0217 15:15:32.560406 8018 scope.go:117] "RemoveContainer" containerID="bbb9d291b17c271b0bfc02764b8ad63a5a4d80141787014fe49630e60a725084" Feb 17 15:15:32.561419 master-0 kubenswrapper[8018]: I0217 15:15:32.561289 8018 scope.go:117] "RemoveContainer" containerID="e6e0c56b68d88e13c98f68fd19514701fbb95e0c18c904b865481a0f5ad00f23" Feb 17 15:15:32.561803 master-0 kubenswrapper[8018]: I0217 15:15:32.561633 8018 status_manager.go:851] "Failed to get status for pod" podUID="0f668c36-2d45-4b5d-89df-b8ed9bf97640" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:15:32.561885 master-0 kubenswrapper[8018]: E0217 
15:15:32.561824 8018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nclxg_openshift-ingress-operator(22a30079-d7fc-49cf-882e-1c5022cb5bf6)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" podUID="22a30079-d7fc-49cf-882e-1c5022cb5bf6" Feb 17 15:15:32.562679 master-0 kubenswrapper[8018]: I0217 15:15:32.562594 8018 status_manager.go:851] "Failed to get status for pod" podUID="22a30079-d7fc-49cf-882e-1c5022cb5bf6" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-c588d8cb4-nclxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:15:32.563105 master-0 kubenswrapper[8018]: I0217 15:15:32.562985 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebf941eaba3a97825b1c8002f4b27a20","Type":"ContainerStarted","Data":"4b556a21109d55e0fc1179b5cad47796ec1a964c7618f1e0977b12773c406661"} Feb 17 15:15:32.563105 master-0 kubenswrapper[8018]: I0217 15:15:32.563030 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebf941eaba3a97825b1c8002f4b27a20","Type":"ContainerStarted","Data":"1c9e969e18b1411cff6ba15e9601c6a1a570693b9fa41b729154f36c3d4cfc86"} Feb 17 15:15:32.563696 master-0 kubenswrapper[8018]: I0217 15:15:32.563620 8018 status_manager.go:851] "Failed to get status for pod" podUID="d3daf534-9a77-49c6-964f-d402c5d5a2ac" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 
15:15:32.564347 master-0 kubenswrapper[8018]: E0217 15:15:32.564271 8018 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:32.564649 master-0 kubenswrapper[8018]: I0217 15:15:32.564594 8018 status_manager.go:851] "Failed to get status for pod" podUID="d3daf534-9a77-49c6-964f-d402c5d5a2ac" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:15:32.565505 master-0 kubenswrapper[8018]: I0217 15:15:32.565409 8018 status_manager.go:851] "Failed to get status for pod" podUID="0f668c36-2d45-4b5d-89df-b8ed9bf97640" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:15:32.566353 master-0 kubenswrapper[8018]: I0217 15:15:32.566273 8018 status_manager.go:851] "Failed to get status for pod" podUID="22a30079-d7fc-49cf-882e-1c5022cb5bf6" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-c588d8cb4-nclxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:15:32.988817 master-0 kubenswrapper[8018]: I0217 15:15:32.988750 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:15:32.989801 master-0 kubenswrapper[8018]: I0217 15:15:32.989749 8018 status_manager.go:851] "Failed to get status for pod" podUID="d3daf534-9a77-49c6-964f-d402c5d5a2ac" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:15:32.990243 master-0 kubenswrapper[8018]: I0217 15:15:32.990197 8018 status_manager.go:851] "Failed to get status for pod" podUID="0f668c36-2d45-4b5d-89df-b8ed9bf97640" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:15:32.990942 master-0 kubenswrapper[8018]: I0217 15:15:32.990885 8018 status_manager.go:851] "Failed to get status for pod" podUID="22a30079-d7fc-49cf-882e-1c5022cb5bf6" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-c588d8cb4-nclxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:15:33.043646 master-0 kubenswrapper[8018]: I0217 15:15:33.041585 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kubelet-dir\") pod \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " Feb 17 15:15:33.043646 master-0 kubenswrapper[8018]: I0217 15:15:33.041733 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod 
\"d3daf534-9a77-49c6-964f-d402c5d5a2ac\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " Feb 17 15:15:33.043646 master-0 kubenswrapper[8018]: I0217 15:15:33.041730 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d3daf534-9a77-49c6-964f-d402c5d5a2ac" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:15:33.043646 master-0 kubenswrapper[8018]: I0217 15:15:33.041784 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-var-lock\") pod \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " Feb 17 15:15:33.043646 master-0 kubenswrapper[8018]: I0217 15:15:33.041859 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-var-lock" (OuterVolumeSpecName: "var-lock") pod "d3daf534-9a77-49c6-964f-d402c5d5a2ac" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:15:33.043646 master-0 kubenswrapper[8018]: I0217 15:15:33.042304 8018 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:33.043646 master-0 kubenswrapper[8018]: I0217 15:15:33.042332 8018 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:33.044903 master-0 kubenswrapper[8018]: I0217 15:15:33.044854 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d3daf534-9a77-49c6-964f-d402c5d5a2ac" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:15:33.143593 master-0 kubenswrapper[8018]: I0217 15:15:33.143518 8018 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:33.575654 master-0 kubenswrapper[8018]: I0217 15:15:33.575613 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 17 15:15:33.578424 master-0 kubenswrapper[8018]: I0217 15:15:33.578388 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"0f85b3342f5b9ee3681b487c6f9af1503246e3aa95e4fcb3fbc34dc5c76ae7fa"} Feb 17 15:15:33.578527 master-0 kubenswrapper[8018]: I0217 15:15:33.578443 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"39d90e2b00141a0c491cc3ec8392a600a6a01595195a3aac176f6c4f99d06ad8"} Feb 17 15:15:33.580596 master-0 kubenswrapper[8018]: I0217 15:15:33.580548 8018 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/3.log" Feb 17 15:15:33.582673 master-0 kubenswrapper[8018]: I0217 15:15:33.582639 8018 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="2609f5414599cc846c5bc59d12f88634dafa03f2f1a0b4805e5779131227e7b6" exitCode=0 Feb 17 15:15:33.582780 master-0 kubenswrapper[8018]: I0217 15:15:33.582705 8018 scope.go:117] "RemoveContainer" containerID="e54ca6ceeabac12699eb8a3fc41f19416c7ec8d207ac963a337daa3c35a8bc0b" Feb 17 15:15:33.583024 master-0 kubenswrapper[8018]: I0217 15:15:33.582793 8018 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 17 15:15:33.587579 master-0 kubenswrapper[8018]: I0217 15:15:33.587535 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"d3daf534-9a77-49c6-964f-d402c5d5a2ac","Type":"ContainerDied","Data":"82581365f6f274c239792085af3cda355d57d00d3bb74c93451eabd859e47a2b"} Feb 17 15:15:33.587579 master-0 kubenswrapper[8018]: I0217 15:15:33.587570 8018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82581365f6f274c239792085af3cda355d57d00d3bb74c93451eabd859e47a2b" Feb 17 15:15:33.587734 master-0 kubenswrapper[8018]: I0217 15:15:33.587640 8018 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:15:33.602112 master-0 kubenswrapper[8018]: I0217 15:15:33.602040 8018 scope.go:117] "RemoveContainer" containerID="2609f5414599cc846c5bc59d12f88634dafa03f2f1a0b4805e5779131227e7b6" Feb 17 15:15:33.636091 master-0 kubenswrapper[8018]: I0217 15:15:33.636055 8018 scope.go:117] "RemoveContainer" containerID="127e7d6cc6eb018b1d6cae8de4b39737caa9da91bed2d8e85c54fc82de9aac1a" Feb 17 15:15:33.660282 master-0 kubenswrapper[8018]: I0217 15:15:33.660178 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " Feb 17 15:15:33.660282 master-0 kubenswrapper[8018]: I0217 15:15:33.660246 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " Feb 17 15:15:33.660282 master-0 kubenswrapper[8018]: I0217 
15:15:33.660285 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " Feb 17 15:15:33.660661 master-0 kubenswrapper[8018]: I0217 15:15:33.660309 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " Feb 17 15:15:33.660661 master-0 kubenswrapper[8018]: I0217 15:15:33.660329 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " Feb 17 15:15:33.662490 master-0 kubenswrapper[8018]: I0217 15:15:33.662290 8018 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " Feb 17 15:15:33.662625 master-0 kubenswrapper[8018]: I0217 15:15:33.662551 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs" (OuterVolumeSpecName: "logs") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:15:33.662625 master-0 kubenswrapper[8018]: I0217 15:15:33.662576 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets" (OuterVolumeSpecName: "secrets") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:15:33.662625 master-0 kubenswrapper[8018]: I0217 15:15:33.662591 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:15:33.662625 master-0 kubenswrapper[8018]: I0217 15:15:33.662605 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config" (OuterVolumeSpecName: "config") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:15:33.662625 master-0 kubenswrapper[8018]: I0217 15:15:33.662619 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "ssl-certs-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:15:33.662929 master-0 kubenswrapper[8018]: I0217 15:15:33.662636 8018 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:15:33.662929 master-0 kubenswrapper[8018]: I0217 15:15:33.662792 8018 scope.go:117] "RemoveContainer" containerID="e54ca6ceeabac12699eb8a3fc41f19416c7ec8d207ac963a337daa3c35a8bc0b" Feb 17 15:15:33.663260 master-0 kubenswrapper[8018]: E0217 15:15:33.663204 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e54ca6ceeabac12699eb8a3fc41f19416c7ec8d207ac963a337daa3c35a8bc0b\": container with ID starting with e54ca6ceeabac12699eb8a3fc41f19416c7ec8d207ac963a337daa3c35a8bc0b not found: ID does not exist" containerID="e54ca6ceeabac12699eb8a3fc41f19416c7ec8d207ac963a337daa3c35a8bc0b" Feb 17 15:15:33.663338 master-0 kubenswrapper[8018]: I0217 15:15:33.663251 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54ca6ceeabac12699eb8a3fc41f19416c7ec8d207ac963a337daa3c35a8bc0b"} err="failed to get container status \"e54ca6ceeabac12699eb8a3fc41f19416c7ec8d207ac963a337daa3c35a8bc0b\": rpc error: code = NotFound desc = could not find container \"e54ca6ceeabac12699eb8a3fc41f19416c7ec8d207ac963a337daa3c35a8bc0b\": container with ID starting with e54ca6ceeabac12699eb8a3fc41f19416c7ec8d207ac963a337daa3c35a8bc0b not found: ID does not exist" Feb 17 15:15:33.663338 master-0 kubenswrapper[8018]: I0217 15:15:33.663274 8018 scope.go:117] "RemoveContainer" containerID="2609f5414599cc846c5bc59d12f88634dafa03f2f1a0b4805e5779131227e7b6" Feb 17 
15:15:33.663516 master-0 kubenswrapper[8018]: E0217 15:15:33.663493 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2609f5414599cc846c5bc59d12f88634dafa03f2f1a0b4805e5779131227e7b6\": container with ID starting with 2609f5414599cc846c5bc59d12f88634dafa03f2f1a0b4805e5779131227e7b6 not found: ID does not exist" containerID="2609f5414599cc846c5bc59d12f88634dafa03f2f1a0b4805e5779131227e7b6" Feb 17 15:15:33.663601 master-0 kubenswrapper[8018]: I0217 15:15:33.663517 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2609f5414599cc846c5bc59d12f88634dafa03f2f1a0b4805e5779131227e7b6"} err="failed to get container status \"2609f5414599cc846c5bc59d12f88634dafa03f2f1a0b4805e5779131227e7b6\": rpc error: code = NotFound desc = could not find container \"2609f5414599cc846c5bc59d12f88634dafa03f2f1a0b4805e5779131227e7b6\": container with ID starting with 2609f5414599cc846c5bc59d12f88634dafa03f2f1a0b4805e5779131227e7b6 not found: ID does not exist" Feb 17 15:15:33.663601 master-0 kubenswrapper[8018]: I0217 15:15:33.663531 8018 scope.go:117] "RemoveContainer" containerID="127e7d6cc6eb018b1d6cae8de4b39737caa9da91bed2d8e85c54fc82de9aac1a" Feb 17 15:15:33.668149 master-0 kubenswrapper[8018]: E0217 15:15:33.668083 8018 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"127e7d6cc6eb018b1d6cae8de4b39737caa9da91bed2d8e85c54fc82de9aac1a\": container with ID starting with 127e7d6cc6eb018b1d6cae8de4b39737caa9da91bed2d8e85c54fc82de9aac1a not found: ID does not exist" containerID="127e7d6cc6eb018b1d6cae8de4b39737caa9da91bed2d8e85c54fc82de9aac1a" Feb 17 15:15:33.668149 master-0 kubenswrapper[8018]: I0217 15:15:33.668110 8018 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"127e7d6cc6eb018b1d6cae8de4b39737caa9da91bed2d8e85c54fc82de9aac1a"} err="failed 
to get container status \"127e7d6cc6eb018b1d6cae8de4b39737caa9da91bed2d8e85c54fc82de9aac1a\": rpc error: code = NotFound desc = could not find container \"127e7d6cc6eb018b1d6cae8de4b39737caa9da91bed2d8e85c54fc82de9aac1a\": container with ID starting with 127e7d6cc6eb018b1d6cae8de4b39737caa9da91bed2d8e85c54fc82de9aac1a not found: ID does not exist" Feb 17 15:15:33.763534 master-0 kubenswrapper[8018]: I0217 15:15:33.763304 8018 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:33.763534 master-0 kubenswrapper[8018]: I0217 15:15:33.763336 8018 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:33.763534 master-0 kubenswrapper[8018]: I0217 15:15:33.763348 8018 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:33.763534 master-0 kubenswrapper[8018]: I0217 15:15:33.763359 8018 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:33.763534 master-0 kubenswrapper[8018]: I0217 15:15:33.763380 8018 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:33.763534 master-0 kubenswrapper[8018]: I0217 15:15:33.763395 8018 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Feb 17 
15:15:34.595608 master-0 kubenswrapper[8018]: I0217 15:15:34.595493 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"68a438a4e14f80804f842c0c44dfda76c0251a3c52afe081bbd14694a703898a"} Feb 17 15:15:34.595608 master-0 kubenswrapper[8018]: I0217 15:15:34.595533 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"0a6f90db7355282c99c29dbf0363e0633a9d55c0e8f232d859147cef7d241a54"} Feb 17 15:15:34.595608 master-0 kubenswrapper[8018]: I0217 15:15:34.595544 8018 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"88cbd41012314cb9ee211332196a857cc4bf4c35b6149a5c3069d9a70f29b51a"} Feb 17 15:15:34.596079 master-0 kubenswrapper[8018]: I0217 15:15:34.595651 8018 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:35.447830 master-0 kubenswrapper[8018]: I0217 15:15:35.447775 8018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" path="/var/lib/kubelet/pods/5d1e91e5a1fed5cf7076a92d2830d36f/volumes" Feb 17 15:15:35.448293 master-0 kubenswrapper[8018]: I0217 15:15:35.448265 8018 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 17 15:15:37.620220 master-0 kubenswrapper[8018]: I0217 15:15:37.620170 8018 generic.go:334] "Generic (PLEG): container finished" podID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerID="860736c555e36eb357d7747028619f7c30730d9978a45e3a5c0a43cdd4bd9ba8" exitCode=0 Feb 17 15:15:37.957991 master-0 kubenswrapper[8018]: I0217 15:15:37.957809 8018 
dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 15:15:37.958119 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 17 15:15:37.979889 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 17 15:15:37.980405 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 17 15:15:37.985417 master-0 systemd[1]: kubelet.service: Consumed 1min 52.369s CPU time. Feb 17 15:15:38.014707 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 17 15:15:38.167538 master-0 kubenswrapper[26425]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 15:15:38.167538 master-0 kubenswrapper[26425]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 17 15:15:38.167538 master-0 kubenswrapper[26425]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 15:15:38.167538 master-0 kubenswrapper[26425]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 15:15:38.167538 master-0 kubenswrapper[26425]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 17 15:15:38.167538 master-0 kubenswrapper[26425]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 15:15:38.168693 master-0 kubenswrapper[26425]: I0217 15:15:38.167610 26425 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 17 15:15:38.174238 master-0 kubenswrapper[26425]: W0217 15:15:38.174174 26425 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 15:15:38.174391 master-0 kubenswrapper[26425]: W0217 15:15:38.174307 26425 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 15:15:38.174391 master-0 kubenswrapper[26425]: W0217 15:15:38.174321 26425 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 15:15:38.174391 master-0 kubenswrapper[26425]: W0217 15:15:38.174330 26425 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 15:15:38.174391 master-0 kubenswrapper[26425]: W0217 15:15:38.174340 26425 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 15:15:38.174391 master-0 kubenswrapper[26425]: W0217 15:15:38.174350 26425 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174402 26425 feature_gate.go:330] unrecognized feature gate: Example Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174414 26425 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174425 26425 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174434 26425 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174443 26425 feature_gate.go:330] unrecognized feature gate: 
ClusterMonitoringConfig Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174452 26425 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174608 26425 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174618 26425 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174627 26425 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174636 26425 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174645 26425 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174654 26425 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174663 26425 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174673 26425 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174682 26425 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174691 26425 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174699 26425 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174708 26425 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 
15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174717 26425 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 15:15:38.174822 master-0 kubenswrapper[26425]: W0217 15:15:38.174725 26425 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174734 26425 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174743 26425 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174752 26425 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174764 26425 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174790 26425 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174800 26425 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174810 26425 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174852 26425 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174866 26425 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174878 26425 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174889 26425 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 
15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174898 26425 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174908 26425 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174919 26425 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174932 26425 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174942 26425 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174953 26425 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174963 26425 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 15:15:38.176074 master-0 kubenswrapper[26425]: W0217 15:15:38.174973 26425 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.174983 26425 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.174992 26425 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175002 26425 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175015 26425 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175026 26425 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175036 26425 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175045 26425 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175055 26425 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175064 26425 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175074 26425 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175086 26425 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175097 26425 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175108 26425 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175117 26425 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175128 26425 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175140 26425 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175149 26425 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175161 26425 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 15:15:38.177131 master-0 kubenswrapper[26425]: W0217 15:15:38.175169 26425 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: W0217 15:15:38.175178 26425 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: W0217 15:15:38.175187 26425 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: W0217 15:15:38.175197 26425 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: W0217 15:15:38.175206 26425 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: W0217 15:15:38.175215 26425 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 15:15:38.178213 master-0 
kubenswrapper[26425]: W0217 15:15:38.175224 26425 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: W0217 15:15:38.175232 26425 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: W0217 15:15:38.175241 26425 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: I0217 15:15:38.175413 26425 flags.go:64] FLAG: --address="0.0.0.0" Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: I0217 15:15:38.175432 26425 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: I0217 15:15:38.175447 26425 flags.go:64] FLAG: --anonymous-auth="true" Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: I0217 15:15:38.175487 26425 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: I0217 15:15:38.175500 26425 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: I0217 15:15:38.175511 26425 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: I0217 15:15:38.175525 26425 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: I0217 15:15:38.175537 26425 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: I0217 15:15:38.175548 26425 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: I0217 15:15:38.175558 26425 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: I0217 15:15:38.175571 26425 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 17 
15:15:38.178213 master-0 kubenswrapper[26425]: I0217 15:15:38.175583 26425 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: I0217 15:15:38.175593 26425 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 17 15:15:38.178213 master-0 kubenswrapper[26425]: I0217 15:15:38.175603 26425 flags.go:64] FLAG: --cgroup-root="" Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175613 26425 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175624 26425 flags.go:64] FLAG: --client-ca-file="" Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175819 26425 flags.go:64] FLAG: --cloud-config="" Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175834 26425 flags.go:64] FLAG: --cloud-provider="" Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175845 26425 flags.go:64] FLAG: --cluster-dns="[]" Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175859 26425 flags.go:64] FLAG: --cluster-domain="" Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175869 26425 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175881 26425 flags.go:64] FLAG: --config-dir="" Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175891 26425 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175902 26425 flags.go:64] FLAG: --container-log-max-files="5" Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175915 26425 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175925 26425 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 
15:15:38.175935 26425 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175947 26425 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175957 26425 flags.go:64] FLAG: --contention-profiling="false"
Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175968 26425 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175978 26425 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.175989 26425 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.176000 26425 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.176013 26425 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.176023 26425 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.176034 26425 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.176044 26425 flags.go:64] FLAG: --enable-load-reader="false"
Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.176055 26425 flags.go:64] FLAG: --enable-server="true"
Feb 17 15:15:38.179669 master-0 kubenswrapper[26425]: I0217 15:15:38.176065 26425 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176078 26425 flags.go:64] FLAG: --event-burst="100"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176089 26425 flags.go:64] FLAG: --event-qps="50"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176102 26425 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176114 26425 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176124 26425 flags.go:64] FLAG: --eviction-hard=""
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176137 26425 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176147 26425 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176158 26425 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176169 26425 flags.go:64] FLAG: --eviction-soft=""
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176180 26425 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176191 26425 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176201 26425 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176211 26425 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176221 26425 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176232 26425 flags.go:64] FLAG: --fail-swap-on="true"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176242 26425 flags.go:64] FLAG: --feature-gates=""
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176254 26425 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176265 26425 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176275 26425 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176286 26425 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176297 26425 flags.go:64] FLAG: --healthz-port="10248"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176307 26425 flags.go:64] FLAG: --help="false"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176317 26425 flags.go:64] FLAG: --hostname-override=""
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176327 26425 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176338 26425 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 17 15:15:38.181170 master-0 kubenswrapper[26425]: I0217 15:15:38.176349 26425 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176360 26425 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176371 26425 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176382 26425 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176392 26425 flags.go:64] FLAG: --image-service-endpoint=""
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176403 26425 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176413 26425 flags.go:64] FLAG: --kube-api-burst="100"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176423 26425 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176434 26425 flags.go:64] FLAG: --kube-api-qps="50"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176445 26425 flags.go:64] FLAG: --kube-reserved=""
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176480 26425 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176491 26425 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176502 26425 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176512 26425 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176522 26425 flags.go:64] FLAG: --lock-file=""
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176542 26425 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176553 26425 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176564 26425 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176597 26425 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176607 26425 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176618 26425 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176629 26425 flags.go:64] FLAG: --logging-format="text"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176639 26425 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176650 26425 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176660 26425 flags.go:64] FLAG: --manifest-url=""
Feb 17 15:15:38.182749 master-0 kubenswrapper[26425]: I0217 15:15:38.176671 26425 flags.go:64] FLAG: --manifest-url-header=""
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176684 26425 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176695 26425 flags.go:64] FLAG: --max-open-files="1000000"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176707 26425 flags.go:64] FLAG: --max-pods="110"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176718 26425 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176729 26425 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176740 26425 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176750 26425 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176761 26425 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176771 26425 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176782 26425 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176805 26425 flags.go:64] FLAG: --node-status-max-images="50"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176815 26425 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176826 26425 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176836 26425 flags.go:64] FLAG: --pod-cidr=""
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176847 26425 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176985 26425 flags.go:64] FLAG: --pod-manifest-path=""
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.176998 26425 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.177009 26425 flags.go:64] FLAG: --pods-per-core="0"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.177020 26425 flags.go:64] FLAG: --port="10250"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.177031 26425 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.177042 26425 flags.go:64] FLAG: --provider-id=""
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.177052 26425 flags.go:64] FLAG: --qos-reserved=""
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.177067 26425 flags.go:64] FLAG: --read-only-port="10255"
Feb 17 15:15:38.184110 master-0 kubenswrapper[26425]: I0217 15:15:38.177077 26425 flags.go:64] FLAG: --register-node="true"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177087 26425 flags.go:64] FLAG: --register-schedulable="true"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177098 26425 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177114 26425 flags.go:64] FLAG: --registry-burst="10"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177125 26425 flags.go:64] FLAG: --registry-qps="5"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177137 26425 flags.go:64] FLAG: --reserved-cpus=""
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177148 26425 flags.go:64] FLAG: --reserved-memory=""
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177161 26425 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177172 26425 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177182 26425 flags.go:64] FLAG: --rotate-certificates="false"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177193 26425 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177203 26425 flags.go:64] FLAG: --runonce="false"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177214 26425 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177225 26425 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177236 26425 flags.go:64] FLAG: --seccomp-default="false"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177246 26425 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177256 26425 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177268 26425 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177279 26425 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177290 26425 flags.go:64] FLAG: --storage-driver-password="root"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177300 26425 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177335 26425 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177348 26425 flags.go:64] FLAG: --storage-driver-user="root"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177404 26425 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177416 26425 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 17 15:15:38.185446 master-0 kubenswrapper[26425]: I0217 15:15:38.177427 26425 flags.go:64] FLAG: --system-cgroups=""
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: I0217 15:15:38.177437 26425 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: I0217 15:15:38.177477 26425 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: I0217 15:15:38.177489 26425 flags.go:64] FLAG: --tls-cert-file=""
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: I0217 15:15:38.177499 26425 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: I0217 15:15:38.177513 26425 flags.go:64] FLAG: --tls-min-version=""
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: I0217 15:15:38.177526 26425 flags.go:64] FLAG: --tls-private-key-file=""
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: I0217 15:15:38.177536 26425 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: I0217 15:15:38.177547 26425 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: I0217 15:15:38.177558 26425 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: I0217 15:15:38.177568 26425 flags.go:64] FLAG: --v="2"
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: I0217 15:15:38.177581 26425 flags.go:64] FLAG: --version="false"
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: I0217 15:15:38.177594 26425 flags.go:64] FLAG: --vmodule=""
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: I0217 15:15:38.177606 26425 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: I0217 15:15:38.177617 26425 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: W0217 15:15:38.177838 26425 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: W0217 15:15:38.177850 26425 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: W0217 15:15:38.177860 26425 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: W0217 15:15:38.177871 26425 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: W0217 15:15:38.177881 26425 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: W0217 15:15:38.177890 26425 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: W0217 15:15:38.177899 26425 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: W0217 15:15:38.177907 26425 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 15:15:38.186817 master-0 kubenswrapper[26425]: W0217 15:15:38.177916 26425 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.177926 26425 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.177936 26425 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.177945 26425 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.177958 26425 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.177967 26425 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.177976 26425 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.177984 26425 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.177993 26425 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.178002 26425 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.178011 26425 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.178116 26425 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.178130 26425 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.178142 26425 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.178154 26425 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.178170 26425 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.178180 26425 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.178191 26425 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.178201 26425 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.178210 26425 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 17 15:15:38.188064 master-0 kubenswrapper[26425]: W0217 15:15:38.178221 26425 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178231 26425 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178240 26425 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178250 26425 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178259 26425 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178267 26425 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178277 26425 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178288 26425 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178297 26425 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178308 26425 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178317 26425 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178329 26425 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178340 26425 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178350 26425 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178359 26425 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178368 26425 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178381 26425 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178390 26425 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178399 26425 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178407 26425 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 15:15:38.189250 master-0 kubenswrapper[26425]: W0217 15:15:38.178418 26425 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178433 26425 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178446 26425 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178489 26425 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178502 26425 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178517 26425 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178533 26425 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178552 26425 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178563 26425 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178573 26425 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178583 26425 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178592 26425 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178602 26425 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178611 26425 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178621 26425 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178630 26425 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178638 26425 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178647 26425 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178656 26425 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 15:15:38.190387 master-0 kubenswrapper[26425]: W0217 15:15:38.178665 26425 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 15:15:38.191491 master-0 kubenswrapper[26425]: W0217 15:15:38.178673 26425 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 15:15:38.191491 master-0 kubenswrapper[26425]: W0217 15:15:38.178683 26425 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 15:15:38.191491 master-0 kubenswrapper[26425]: W0217 15:15:38.178691 26425 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 15:15:38.191491 master-0 kubenswrapper[26425]: W0217 15:15:38.178700 26425 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 15:15:38.191491 master-0 kubenswrapper[26425]: I0217 15:15:38.178728 26425 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 15:15:38.191491 master-0 kubenswrapper[26425]: I0217 15:15:38.186928 26425 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 17 15:15:38.191491 master-0 kubenswrapper[26425]: I0217 15:15:38.186966 26425 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 17 15:15:38.191491 master-0 kubenswrapper[26425]: W0217 15:15:38.187097 26425 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 15:15:38.191491 master-0 kubenswrapper[26425]: W0217 15:15:38.187112 26425 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 15:15:38.191491 master-0 kubenswrapper[26425]: W0217 15:15:38.187125 26425 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 15:15:38.191491 master-0 kubenswrapper[26425]: W0217 15:15:38.187137 26425 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 15:15:38.191491 master-0 kubenswrapper[26425]: W0217 15:15:38.187147 26425 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 15:15:38.191491 master-0 kubenswrapper[26425]: W0217 15:15:38.187156 26425 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 15:15:38.191491 master-0 kubenswrapper[26425]: W0217 15:15:38.187166 26425 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187175 26425 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187188 26425 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187198 26425 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187208 26425 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187220 26425 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187231 26425 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187242 26425 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187251 26425 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187260 26425 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187269 26425 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187278 26425 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187287 26425 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187296 26425 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187304 26425 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187313 26425 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187322 26425 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187334 26425 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187345 26425 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 15:15:38.192379 master-0 kubenswrapper[26425]: W0217 15:15:38.187355 26425 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187365 26425 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187375 26425 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187384 26425 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187394 26425 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187404 26425 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187416 26425 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187425 26425 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187434 26425 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187446 26425 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187488 26425 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187501 26425 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187513 26425 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187523 26425 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187532 26425 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187541 26425 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187549 26425 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187558 26425 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187568 26425 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187577 26425 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 15:15:38.193685 master-0 kubenswrapper[26425]: W0217 15:15:38.187586 26425 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187595 26425 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187604 26425 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187612 26425 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187621 26425 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187630 26425 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187640 26425 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187648 26425 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187657 26425 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187666 26425 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187675 26425 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187686 26425 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187695 26425 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187703 26425 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187712 26425 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187721 26425 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17
15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187730 26425 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187740 26425 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187751 26425 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187760 26425 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 15:15:38.194861 master-0 kubenswrapper[26425]: W0217 15:15:38.187769 26425 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: W0217 15:15:38.187778 26425 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: W0217 15:15:38.187786 26425 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: W0217 15:15:38.187795 26425 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: W0217 15:15:38.187804 26425 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: W0217 15:15:38.187814 26425 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: W0217 15:15:38.187823 26425 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: I0217 15:15:38.187892 26425 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false 
RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: W0217 15:15:38.188139 26425 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: W0217 15:15:38.188154 26425 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: W0217 15:15:38.188168 26425 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: W0217 15:15:38.188179 26425 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: W0217 15:15:38.188189 26425 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: W0217 15:15:38.188201 26425 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: W0217 15:15:38.188213 26425 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 15:15:38.196107 master-0 kubenswrapper[26425]: W0217 15:15:38.188224 26425 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188234 26425 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188243 26425 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188254 26425 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188263 26425 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188275 26425 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188285 26425 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188294 26425 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188304 26425 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188313 26425 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188323 26425 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188334 26425 feature_gate.go:330] unrecognized feature gate: 
EtcdBackendQuota Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188343 26425 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188351 26425 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188360 26425 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188369 26425 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188379 26425 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188388 26425 feature_gate.go:330] unrecognized feature gate: Example Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188398 26425 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 15:15:38.197081 master-0 kubenswrapper[26425]: W0217 15:15:38.188407 26425 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188418 26425 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188430 26425 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188440 26425 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188450 26425 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188491 26425 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188507 26425 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188521 26425 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188531 26425 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188540 26425 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188549 26425 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188558 26425 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188567 26425 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188577 26425 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188587 26425 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188598 26425 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188608 26425 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188617 26425 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188626 26425 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 15:15:38.198594 master-0 kubenswrapper[26425]: W0217 15:15:38.188636 26425 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188645 26425 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188653 26425 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188662 26425 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188671 26425 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188680 26425 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188689 26425 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188700 26425 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188712 26425 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 
15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188724 26425 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188735 26425 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188747 26425 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188759 26425 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188771 26425 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188782 26425 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188792 26425 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188801 26425 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188810 26425 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188819 26425 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188828 26425 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188837 26425 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 15:15:38.199696 master-0 kubenswrapper[26425]: W0217 15:15:38.188846 26425 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 17 15:15:38.201426 master-0 kubenswrapper[26425]: W0217 
15:15:38.188854 26425 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 15:15:38.201426 master-0 kubenswrapper[26425]: W0217 15:15:38.188863 26425 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 15:15:38.201426 master-0 kubenswrapper[26425]: W0217 15:15:38.188877 26425 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 15:15:38.201426 master-0 kubenswrapper[26425]: W0217 15:15:38.188888 26425 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 15:15:38.201426 master-0 kubenswrapper[26425]: W0217 15:15:38.188898 26425 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 15:15:38.201426 master-0 kubenswrapper[26425]: I0217 15:15:38.188911 26425 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 15:15:38.201426 master-0 kubenswrapper[26425]: I0217 15:15:38.189218 26425 server.go:940] "Client rotation is on, will bootstrap in background" Feb 17 15:15:38.201426 master-0 kubenswrapper[26425]: I0217 15:15:38.192308 26425 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 17 15:15:38.201426 master-0 kubenswrapper[26425]: I0217 15:15:38.192438 26425 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 17 15:15:38.201426 master-0 kubenswrapper[26425]: I0217 15:15:38.192925 26425 server.go:997] "Starting client certificate rotation" Feb 17 15:15:38.201426 master-0 kubenswrapper[26425]: I0217 15:15:38.192946 26425 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 17 15:15:38.201426 master-0 kubenswrapper[26425]: I0217 15:15:38.193254 26425 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-18 14:51:47 +0000 UTC, rotation deadline is 2026-02-18 11:35:38.003531354 +0000 UTC Feb 17 15:15:38.202235 master-0 kubenswrapper[26425]: I0217 15:15:38.193435 26425 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h19m59.810101418s for next certificate rotation Feb 17 15:15:38.202235 master-0 kubenswrapper[26425]: I0217 15:15:38.194219 26425 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 15:15:38.202235 master-0 kubenswrapper[26425]: I0217 15:15:38.196691 26425 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 15:15:38.202413 master-0 kubenswrapper[26425]: I0217 15:15:38.201558 26425 log.go:25] "Validated CRI v1 runtime API" Feb 17 15:15:38.211107 master-0 kubenswrapper[26425]: I0217 15:15:38.211057 26425 log.go:25] "Validated CRI v1 image API" Feb 17 15:15:38.213064 master-0 kubenswrapper[26425]: I0217 15:15:38.213019 26425 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 17 15:15:38.233860 master-0 kubenswrapper[26425]: I0217 15:15:38.233779 26425 fs.go:135] Filesystem UUIDs: map[4e612f26-a2b1-4cb3-97c9-965b3561529c:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 17 15:15:38.235151 master-0 kubenswrapper[26425]: I0217 15:15:38.233839 26425 fs.go:136] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/026610117c01997654c9e952b5a30927858c6efbfd458d75332f24ab296e1898/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/026610117c01997654c9e952b5a30927858c6efbfd458d75332f24ab296e1898/userdata/shm major:0 minor:1235 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/037eeb0eb6e9db7c0c16d981af4599e4cf0a6c4e36b47a40589e4b6308c2db61/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/037eeb0eb6e9db7c0c16d981af4599e4cf0a6c4e36b47a40589e4b6308c2db61/userdata/shm major:0 minor:105 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0592ebe07bf5febe5898e5f99574d61161c0cfa6ea6743adf0c7c030853141ad/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0592ebe07bf5febe5898e5f99574d61161c0cfa6ea6743adf0c7c030853141ad/userdata/shm major:0 minor:959 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0760e00b932363042782ba956e380d806e3d87e24d2f82f4acd8b411bacdc365/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0760e00b932363042782ba956e380d806e3d87e24d2f82f4acd8b411bacdc365/userdata/shm major:0 minor:1125 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/086a5a64a12e3769988f4ec34ed2d0887c71f02b30e735e84ddbfdf4eb16618d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/086a5a64a12e3769988f4ec34ed2d0887c71f02b30e735e84ddbfdf4eb16618d/userdata/shm major:0 minor:928 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/086d9bb4b9a7ac8b6af3cbff40a452b0f16d3de1089172ce89af2a258294dacf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/086d9bb4b9a7ac8b6af3cbff40a452b0f16d3de1089172ce89af2a258294dacf/userdata/shm major:0 minor:539 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0dd6efeec5aa4e3106337fbe40d1f21673b7458663cc20e53895ac682e535656/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0dd6efeec5aa4e3106337fbe40d1f21673b7458663cc20e53895ac682e535656/userdata/shm major:0 minor:474 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/16817c879758d5dca93902f6417f76df9adc387ff018e7fa4b42bb730dfe7417/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/16817c879758d5dca93902f6417f76df9adc387ff018e7fa4b42bb730dfe7417/userdata/shm major:0 minor:824 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1698c2cc5bd5ca4b021102d13c99be9074c3ec259c76c5314910f3a09569a96d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1698c2cc5bd5ca4b021102d13c99be9074c3ec259c76c5314910f3a09569a96d/userdata/shm major:0 minor:901 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1a48fa419617a63ec8e2935cb2e257afe77ca02b6d759f71cc3cf2b3946d190c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1a48fa419617a63ec8e2935cb2e257afe77ca02b6d759f71cc3cf2b3946d190c/userdata/shm major:0 minor:117 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1c9e969e18b1411cff6ba15e9601c6a1a570693b9fa41b729154f36c3d4cfc86/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1c9e969e18b1411cff6ba15e9601c6a1a570693b9fa41b729154f36c3d4cfc86/userdata/shm major:0 minor:97 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/260124ead6b34d5e3c90fbb769ec2cf0de3926cb1ef0da2632429f164c63d3f5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/260124ead6b34d5e3c90fbb769ec2cf0de3926cb1ef0da2632429f164c63d3f5/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/298673e77b46ac4f7d905ff32814664148ad0db661cddcaaee10cf189d3684c5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/298673e77b46ac4f7d905ff32814664148ad0db661cddcaaee10cf189d3684c5/userdata/shm major:0 minor:499 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2f085db99c3eb79269fb1e6fd494d3581c1cf5a588e1bb05f613f668bdfc997e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2f085db99c3eb79269fb1e6fd494d3581c1cf5a588e1bb05f613f668bdfc997e/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2f38747bdec24188d4ffe8cfb159d9a08ab099ae4fe10c6fb530c6bc6745fe0f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2f38747bdec24188d4ffe8cfb159d9a08ab099ae4fe10c6fb530c6bc6745fe0f/userdata/shm major:0 minor:1238 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/30157c99e347dac95082456d5e90aaa231761068887f6a65d5089463dbf44226/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/30157c99e347dac95082456d5e90aaa231761068887f6a65d5089463dbf44226/userdata/shm major:0 minor:1231 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/31de4b8284b14c5b1bbb2ee4e5ce05c9d7231167ee625f5a71f3b94980671845/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/31de4b8284b14c5b1bbb2ee4e5ce05c9d7231167ee625f5a71f3b94980671845/userdata/shm major:0 minor:49 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/3de92b39f5eed6fb2072489b003ac88b141cc4450863a8a84bd84754c9097e8a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3de92b39f5eed6fb2072489b003ac88b141cc4450863a8a84bd84754c9097e8a/userdata/shm major:0 minor:420 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/46b63befb37c207e59dcc8df42c0e9e3530c0f2f24f79765bda06ad35b9b950d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/46b63befb37c207e59dcc8df42c0e9e3530c0f2f24f79765bda06ad35b9b950d/userdata/shm major:0 minor:48 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a7917f93b759157396676df5270d9f55ac3fb5ce7081908f3a79c2dd1fbffdd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a7917f93b759157396676df5270d9f55ac3fb5ce7081908f3a79c2dd1fbffdd/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4ae9c7ad8143a0b1cfbbc04f9419df3b288d0c3ef1448b00390641786802dac4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4ae9c7ad8143a0b1cfbbc04f9419df3b288d0c3ef1448b00390641786802dac4/userdata/shm major:0 minor:505 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4da475428a7f62dfe7d403b74dec1f34a8023a64243ff1dae7d9b66e78408144/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4da475428a7f62dfe7d403b74dec1f34a8023a64243ff1dae7d9b66e78408144/userdata/shm major:0 minor:113 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/50580897aab729847bb16b1be89c08ccaf45ebad432b32e9d2c48074ace08db5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/50580897aab729847bb16b1be89c08ccaf45ebad432b32e9d2c48074ace08db5/userdata/shm major:0 minor:771 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/509218f044076ea16f2a86823735e4d543562d1744406223dc68c1c720aa876c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/509218f044076ea16f2a86823735e4d543562d1744406223dc68c1c720aa876c/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/564e010b4acb371ea5e896019bc8692ecf42f40acab59fc53fd175dccbfd8d9f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/564e010b4acb371ea5e896019bc8692ecf42f40acab59fc53fd175dccbfd8d9f/userdata/shm major:0 minor:966 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/57edd3b523cd1b85d285ca94528fb2e1279d3c9bd1b74461a1727888cc91ac92/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/57edd3b523cd1b85d285ca94528fb2e1279d3c9bd1b74461a1727888cc91ac92/userdata/shm major:0 minor:503 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5864628e0f7acbb3a1150a63134adcb1c6b05e8c9b623b722fd4249df83d522e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5864628e0f7acbb3a1150a63134adcb1c6b05e8c9b623b722fd4249df83d522e/userdata/shm major:0 minor:502 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5922fb8c007ad599e40a5354516760730a0cba79810d4b9259cefea52493ddb5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5922fb8c007ad599e40a5354516760730a0cba79810d4b9259cefea52493ddb5/userdata/shm major:0 minor:1349 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5c5c50866e3cb4c94d1db9f4dadfbc576e6ef20acac9999e34844dc18779f223/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5c5c50866e3cb4c94d1db9f4dadfbc576e6ef20acac9999e34844dc18779f223/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/63333766efa7717806a0ceafcfe5e910596ee1f9959715b67862349cd0661743/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/63333766efa7717806a0ceafcfe5e910596ee1f9959715b67862349cd0661743/userdata/shm major:0 minor:1047 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/68f6c5cb6453d46aa30d342c53404fb01aa054a3d48f9b074af6e17af00f9a94/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/68f6c5cb6453d46aa30d342c53404fb01aa054a3d48f9b074af6e17af00f9a94/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6968fe4893506f2c7eff240b0f99304a06f7947186a1a85995eef13747cf455c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6968fe4893506f2c7eff240b0f99304a06f7947186a1a85995eef13747cf455c/userdata/shm major:0 minor:495 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6af13ec50eaaf18a25827e26c3ea1670c47ef4c0aea537a274e7191217763a74/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6af13ec50eaaf18a25827e26c3ea1670c47ef4c0aea537a274e7191217763a74/userdata/shm major:0 minor:301 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/722d47350d1c81810576142df11eff4e518dcde59f93678f428ad5eb7002bb4a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/722d47350d1c81810576142df11eff4e518dcde59f93678f428ad5eb7002bb4a/userdata/shm major:0 minor:521 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/77a5b96685468a1686135c8d7d6d053d9bc8223dda29da38cb0e4b9ffeb56e90/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/77a5b96685468a1686135c8d7d6d053d9bc8223dda29da38cb0e4b9ffeb56e90/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/798daf69301c189b976c0bf567e715514f72cff14e7ac9ab6e91e0049055219a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/798daf69301c189b976c0bf567e715514f72cff14e7ac9ab6e91e0049055219a/userdata/shm major:0 minor:307 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/79cd9922eddeda66f86396279d7c2d92bdfdde5d55f7ab9b86712ce128d7d382/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/79cd9922eddeda66f86396279d7c2d92bdfdde5d55f7ab9b86712ce128d7d382/userdata/shm major:0 minor:1103 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7a489b2f48772d80be863a6db3f491f779fbf0d6ac9f7d5ba2c4ec793715f4de/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7a489b2f48772d80be863a6db3f491f779fbf0d6ac9f7d5ba2c4ec793715f4de/userdata/shm major:0 minor:932 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7cbf31d43472a3a7627226214b8578cd050b8394e6c44d935043c903b69b9fb9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7cbf31d43472a3a7627226214b8578cd050b8394e6c44d935043c903b69b9fb9/userdata/shm major:0 minor:1053 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/80a35c92c437f32b29f410d19a1ce0763e9f007a6c4df0b00fdf0704012a2c09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/80a35c92c437f32b29f410d19a1ce0763e9f007a6c4df0b00fdf0704012a2c09/userdata/shm major:0 minor:1300 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/82a4950a547d0a59e18c269c45642d4e42307ae5014626ff584ece03ffa671c2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/82a4950a547d0a59e18c269c45642d4e42307ae5014626ff584ece03ffa671c2/userdata/shm major:0 minor:599 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/88069f4ccbdf201c4be62b11d0e703527a7a79f09f40906dc3a787d78261c8ef/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/88069f4ccbdf201c4be62b11d0e703527a7a79f09f40906dc3a787d78261c8ef/userdata/shm major:0 minor:500 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/90185a33c5824935ed29e0663472f7e339a5f2977a9bf3a460b9dc4b17b433c5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/90185a33c5824935ed29e0663472f7e339a5f2977a9bf3a460b9dc4b17b433c5/userdata/shm major:0 minor:293 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/906174604cb39234c29ce4879ec0f4d93014bdd017a01d3e85d6c19518222596/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/906174604cb39234c29ce4879ec0f4d93014bdd017a01d3e85d6c19518222596/userdata/shm major:0 minor:507 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/93996d5f48081a9791fdf6e6762201dc4779ca732e535e3274b5773782da8cf9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/93996d5f48081a9791fdf6e6762201dc4779ca732e535e3274b5773782da8cf9/userdata/shm major:0 minor:1051 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9ed78a9839985d5d2408f3da695d76e5290df2767573b14d7ae5d1aa3204d65a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9ed78a9839985d5d2408f3da695d76e5290df2767573b14d7ae5d1aa3204d65a/userdata/shm major:0 minor:1077 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a00011bbe3917f68bb68f28876dff59eea7dbd62d26bc18f5f5ed40cb1d0b447/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a00011bbe3917f68bb68f28876dff59eea7dbd62d26bc18f5f5ed40cb1d0b447/userdata/shm major:0 minor:1024 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a3a77a00a966d03623fbb6190f7a54610fa74ee604fa29802c44b60a21f260b9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a3a77a00a966d03623fbb6190f7a54610fa74ee604fa29802c44b60a21f260b9/userdata/shm major:0 minor:509 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a576c816a4856d1ffb304e4f810329e8d6608ef0502c0b4373fab4f3b3f5101a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a576c816a4856d1ffb304e4f810329e8d6608ef0502c0b4373fab4f3b3f5101a/userdata/shm major:0 minor:1203 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a592584f1d491ed515603e4859ea07fdb301bfabbc222443eff56b510fc57717/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a592584f1d491ed515603e4859ea07fdb301bfabbc222443eff56b510fc57717/userdata/shm major:0 minor:1163 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a681cbc579a95de476c193412db5500c7b6a259702d2ab059c0ee35c97e7da06/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a681cbc579a95de476c193412db5500c7b6a259702d2ab059c0ee35c97e7da06/userdata/shm major:0 minor:496 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ac3405a44e64442f5f84de1f2fe4affb9bf6727f46c3097b260717adce5a4719/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ac3405a44e64442f5f84de1f2fe4affb9bf6727f46c3097b260717adce5a4719/userdata/shm major:0 minor:345 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aebd0546beb5f26027662152b9f3fbf064714cf96a6113f61f98182131ca4a45/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aebd0546beb5f26027662152b9f3fbf064714cf96a6113f61f98182131ca4a45/userdata/shm major:0 minor:1208 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/af54fa9c62b28e67f68bc78aa9667df2cc9eef72a60d8febb3ead750686eb226/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/af54fa9c62b28e67f68bc78aa9667df2cc9eef72a60d8febb3ead750686eb226/userdata/shm major:0 minor:283 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/afa3f59e2bc7466bd1b06c51e7ed2d9d6a3926c00535b006d8f4a5730c12a974/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/afa3f59e2bc7466bd1b06c51e7ed2d9d6a3926c00535b006d8f4a5730c12a974/userdata/shm major:0 minor:1134 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b52356412bf9fd67c8890a1f481f22c4b980d0a142cbe7f6af8b97d5f5816dbd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b52356412bf9fd67c8890a1f481f22c4b980d0a142cbe7f6af8b97d5f5816dbd/userdata/shm major:0 minor:295 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b616967df2f9b9831e325809cacecbe30b62dd3ec32bcf016d1563ff3ad31860/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b616967df2f9b9831e325809cacecbe30b62dd3ec32bcf016d1563ff3ad31860/userdata/shm major:0 minor:408 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b654a908d6c1613bc2c0e365ea3089a784b0763c8a27f9b68976fba5622c284d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b654a908d6c1613bc2c0e365ea3089a784b0763c8a27f9b68976fba5622c284d/userdata/shm major:0 minor:598 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b65552bcab35fe164881e8ac001f1baa5fa85be7a3b6063a3edbe790f67bf18a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b65552bcab35fe164881e8ac001f1baa5fa85be7a3b6063a3edbe790f67bf18a/userdata/shm major:0 minor:1343 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b7039f4f79e0da973650e82a180456282f520c1801cf5f3f024cba6892c24045/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b7039f4f79e0da973650e82a180456282f520c1801cf5f3f024cba6892c24045/userdata/shm major:0 minor:290 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bc1acede92d3904b085d891408e47b6331ba105ca16c08deba24871e1ded582f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bc1acede92d3904b085d891408e47b6331ba105ca16c08deba24871e1ded582f/userdata/shm major:0 minor:411 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bdb8ad9bd5f944be0c16716ab7cf723ba4fecb8874a24d8035e247bed4275d02/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bdb8ad9bd5f944be0c16716ab7cf723ba4fecb8874a24d8035e247bed4275d02/userdata/shm major:0 minor:365 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bef471f18c3a5fc8cbfeb510c0e87f5bef875fc2331927f07cde13d3315509be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bef471f18c3a5fc8cbfeb510c0e87f5bef875fc2331927f07cde13d3315509be/userdata/shm major:0 minor:930 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bf4ca08876e89c113fcc009804049d8ec19b6a489b50574b76595b73486b7936/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bf4ca08876e89c113fcc009804049d8ec19b6a489b50574b76595b73486b7936/userdata/shm major:0 minor:779 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c0026d8b6e87a23d662a3c94357c0b35295466aca75ebd69cf4fb6b87a87fe76/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c0026d8b6e87a23d662a3c94357c0b35295466aca75ebd69cf4fb6b87a87fe76/userdata/shm major:0 minor:143 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c066e0aa98f24b311ae58142339472cef6d647c5cb0ec12d82196966a66f6bc2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c066e0aa98f24b311ae58142339472cef6d647c5cb0ec12d82196966a66f6bc2/userdata/shm major:0 minor:990 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c15c55254b60eef4e6f082f6ebb85ff7cc6e3f7a7f4e7b7ce280e5a616be4326/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c15c55254b60eef4e6f082f6ebb85ff7cc6e3f7a7f4e7b7ce280e5a616be4326/userdata/shm major:0 minor:724 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5029165f3acbba6c500e380aa4ddf091a7ab8015a5fcfab4cef7dd1e1f0cbff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5029165f3acbba6c500e380aa4ddf091a7ab8015a5fcfab4cef7dd1e1f0cbff/userdata/shm major:0 minor:1266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5835c841de8851cc594c071b21f8e95885283a9272de7eff7fcffb6067e8c9a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5835c841de8851cc594c071b21f8e95885283a9272de7eff7fcffb6067e8c9a/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c73742e20a24cd489609b6484bb7dd86a6b3725d2919288b5ca15357b170f83e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c73742e20a24cd489609b6484bb7dd86a6b3725d2919288b5ca15357b170f83e/userdata/shm major:0 minor:1270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c78b15cceeb9a13c85a4191822de34b4c848b664ef3622c58cc74eb63d4ebbb5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c78b15cceeb9a13c85a4191822de34b4c848b664ef3622c58cc74eb63d4ebbb5/userdata/shm major:0 minor:294 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c9858df9f585446eefac53619f522937c2be744d976350b3d2fae4ea17d7449e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c9858df9f585446eefac53619f522937c2be744d976350b3d2fae4ea17d7449e/userdata/shm major:0 minor:875 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c9a0cb53cadb3321345d154cf27268733399d5b983fe25d9e3ac83b00fa3506d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c9a0cb53cadb3321345d154cf27268733399d5b983fe25d9e3ac83b00fa3506d/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cba6e963b84ef59c8499695b7e9c3fc6bfc32f8754ee29ed5aa61fc3c50b955c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cba6e963b84ef59c8499695b7e9c3fc6bfc32f8754ee29ed5aa61fc3c50b955c/userdata/shm major:0 minor:917 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cff1bcb58e476c7626406f50da253d7834cc1bd8b48bce0f6a4957d02e2b8cc9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cff1bcb58e476c7626406f50da253d7834cc1bd8b48bce0f6a4957d02e2b8cc9/userdata/shm major:0 minor:69 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d7bc3eacfb0cf92ff3aa201ca8580ef11806f506d319e9d528672f5e695d8979/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d7bc3eacfb0cf92ff3aa201ca8580ef11806f506d319e9d528672f5e695d8979/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e92e0041b6c4bdb12ce4e7a526a8155669347c6f7534daf537c2b7896eac3825/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e92e0041b6c4bdb12ce4e7a526a8155669347c6f7534daf537c2b7896eac3825/userdata/shm major:0 minor:506 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ec0152f98764cdbb982d9d6afbcb74cd9b99357115a9c691e939ad71b14ad183/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ec0152f98764cdbb982d9d6afbcb74cd9b99357115a9c691e939ad71b14ad183/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f3cfbf80866e1ffdd35b49c1ad868e8dd39bef071d0be58efd7099ec81a6c339/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f3cfbf80866e1ffdd35b49c1ad868e8dd39bef071d0be58efd7099ec81a6c339/userdata/shm major:0 minor:642 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f565a312b6fdba1e4420f7c51d0c06303db46761e8bdf7c0064ba897805dc24a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f565a312b6fdba1e4420f7c51d0c06303db46761e8bdf7c0064ba897805dc24a/userdata/shm major:0 minor:644 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/071566ae-a9ae-4aa9-9dc3-38602363be72/volumes/kubernetes.io~projected/kube-api-access-hrh2k:{mountpoint:/var/lib/kubelet/pods/071566ae-a9ae-4aa9-9dc3-38602363be72/volumes/kubernetes.io~projected/kube-api-access-hrh2k major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/071566ae-a9ae-4aa9-9dc3-38602363be72/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/071566ae-a9ae-4aa9-9dc3-38602363be72/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:484 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/071566ae-a9ae-4aa9-9dc3-38602363be72/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/071566ae-a9ae-4aa9-9dc3-38602363be72/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:488 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~projected/kube-api-access-d8wxf:{mountpoint:/var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~projected/kube-api-access-d8wxf major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~secret/srv-cert major:0 minor:470 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/volumes/kubernetes.io~projected/kube-api-access-gxjqf:{mountpoint:/var/lib/kubelet/pods/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/volumes/kubernetes.io~projected/kube-api-access-gxjqf major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/124ba199-b79a-4e5c-8512-cc0ae50f73c8/volumes/kubernetes.io~projected/kube-api-access-dmp42:{mountpoint:/var/lib/kubelet/pods/124ba199-b79a-4e5c-8512-cc0ae50f73c8/volumes/kubernetes.io~projected/kube-api-access-dmp42 major:0 minor:594 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/124ba199-b79a-4e5c-8512-cc0ae50f73c8/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/124ba199-b79a-4e5c-8512-cc0ae50f73c8/volumes/kubernetes.io~secret/encryption-config major:0 minor:591 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/124ba199-b79a-4e5c-8512-cc0ae50f73c8/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/124ba199-b79a-4e5c-8512-cc0ae50f73c8/volumes/kubernetes.io~secret/etcd-client major:0 minor:592 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/124ba199-b79a-4e5c-8512-cc0ae50f73c8/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/124ba199-b79a-4e5c-8512-cc0ae50f73c8/volumes/kubernetes.io~secret/serving-cert major:0 minor:593 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/129dba1e-73df-4ea4-96c0-3eba78d568ba/volumes/kubernetes.io~projected/kube-api-access-rbmb9:{mountpoint:/var/lib/kubelet/pods/129dba1e-73df-4ea4-96c0-3eba78d568ba/volumes/kubernetes.io~projected/kube-api-access-rbmb9 major:0 minor:410 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14723cb7-2d96-42b7-b559-70386c4c841c/volumes/kubernetes.io~projected/kube-api-access-7lw7x:{mountpoint:/var/lib/kubelet/pods/14723cb7-2d96-42b7-b559-70386c4c841c/volumes/kubernetes.io~projected/kube-api-access-7lw7x major:0 minor:958 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14723cb7-2d96-42b7-b559-70386c4c841c/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/14723cb7-2d96-42b7-b559-70386c4c841c/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:945 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~projected/kube-api-access-jpgqg:{mountpoint:/var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~projected/kube-api-access-jpgqg major:0 minor:279 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:486 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d481a79-f565-4c7f-84cc-207fc3117c23/volumes/kubernetes.io~projected/kube-api-access-d2tcz:{mountpoint:/var/lib/kubelet/pods/1d481a79-f565-4c7f-84cc-207fc3117c23/volumes/kubernetes.io~projected/kube-api-access-d2tcz major:0 minor:494 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d481a79-f565-4c7f-84cc-207fc3117c23/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/1d481a79-f565-4c7f-84cc-207fc3117c23/volumes/kubernetes.io~secret/encryption-config major:0 minor:462 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d481a79-f565-4c7f-84cc-207fc3117c23/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/1d481a79-f565-4c7f-84cc-207fc3117c23/volumes/kubernetes.io~secret/etcd-client major:0 minor:442 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d481a79-f565-4c7f-84cc-207fc3117c23/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1d481a79-f565-4c7f-84cc-207fc3117c23/volumes/kubernetes.io~secret/serving-cert major:0 minor:438 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2102e834-2b36-49de-a99e-c2dbe64d722f/volumes/kubernetes.io~projected/kube-api-access-hq2mb:{mountpoint:/var/lib/kubelet/pods/2102e834-2b36-49de-a99e-c2dbe64d722f/volumes/kubernetes.io~projected/kube-api-access-hq2mb major:0 minor:989 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2102e834-2b36-49de-a99e-c2dbe64d722f/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/2102e834-2b36-49de-a99e-c2dbe64d722f/volumes/kubernetes.io~secret/proxy-tls major:0 minor:984 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~projected/kube-api-access-bh874:{mountpoint:/var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~projected/kube-api-access-bh874 major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~secret/metrics-tls major:0 minor:491 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~projected/kube-api-access-jg8h7:{mountpoint:/var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~projected/kube-api-access-jg8h7 major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~secret/srv-cert major:0 minor:492 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b167b7b-2280-4c82-ac78-71c57aebe503/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/2b167b7b-2280-4c82-ac78-71c57aebe503/volumes/kubernetes.io~projected/kube-api-access major:0 minor:257 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2b167b7b-2280-4c82-ac78-71c57aebe503/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2b167b7b-2280-4c82-ac78-71c57aebe503/volumes/kubernetes.io~secret/serving-cert major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31e31afc-79d5-46f4-9835-0fd11da9465f/volumes/kubernetes.io~projected/kube-api-access-jh2m4:{mountpoint:/var/lib/kubelet/pods/31e31afc-79d5-46f4-9835-0fd11da9465f/volumes/kubernetes.io~projected/kube-api-access-jh2m4 major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31e31afc-79d5-46f4-9835-0fd11da9465f/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/31e31afc-79d5-46f4-9835-0fd11da9465f/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/volumes/kubernetes.io~projected/kube-api-access-wn8df:{mountpoint:/var/lib/kubelet/pods/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/volumes/kubernetes.io~projected/kube-api-access-wn8df major:0 minor:280 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:471 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3db03cef-d297-4bf7-8e52-dd0b18882d07/volumes/kubernetes.io~projected/kube-api-access-xrg27:{mountpoint:/var/lib/kubelet/pods/3db03cef-d297-4bf7-8e52-dd0b18882d07/volumes/kubernetes.io~projected/kube-api-access-xrg27 major:0 minor:473 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3db03cef-d297-4bf7-8e52-dd0b18882d07/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3db03cef-d297-4bf7-8e52-dd0b18882d07/volumes/kubernetes.io~secret/serving-cert major:0 minor:472 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4b2b7830-6ee0-4d87-a57b-dc668de4b39a/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/4b2b7830-6ee0-4d87-a57b-dc668de4b39a/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:729 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4b2b7830-6ee0-4d87-a57b-dc668de4b39a/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/4b2b7830-6ee0-4d87-a57b-dc668de4b39a/volumes/kubernetes.io~empty-dir/tmp major:0 minor:730 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4b2b7830-6ee0-4d87-a57b-dc668de4b39a/volumes/kubernetes.io~projected/kube-api-access-pnhjw:{mountpoint:/var/lib/kubelet/pods/4b2b7830-6ee0-4d87-a57b-dc668de4b39a/volumes/kubernetes.io~projected/kube-api-access-pnhjw major:0 minor:684 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4fd2c79d-1e10-4f09-8a33-c66598abc99a/volumes/kubernetes.io~projected/kube-api-access-mgwfb:{mountpoint:/var/lib/kubelet/pods/4fd2c79d-1e10-4f09-8a33-c66598abc99a/volumes/kubernetes.io~projected/kube-api-access-mgwfb major:0 minor:111 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4fd2c79d-1e10-4f09-8a33-c66598abc99a/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4fd2c79d-1e10-4f09-8a33-c66598abc99a/volumes/kubernetes.io~secret/metrics-tls major:0 minor:67 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/50c51fe2-32aa-430f-8da0-7cf3b9519131/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/50c51fe2-32aa-430f-8da0-7cf3b9519131/volumes/kubernetes.io~projected/ca-certs major:0 minor:590 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/50c51fe2-32aa-430f-8da0-7cf3b9519131/volumes/kubernetes.io~projected/kube-api-access-8g48f:{mountpoint:/var/lib/kubelet/pods/50c51fe2-32aa-430f-8da0-7cf3b9519131/volumes/kubernetes.io~projected/kube-api-access-8g48f major:0 minor:584 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/52b28595-f0fc-49e2-9c95-43e5f1eb003f/volumes/kubernetes.io~projected/kube-api-access-klfm5:{mountpoint:/var/lib/kubelet/pods/52b28595-f0fc-49e2-9c95-43e5f1eb003f/volumes/kubernetes.io~projected/kube-api-access-klfm5 major:0 minor:394 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/553d4535-9985-47e2-83ee-8fcfb6035e7b/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/553d4535-9985-47e2-83ee-8fcfb6035e7b/volumes/kubernetes.io~projected/kube-api-access major:0 minor:273 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/553d4535-9985-47e2-83ee-8fcfb6035e7b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/553d4535-9985-47e2-83ee-8fcfb6035e7b/volumes/kubernetes.io~secret/serving-cert major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/61d90bf3-02df-48c8-b2ec-09a1653b0800/volumes/kubernetes.io~projected/kube-api-access-5wbvx:{mountpoint:/var/lib/kubelet/pods/61d90bf3-02df-48c8-b2ec-09a1653b0800/volumes/kubernetes.io~projected/kube-api-access-5wbvx major:0 minor:269 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/61d90bf3-02df-48c8-b2ec-09a1653b0800/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/61d90bf3-02df-48c8-b2ec-09a1653b0800/volumes/kubernetes.io~secret/serving-cert major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/626c4f7a-59ee-45da-9198-05dd2c42ac42/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/626c4f7a-59ee-45da-9198-05dd2c42ac42/volumes/kubernetes.io~projected/kube-api-access major:0 minor:881 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/626c4f7a-59ee-45da-9198-05dd2c42ac42/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/626c4f7a-59ee-45da-9198-05dd2c42ac42/volumes/kubernetes.io~secret/serving-cert major:0 minor:876 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/632fa4c3-b717-432c-8c5f-8d809f69c48b/volumes/kubernetes.io~projected/kube-api-access-8bpwm:{mountpoint:/var/lib/kubelet/pods/632fa4c3-b717-432c-8c5f-8d809f69c48b/volumes/kubernetes.io~projected/kube-api-access-8bpwm major:0 minor:270 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/655e4000-0ad4-4349-8c31-e0c952e4be30/volumes/kubernetes.io~projected/kube-api-access-qf69t:{mountpoint:/var/lib/kubelet/pods/655e4000-0ad4-4349-8c31-e0c952e4be30/volumes/kubernetes.io~projected/kube-api-access-qf69t major:0 minor:975 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/655e4000-0ad4-4349-8c31-e0c952e4be30/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/655e4000-0ad4-4349-8c31-e0c952e4be30/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:1157 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65d9f008-7777-48fe-85fe-9d54a7bbcea9/volumes/kubernetes.io~projected/kube-api-access-9g7zh:{mountpoint:/var/lib/kubelet/pods/65d9f008-7777-48fe-85fe-9d54a7bbcea9/volumes/kubernetes.io~projected/kube-api-access-9g7zh major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65d9f008-7777-48fe-85fe-9d54a7bbcea9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/65d9f008-7777-48fe-85fe-9d54a7bbcea9/volumes/kubernetes.io~secret/serving-cert major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/68954d1e-2147-4465-9817-a3c04cbc19b0/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/68954d1e-2147-4465-9817-a3c04cbc19b0/volumes/kubernetes.io~projected/ca-certs major:0 minor:523 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/68954d1e-2147-4465-9817-a3c04cbc19b0/volumes/kubernetes.io~projected/kube-api-access-4lwz4:{mountpoint:/var/lib/kubelet/pods/68954d1e-2147-4465-9817-a3c04cbc19b0/volumes/kubernetes.io~projected/kube-api-access-4lwz4 major:0 minor:524 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/68954d1e-2147-4465-9817-a3c04cbc19b0/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/68954d1e-2147-4465-9817-a3c04cbc19b0/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:536 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b7d1adb-b23b-4702-be7d-27e818e8fd63/volumes/kubernetes.io~projected/kube-api-access-cr7lv:{mountpoint:/var/lib/kubelet/pods/6b7d1adb-b23b-4702-be7d-27e818e8fd63/volumes/kubernetes.io~projected/kube-api-access-cr7lv major:0 minor:913 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b7d1adb-b23b-4702-be7d-27e818e8fd63/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/6b7d1adb-b23b-4702-be7d-27e818e8fd63/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:1120 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c734c89-515e-4ff0-82d1-831ddaf0b99e/volumes/kubernetes.io~projected/kube-api-access-rddwz:{mountpoint:/var/lib/kubelet/pods/6c734c89-515e-4ff0-82d1-831ddaf0b99e/volumes/kubernetes.io~projected/kube-api-access-rddwz major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c734c89-515e-4ff0-82d1-831ddaf0b99e/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/6c734c89-515e-4ff0-82d1-831ddaf0b99e/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d56f334-6c7b-4c92-9665-56300d44f9a3/volumes/kubernetes.io~projected/kube-api-access-k8ckv:{mountpoint:/var/lib/kubelet/pods/6d56f334-6c7b-4c92-9665-56300d44f9a3/volumes/kubernetes.io~projected/kube-api-access-k8ckv major:0 minor:791 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d56f334-6c7b-4c92-9665-56300d44f9a3/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/6d56f334-6c7b-4c92-9665-56300d44f9a3/volumes/kubernetes.io~secret/cert major:0 minor:1265 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/70e43034-56d0-4fb2-8886-deb00b625686/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/70e43034-56d0-4fb2-8886-deb00b625686/volumes/kubernetes.io~projected/kube-api-access major:0 minor:1350 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/727f20b6-19c7-45eb-a803-6898ecaeffd0/volumes/kubernetes.io~projected/kube-api-access-bpwhf:{mountpoint:/var/lib/kubelet/pods/727f20b6-19c7-45eb-a803-6898ecaeffd0/volumes/kubernetes.io~projected/kube-api-access-bpwhf major:0 minor:331 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7307f70e-ee5b-4f81-8155-718a02c9efe7/volumes/kubernetes.io~projected/kube-api-access-dzrmf:{mountpoint:/var/lib/kubelet/pods/7307f70e-ee5b-4f81-8155-718a02c9efe7/volumes/kubernetes.io~projected/kube-api-access-dzrmf major:0 minor:916 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7307f70e-ee5b-4f81-8155-718a02c9efe7/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/7307f70e-ee5b-4f81-8155-718a02c9efe7/volumes/kubernetes.io~secret/cert major:0 minor:914 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7307f70e-ee5b-4f81-8155-718a02c9efe7/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/7307f70e-ee5b-4f81-8155-718a02c9efe7/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:915 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/75486ba2-6fde-456f-8846-2af67e58d585/volumes/kubernetes.io~projected/kube-api-access-wjb95:{mountpoint:/var/lib/kubelet/pods/75486ba2-6fde-456f-8846-2af67e58d585/volumes/kubernetes.io~projected/kube-api-access-wjb95 major:0 minor:1102 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/75486ba2-6fde-456f-8846-2af67e58d585/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/75486ba2-6fde-456f-8846-2af67e58d585/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1098 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/76d3da23-3347-4a5c-b328-d92671897ecc/volumes/kubernetes.io~projected/kube-api-access-jhm88:{mountpoint:/var/lib/kubelet/pods/76d3da23-3347-4a5c-b328-d92671897ecc/volumes/kubernetes.io~projected/kube-api-access-jhm88 major:0 minor:1087 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76d3da23-3347-4a5c-b328-d92671897ecc/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/76d3da23-3347-4a5c-b328-d92671897ecc/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:1205 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/784b804f-6bcf-4cbd-a19e-9b1fa244354e/volumes/kubernetes.io~projected/kube-api-access-8cx29:{mountpoint:/var/lib/kubelet/pods/784b804f-6bcf-4cbd-a19e-9b1fa244354e/volumes/kubernetes.io~projected/kube-api-access-8cx29 major:0 minor:1074 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/784b804f-6bcf-4cbd-a19e-9b1fa244354e/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/784b804f-6bcf-4cbd-a19e-9b1fa244354e/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1072 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/784b804f-6bcf-4cbd-a19e-9b1fa244354e/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/784b804f-6bcf-4cbd-a19e-9b1fa244354e/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1198 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c393109-8c98-4a73-be1a-608038e5d094/volumes/kubernetes.io~projected/kube-api-access-f54vt:{mountpoint:/var/lib/kubelet/pods/7c393109-8c98-4a73-be1a-608038e5d094/volumes/kubernetes.io~projected/kube-api-access-f54vt major:0 minor:1299 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c393109-8c98-4a73-be1a-608038e5d094/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/7c393109-8c98-4a73-be1a-608038e5d094/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1297 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7c393109-8c98-4a73-be1a-608038e5d094/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/7c393109-8c98-4a73-be1a-608038e5d094/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c393109-8c98-4a73-be1a-608038e5d094/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/7c393109-8c98-4a73-be1a-608038e5d094/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1298 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/volumes/kubernetes.io~projected/kube-api-access-cpq86:{mountpoint:/var/lib/kubelet/pods/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/volumes/kubernetes.io~projected/kube-api-access-cpq86 major:0 minor:166 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/volumes/kubernetes.io~secret/webhook-cert major:0 minor:167 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/801742a6-3735-4883-9676-e852dc4173d2/volumes/kubernetes.io~projected/kube-api-access-qxqt4:{mountpoint:/var/lib/kubelet/pods/801742a6-3735-4883-9676-e852dc4173d2/volumes/kubernetes.io~projected/kube-api-access-qxqt4 major:0 minor:278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/833c8661-28ca-463a-ac61-6edb961056e3/volumes/kubernetes.io~projected/kube-api-access-2ghlk:{mountpoint:/var/lib/kubelet/pods/833c8661-28ca-463a-ac61-6edb961056e3/volumes/kubernetes.io~projected/kube-api-access-2ghlk major:0 minor:640 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~projected/kube-api-access-sj92w:{mountpoint:/var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~projected/kube-api-access-sj92w major:0 minor:1269 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~secret/federate-client-tls:{mountpoint:/var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~secret/federate-client-tls major:0 minor:1274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~secret/secret-telemeter-client:{mountpoint:/var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~secret/secret-telemeter-client major:0 minor:1272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config major:0 minor:1268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~secret/telemeter-client-tls:{mountpoint:/var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~secret/telemeter-client-tls major:0 minor:1273 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8385a176-0e12-47ef-862e-8331e6734b9c/volumes/kubernetes.io~projected/kube-api-access-lnnxm:{mountpoint:/var/lib/kubelet/pods/8385a176-0e12-47ef-862e-8331e6734b9c/volumes/kubernetes.io~projected/kube-api-access-lnnxm major:0 minor:926 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8385a176-0e12-47ef-862e-8331e6734b9c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8385a176-0e12-47ef-862e-8331e6734b9c/volumes/kubernetes.io~secret/serving-cert major:0 minor:924 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d317dcb-ea6a-4066-b197-5ee960dec01a/volumes/kubernetes.io~projected/kube-api-access-nwptc:{mountpoint:/var/lib/kubelet/pods/8d317dcb-ea6a-4066-b197-5ee960dec01a/volumes/kubernetes.io~projected/kube-api-access-nwptc major:0
minor:676 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d317dcb-ea6a-4066-b197-5ee960dec01a/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/8d317dcb-ea6a-4066-b197-5ee960dec01a/volumes/kubernetes.io~secret/metrics-tls major:0 minor:761 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/94f5fac8-582e-44a3-8dd5-c4e6e80829ef/volumes/kubernetes.io~projected/kube-api-access-cpmdw:{mountpoint:/var/lib/kubelet/pods/94f5fac8-582e-44a3-8dd5-c4e6e80829ef/volumes/kubernetes.io~projected/kube-api-access-cpmdw major:0 minor:629 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9768ef3d-4f12-4303-98cb-56f8ebe05039/volumes/kubernetes.io~projected/kube-api-access-tk6jm:{mountpoint:/var/lib/kubelet/pods/9768ef3d-4f12-4303-98cb-56f8ebe05039/volumes/kubernetes.io~projected/kube-api-access-tk6jm major:0 minor:1075 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9768ef3d-4f12-4303-98cb-56f8ebe05039/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/9768ef3d-4f12-4303-98cb-56f8ebe05039/volumes/kubernetes.io~secret/certs major:0 minor:1073 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9768ef3d-4f12-4303-98cb-56f8ebe05039/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/9768ef3d-4f12-4303-98cb-56f8ebe05039/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:905 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volumes/kubernetes.io~projected/kube-api-access-mgs5v:{mountpoint:/var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volumes/kubernetes.io~projected/kube-api-access-mgs5v major:0 minor:141 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040/volumes/kubernetes.io~projected/kube-api-access-4rcj2:{mountpoint:/var/lib/kubelet/pods/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040/volumes/kubernetes.io~projected/kube-api-access-4rcj2 major:0 minor:1228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a2d6e329-7ad8-4fc2-accc-66827f11743d/volumes/kubernetes.io~projected/kube-api-access-8q8jf:{mountpoint:/var/lib/kubelet/pods/a2d6e329-7ad8-4fc2-accc-66827f11743d/volumes/kubernetes.io~projected/kube-api-access-8q8jf major:0 minor:1044 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a2d6e329-7ad8-4fc2-accc-66827f11743d/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/a2d6e329-7ad8-4fc2-accc-66827f11743d/volumes/kubernetes.io~secret/default-certificate major:0 minor:1036 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a2d6e329-7ad8-4fc2-accc-66827f11743d/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/a2d6e329-7ad8-4fc2-accc-66827f11743d/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1043 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a2d6e329-7ad8-4fc2-accc-66827f11743d/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/a2d6e329-7ad8-4fc2-accc-66827f11743d/volumes/kubernetes.io~secret/stats-auth major:0 minor:1041 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3b6a099-f52a-428a-af09-d1842ce66891/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/a3b6a099-f52a-428a-af09-d1842ce66891/volumes/kubernetes.io~projected/kube-api-access major:0 minor:1335 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aa267e55-eef2-447f-b2ff-57c1ec2930be/volumes/kubernetes.io~projected/kube-api-access-nx8s7:{mountpoint:/var/lib/kubelet/pods/aa267e55-eef2-447f-b2ff-57c1ec2930be/volumes/kubernetes.io~projected/kube-api-access-nx8s7 major:0 minor:763 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad81b5bd-2f97-4e7e-a12b-746998fa59f2/volumes/kubernetes.io~projected/kube-api-access-9t5jv:{mountpoint:/var/lib/kubelet/pods/ad81b5bd-2f97-4e7e-a12b-746998fa59f2/volumes/kubernetes.io~projected/kube-api-access-9t5jv major:0 minor:925 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad81b5bd-2f97-4e7e-a12b-746998fa59f2/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/ad81b5bd-2f97-4e7e-a12b-746998fa59f2/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:923 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af61bda0-c7b4-489d-a671-eaa5299942fe/volumes/kubernetes.io~projected/kube-api-access-jt7w4:{mountpoint:/var/lib/kubelet/pods/af61bda0-c7b4-489d-a671-eaa5299942fe/volumes/kubernetes.io~projected/kube-api-access-jt7w4 major:0 minor:268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af61bda0-c7b4-489d-a671-eaa5299942fe/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/af61bda0-c7b4-489d-a671-eaa5299942fe/volumes/kubernetes.io~secret/serving-cert major:0 minor:246 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b0f95c87-6a4a-44f2-b6d4-18f167ea430f/volumes/kubernetes.io~projected/kube-api-access-gswxb:{mountpoint:/var/lib/kubelet/pods/b0f95c87-6a4a-44f2-b6d4-18f167ea430f/volumes/kubernetes.io~projected/kube-api-access-gswxb major:0 minor:422 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b0f95c87-6a4a-44f2-b6d4-18f167ea430f/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/b0f95c87-6a4a-44f2-b6d4-18f167ea430f/volumes/kubernetes.io~secret/signing-key major:0 minor:421 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b4422676-9a70-4973-8299-7b40a66e9c96/volumes/kubernetes.io~projected/kube-api-access-27gfx:{mountpoint:/var/lib/kubelet/pods/b4422676-9a70-4973-8299-7b40a66e9c96/volumes/kubernetes.io~projected/kube-api-access-27gfx major:0 minor:900 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b4422676-9a70-4973-8299-7b40a66e9c96/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/b4422676-9a70-4973-8299-7b40a66e9c96/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:895 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b58e9d93-7683-440d-a603-9543e5455490/volumes/kubernetes.io~projected/kube-api-access-l2d4n:{mountpoint:/var/lib/kubelet/pods/b58e9d93-7683-440d-a603-9543e5455490/volumes/kubernetes.io~projected/kube-api-access-l2d4n major:0 minor:952 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b58e9d93-7683-440d-a603-9543e5455490/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/b58e9d93-7683-440d-a603-9543e5455490/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:946 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b58e9d93-7683-440d-a603-9543e5455490/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/b58e9d93-7683-440d-a603-9543e5455490/volumes/kubernetes.io~secret/webhook-cert major:0 minor:947 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ba1306f7-029b-4d43-ba3c-5738da9148d6/volumes/kubernetes.io~projected/kube-api-access-7pn82:{mountpoint:/var/lib/kubelet/pods/ba1306f7-029b-4d43-ba3c-5738da9148d6/volumes/kubernetes.io~projected/kube-api-access-7pn82 major:0 minor:1021 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ba1306f7-029b-4d43-ba3c-5738da9148d6/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/ba1306f7-029b-4d43-ba3c-5738da9148d6/volumes/kubernetes.io~secret/proxy-tls major:0 minor:1017 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699/volumes/kubernetes.io~projected/kube-api-access-6t2vg:{mountpoint:/var/lib/kubelet/pods/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699/volumes/kubernetes.io~projected/kube-api-access-6t2vg major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:490 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c33efa80-fbeb-438a-86e3-d22d7c12d3e9/volumes/kubernetes.io~projected/kube-api-access-zr2dv:{mountpoint:/var/lib/kubelet/pods/c33efa80-fbeb-438a-86e3-d22d7c12d3e9/volumes/kubernetes.io~projected/kube-api-access-zr2dv major:0 minor:46 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c435347a-ac01-46af-8192-9ef2d632bdfb/volumes/kubernetes.io~projected/kube-api-access-j5w6f:{mountpoint:/var/lib/kubelet/pods/c435347a-ac01-46af-8192-9ef2d632bdfb/volumes/kubernetes.io~projected/kube-api-access-j5w6f major:0 minor:1229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c435347a-ac01-46af-8192-9ef2d632bdfb/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/c435347a-ac01-46af-8192-9ef2d632bdfb/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1101 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c435347a-ac01-46af-8192-9ef2d632bdfb/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/c435347a-ac01-46af-8192-9ef2d632bdfb/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6d23570-21d6-4b08-83fc-8b0827c25313/volumes/kubernetes.io~projected/kube-api-access-czt92:{mountpoint:/var/lib/kubelet/pods/c6d23570-21d6-4b08-83fc-8b0827c25313/volumes/kubernetes.io~projected/kube-api-access-czt92 major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6d23570-21d6-4b08-83fc-8b0827c25313/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/c6d23570-21d6-4b08-83fc-8b0827c25313/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:489 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/volumes/kubernetes.io~projected/kube-api-access-8xbnc:{mountpoint:/var/lib/kubelet/pods/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/volumes/kubernetes.io~projected/kube-api-access-8xbnc major:0 minor:267 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c8646e5c-c2ce-48e6-b757-58044769f479/volumes/kubernetes.io~projected/kube-api-access-t9wh2:{mountpoint:/var/lib/kubelet/pods/c8646e5c-c2ce-48e6-b757-58044769f479/volumes/kubernetes.io~projected/kube-api-access-t9wh2 major:0 minor:919 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c8646e5c-c2ce-48e6-b757-58044769f479/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/c8646e5c-c2ce-48e6-b757-58044769f479/volumes/kubernetes.io~secret/cert major:0 minor:1129 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c97d328c-95b6-4511-aa90-531ab42b9653/volumes/kubernetes.io~projected/kube-api-access-qzrph:{mountpoint:/var/lib/kubelet/pods/c97d328c-95b6-4511-aa90-531ab42b9653/volumes/kubernetes.io~projected/kube-api-access-qzrph major:0 minor:912 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c97d328c-95b6-4511-aa90-531ab42b9653/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/c97d328c-95b6-4511-aa90-531ab42b9653/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:1115 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cdbde712-c8dd-4011-adcb-af895abce94c/volumes/kubernetes.io~projected/kube-api-access-9fj8w:{mountpoint:/var/lib/kubelet/pods/cdbde712-c8dd-4011-adcb-af895abce94c/volumes/kubernetes.io~projected/kube-api-access-9fj8w major:0 minor:1230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cdbde712-c8dd-4011-adcb-af895abce94c/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/cdbde712-c8dd-4011-adcb-af895abce94c/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1096 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cdbde712-c8dd-4011-adcb-af895abce94c/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/cdbde712-c8dd-4011-adcb-af895abce94c/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d075439c-721d-432b-b4f9-9f078132bf92/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/d075439c-721d-432b-b4f9-9f078132bf92/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1040 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d973c9bc-8097-489c-9b8b-70b775177c41/volumes/kubernetes.io~projected/kube-api-access-gkb9r:{mountpoint:/var/lib/kubelet/pods/d973c9bc-8097-489c-9b8b-70b775177c41/volumes/kubernetes.io~projected/kube-api-access-gkb9r 
major:0 minor:1046 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da06cfcb-7c78-4022-96b1-d858853f5adc/volumes/kubernetes.io~projected/kube-api-access-xpsd7:{mountpoint:/var/lib/kubelet/pods/da06cfcb-7c78-4022-96b1-d858853f5adc/volumes/kubernetes.io~projected/kube-api-access-xpsd7 major:0 minor:927 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da06cfcb-7c78-4022-96b1-d858853f5adc/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/da06cfcb-7c78-4022-96b1-d858853f5adc/volumes/kubernetes.io~secret/proxy-tls major:0 minor:922 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e259b5a1-837b-4cde-85f7-cd5781af08bd/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e259b5a1-837b-4cde-85f7-cd5781af08bd/volumes/kubernetes.io~projected/kube-api-access major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e259b5a1-837b-4cde-85f7-cd5781af08bd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e259b5a1-837b-4cde-85f7-cd5781af08bd/volumes/kubernetes.io~secret/serving-cert major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e6d0ea7a-6784-4c13-ad65-6c947dbcf136/volumes/kubernetes.io~projected/kube-api-access-spcf4:{mountpoint:/var/lib/kubelet/pods/e6d0ea7a-6784-4c13-ad65-6c947dbcf136/volumes/kubernetes.io~projected/kube-api-access-spcf4 major:0 minor:804 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e6d0ea7a-6784-4c13-ad65-6c947dbcf136/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e6d0ea7a-6784-4c13-ad65-6c947dbcf136/volumes/kubernetes.io~secret/serving-cert major:0 minor:802 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/volumes/kubernetes.io~projected/kube-api-access-7nzlr:{mountpoint:/var/lib/kubelet/pods/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/volumes/kubernetes.io~projected/kube-api-access-7nzlr major:0 minor:263 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/volumes/kubernetes.io~secret/serving-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~projected/kube-api-access-jcb68:{mountpoint:/var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~projected/kube-api-access-jcb68 major:0 minor:266 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~secret/etcd-client major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~secret/serving-cert major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb153362-0abb-4aad-8975-532f6e72d032/volumes/kubernetes.io~projected/kube-api-access-7bzqs:{mountpoint:/var/lib/kubelet/pods/fb153362-0abb-4aad-8975-532f6e72d032/volumes/kubernetes.io~projected/kube-api-access-7bzqs major:0 minor:128 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9/volumes/kubernetes.io~projected/kube-api-access-562gp:{mountpoint:/var/lib/kubelet/pods/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9/volumes/kubernetes.io~projected/kube-api-access-562gp major:0 minor:112 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fc216ba1-144a-4cc8-93db-85ab558a166a/volumes/kubernetes.io~projected/kube-api-access-7gwpz:{mountpoint:/var/lib/kubelet/pods/fc216ba1-144a-4cc8-93db-85ab558a166a/volumes/kubernetes.io~projected/kube-api-access-7gwpz major:0 minor:47 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fc76384d-b288-4d30-bc77-f696b62a5f30/volumes/kubernetes.io~projected/kube-api-access-lw6dc:{mountpoint:/var/lib/kubelet/pods/fc76384d-b288-4d30-bc77-f696b62a5f30/volumes/kubernetes.io~projected/kube-api-access-lw6dc major:0 minor:277 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fc76384d-b288-4d30-bc77-f696b62a5f30/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/fc76384d-b288-4d30-bc77-f696b62a5f30/volumes/kubernetes.io~secret/metrics-tls major:0 minor:493 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fce9579e-7383-421e-95dd-8f8b786817f9/volumes/kubernetes.io~projected/kube-api-access-7brbd:{mountpoint:/var/lib/kubelet/pods/fce9579e-7383-421e-95dd-8f8b786817f9/volumes/kubernetes.io~projected/kube-api-access-7brbd major:0 minor:135 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fce9579e-7383-421e-95dd-8f8b786817f9/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/fce9579e-7383-421e-95dd-8f8b786817f9/volumes/kubernetes.io~secret/metrics-certs major:0 minor:487 fsType:tmpfs blockSize:0} overlay_0-1000:{mountpoint:/var/lib/containers/storage/overlay/f8387be54a6db0355582e2ae562ef562b120f4aaeb08fd178e004b64c67974fa/merged major:0 minor:1000 fsType:overlay blockSize:0} overlay_0-1014:{mountpoint:/var/lib/containers/storage/overlay/4343380da6e730e39a3c29ecf018877d6169ea2a542c03de49b91bfb831f6298/merged major:0 minor:1014 fsType:overlay blockSize:0} overlay_0-1016:{mountpoint:/var/lib/containers/storage/overlay/9c4bea60462c40a59461752922cdaedddbfc63e777b3e02c0cf8cb7c451a55c1/merged major:0 minor:1016 fsType:overlay blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/3c7388565590a40c584b78d104515d741f26a07e59b36b19d0bd82c63a72123c/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-1026:{mountpoint:/var/lib/containers/storage/overlay/5dc7bb1ce723183a4be5edf38297abde63256c60ebd13d2066ebe8fc7993aa57/merged major:0 minor:1026 fsType:overlay blockSize:0} 
overlay_0-1028:{mountpoint:/var/lib/containers/storage/overlay/efb192a36f918257496e05552fb0b8a357c17fc986fbdb2ece8d53eedf0f2c68/merged major:0 minor:1028 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/67d8ab462c12b88de71c55d343a380acffc8a5929cd74cad8dca884d8fa220a9/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-1030:{mountpoint:/var/lib/containers/storage/overlay/596581b29c0517cfb9d8500c27267a9137e214e6d4c9d5ead12f9add1fd6189c/merged major:0 minor:1030 fsType:overlay blockSize:0} overlay_0-1049:{mountpoint:/var/lib/containers/storage/overlay/771c5e57d039df70644c5a8eb534408bdbc533b7ac912a88fecf3cfb6d1b692a/merged major:0 minor:1049 fsType:overlay blockSize:0} overlay_0-1057:{mountpoint:/var/lib/containers/storage/overlay/69bcb9788c6e50ff8cd14cea3a529f00b89d4c6a010d626e04657d4d1c139d78/merged major:0 minor:1057 fsType:overlay blockSize:0} overlay_0-1059:{mountpoint:/var/lib/containers/storage/overlay/b4797b0d61ed98f8f447112b018e16a30d711a6023174174fa215846c15cea46/merged major:0 minor:1059 fsType:overlay blockSize:0} overlay_0-1061:{mountpoint:/var/lib/containers/storage/overlay/f7c5274ce387404b1c74084813022567d596c5d381ca15e9bb815609b6d04d13/merged major:0 minor:1061 fsType:overlay blockSize:0} overlay_0-1063:{mountpoint:/var/lib/containers/storage/overlay/b2a8c7fe456be2bf2c26536a47982cc7491ffa216f30cc180b4057353de80d97/merged major:0 minor:1063 fsType:overlay blockSize:0} overlay_0-1070:{mountpoint:/var/lib/containers/storage/overlay/b5d998727bf7bf658629f0e5c53626778a6a162e13fd2589acc106607a74bbbc/merged major:0 minor:1070 fsType:overlay blockSize:0} overlay_0-1083:{mountpoint:/var/lib/containers/storage/overlay/543d97000be065b117b5505f42285533b21adc71741bfc3218475d557112a700/merged major:0 minor:1083 fsType:overlay blockSize:0} overlay_0-1085:{mountpoint:/var/lib/containers/storage/overlay/190626f11a77dc51c1e6f7c174e5e9caef53d302dcf49d280976e52546258ad4/merged major:0 minor:1085 fsType:overlay blockSize:0} 
overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/7d871c5cf45406be9583577a2f22ef2f090f1a86201c462a9651221f54008bb5/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-1105:{mountpoint:/var/lib/containers/storage/overlay/38526b1cac193e98926f97c7efe833b7ac71c93528f48b2b60edf5650185950d/merged major:0 minor:1105 fsType:overlay blockSize:0} overlay_0-1107:{mountpoint:/var/lib/containers/storage/overlay/7dc297b2b003b332e8e5cceff467e41c484442a715e1ecdc8cffb9293bc816c5/merged major:0 minor:1107 fsType:overlay blockSize:0} overlay_0-1109:{mountpoint:/var/lib/containers/storage/overlay/58617f0aef80fa36e8d710b6eecd13a01f219651fa1c91589d0cb7c190a0ab4d/merged major:0 minor:1109 fsType:overlay blockSize:0} overlay_0-1127:{mountpoint:/var/lib/containers/storage/overlay/599ab96d518663b7e873d6e2686a732e8d06b4411d913f250ab794ef14fa2622/merged major:0 minor:1127 fsType:overlay blockSize:0} overlay_0-1136:{mountpoint:/var/lib/containers/storage/overlay/1703f18424cb8cf9a023561feb0c540eabf122c1bbe0b2a51b30768b62594a5f/merged major:0 minor:1136 fsType:overlay blockSize:0} overlay_0-1139:{mountpoint:/var/lib/containers/storage/overlay/bc792e3618733c96288883e60afb483f267d4b8b78d4e568e4047b2bf60d66c1/merged major:0 minor:1139 fsType:overlay blockSize:0} overlay_0-1141:{mountpoint:/var/lib/containers/storage/overlay/b08c7411e723cb510ca5ade68f28e9c7dd017411c34aa38e5a905f8f5fb97258/merged major:0 minor:1141 fsType:overlay blockSize:0} overlay_0-1148:{mountpoint:/var/lib/containers/storage/overlay/48017a724beae35699117eb7f3ea0fa67f86f05c1e667813182b07da4b63d726/merged major:0 minor:1148 fsType:overlay blockSize:0} overlay_0-115:{mountpoint:/var/lib/containers/storage/overlay/6f94038febf39a452956cf6d684b14da18abeb7d0994a33650b948c9d3e3c109/merged major:0 minor:115 fsType:overlay blockSize:0} overlay_0-1151:{mountpoint:/var/lib/containers/storage/overlay/6578b37cb4b00d51b8ce8c4909990ad096c6dbcd6f7d6d2481205e77a9e0c235/merged major:0 minor:1151 fsType:overlay blockSize:0} 
overlay_0-1153:{mountpoint:/var/lib/containers/storage/overlay/5d2a84234b164f15336f2bed103d9c8824b52285928f8fe66bb7ae2bb72375fc/merged major:0 minor:1153 fsType:overlay blockSize:0} overlay_0-1161:{mountpoint:/var/lib/containers/storage/overlay/6b11c898401555472d0d836e07e1f49fcdfc2d213f9271af74cd3916afa3322f/merged major:0 minor:1161 fsType:overlay blockSize:0} overlay_0-1168:{mountpoint:/var/lib/containers/storage/overlay/0fe0f23d193754a13e667b9eaa2bc7f0ec0a867d5c85e4b676083d06941978fd/merged major:0 minor:1168 fsType:overlay blockSize:0} overlay_0-1170:{mountpoint:/var/lib/containers/storage/overlay/b7a019803c2b5e3829a6bf69197f3701bd30d1d9ff85160ac00533f41e20736a/merged major:0 minor:1170 fsType:overlay blockSize:0} overlay_0-1172:{mountpoint:/var/lib/containers/storage/overlay/c7dfee3a280afcb2868bd319d9724c6a05a1766afbe7834d8cd2dd035517ee1b/merged major:0 minor:1172 fsType:overlay blockSize:0} overlay_0-1180:{mountpoint:/var/lib/containers/storage/overlay/8f32a7acb731adc7cc10ccce1684a2642ae21973c53642b0b7cfb3624f20da07/merged major:0 minor:1180 fsType:overlay blockSize:0} overlay_0-1189:{mountpoint:/var/lib/containers/storage/overlay/559fa69e236b7aabdc55daf02e96faa111720492023f7a18da2cdeade884b41c/merged major:0 minor:1189 fsType:overlay blockSize:0} overlay_0-119:{mountpoint:/var/lib/containers/storage/overlay/fa1a4874cb0fb982ee9d5601bc3d91a97190fbdfc2360a693dc9e83198e58558/merged major:0 minor:119 fsType:overlay blockSize:0} overlay_0-1196:{mountpoint:/var/lib/containers/storage/overlay/342792850e43f86e7add79ad6b8a67f76b1568ed9d0e25936262cebc60a66d81/merged major:0 minor:1196 fsType:overlay blockSize:0} overlay_0-1200:{mountpoint:/var/lib/containers/storage/overlay/33ea6dc40dcd0e7990f6a9a3b624a7202a2a0c9b7d06e500ef6f07f724032f21/merged major:0 minor:1200 fsType:overlay blockSize:0} overlay_0-1206:{mountpoint:/var/lib/containers/storage/overlay/542d534885c22613e55ee100ea375bc818f7577c56477a352ae1a03802e2376e/merged major:0 minor:1206 fsType:overlay blockSize:0} 
overlay_0-1210:{mountpoint:/var/lib/containers/storage/overlay/b1b32e8c3ced416747dcf4839467a8a47bd2998ad9adb7713cf6805dac81ed59/merged major:0 minor:1210 fsType:overlay blockSize:0} overlay_0-1212:{mountpoint:/var/lib/containers/storage/overlay/9bab2b9ee3df4965e1a83ee8bfd119ee10a18a948b9f22b7bbc784c4b4ade2fa/merged major:0 minor:1212 fsType:overlay blockSize:0} overlay_0-1214:{mountpoint:/var/lib/containers/storage/overlay/dc38080f1c407606e0ebfb0db62f6cb9a47c48a4467b2a0229f598f2c8018ea5/merged major:0 minor:1214 fsType:overlay blockSize:0} overlay_0-1219:{mountpoint:/var/lib/containers/storage/overlay/1e2e832700d6871ece3b3ae11e5a62ef486c7c8c01675247994af3609f64ccfe/merged major:0 minor:1219 fsType:overlay blockSize:0} overlay_0-122:{mountpoint:/var/lib/containers/storage/overlay/902c21b6f4b95df40caf5ae6b08f908e72d4115b580eae3fbfcd4c37dedc0667/merged major:0 minor:122 fsType:overlay blockSize:0} overlay_0-123:{mountpoint:/var/lib/containers/storage/overlay/4026c00efd3f44958e89789080dec338f35e4f7f91356eab032388f48d6f6a6b/merged major:0 minor:123 fsType:overlay blockSize:0} overlay_0-1233:{mountpoint:/var/lib/containers/storage/overlay/22f3c98fbd6baaf0b4f6eb9988ccbf60303730c33caa70b7725a67431568abe6/merged major:0 minor:1233 fsType:overlay blockSize:0} overlay_0-124:{mountpoint:/var/lib/containers/storage/overlay/6a671383bdcd97e987282dd4dfac2fb3d9d8c0a525f9f98da34f60b2f053e942/merged major:0 minor:124 fsType:overlay blockSize:0} overlay_0-1240:{mountpoint:/var/lib/containers/storage/overlay/6c9f8b5f84ab37a10349bbe6f79ec55056e74c2d2dce9578421be9fe2ff7069c/merged major:0 minor:1240 fsType:overlay blockSize:0} overlay_0-1242:{mountpoint:/var/lib/containers/storage/overlay/3ff8551677bb35f76af8e672da13c09ff4a056869c841d6d203561cf7340384a/merged major:0 minor:1242 fsType:overlay blockSize:0} overlay_0-1244:{mountpoint:/var/lib/containers/storage/overlay/b92b524bbce99eede3513f73a3f17f60c36072a22fe92a4d6232316fcdd21045/merged major:0 minor:1244 fsType:overlay blockSize:0} 
overlay_0-1253:{mountpoint:/var/lib/containers/storage/overlay/6caa409b27b1804886c981e15864092932c9c297846d5bb204e90ecc6c8f50cf/merged major:0 minor:1253 fsType:overlay blockSize:0} overlay_0-1262:{mountpoint:/var/lib/containers/storage/overlay/1be8f66720d1eb02b01919141c51432d4344eb99e111746d97361e457d274b6b/merged major:0 minor:1262 fsType:overlay blockSize:0} overlay_0-1263:{mountpoint:/var/lib/containers/storage/overlay/9c2fe105b3efc623a3dedddacf3bf71571955e25460259cdf0e115f05d5c380e/merged major:0 minor:1263 fsType:overlay blockSize:0} overlay_0-1275:{mountpoint:/var/lib/containers/storage/overlay/01f09b16623f64b4b224372ef552651b55708fa1949e66800456c1a86d3a1829/merged major:0 minor:1275 fsType:overlay blockSize:0} overlay_0-1277:{mountpoint:/var/lib/containers/storage/overlay/2ede0e937b142198046cf5c6b42428636a7fe6a1a297ed991baf79901ff73dc3/merged major:0 minor:1277 fsType:overlay blockSize:0} overlay_0-1283:{mountpoint:/var/lib/containers/storage/overlay/e2220df4d111efb33fa07c93b0ac66e0f947281bddc0a67c2bc21bc5249c2e18/merged major:0 minor:1283 fsType:overlay blockSize:0} overlay_0-1288:{mountpoint:/var/lib/containers/storage/overlay/cbff1afe7f2bf92bebc0d66d802bf84d5a3d2f38c4e280fb2cb0006b0e938150/merged major:0 minor:1288 fsType:overlay blockSize:0} overlay_0-1294:{mountpoint:/var/lib/containers/storage/overlay/ee0550c4c7415c9f9d1678dbd00436b2fdb4897d1cc20715528a6240ff5d6632/merged major:0 minor:1294 fsType:overlay blockSize:0} overlay_0-1302:{mountpoint:/var/lib/containers/storage/overlay/f1997bfd3f41aa117773bca85036a282cb56834937da607050934ed4d2e19630/merged major:0 minor:1302 fsType:overlay blockSize:0} overlay_0-1304:{mountpoint:/var/lib/containers/storage/overlay/ba2fc7bc3d177f54ca3fe30a07ce2cb6cf58caf7b2b07459d10c1f66644d207e/merged major:0 minor:1304 fsType:overlay blockSize:0} overlay_0-1306:{mountpoint:/var/lib/containers/storage/overlay/e46ab57bd8b75cad0ec4e7914e8746757a3504bc391f0275c46f8a2029863046/merged major:0 minor:1306 fsType:overlay 
blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/1381e4c61fedb03d92430161a0a7167c1044a9d22d292b0121b30073b8755fae/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-1312:{mountpoint:/var/lib/containers/storage/overlay/ccac5769d20dcb1e67f10cc938d1cdb24ba23bef60bf65d4d26861f29a77dfc8/merged major:0 minor:1312 fsType:overlay blockSize:0} overlay_0-1314:{mountpoint:/var/lib/containers/storage/overlay/6e0c80d6ed7646d701c52ed404a07b2a684b29da08be88db4aef5cb39e4847d3/merged major:0 minor:1314 fsType:overlay blockSize:0} overlay_0-1332:{mountpoint:/var/lib/containers/storage/overlay/b4417ab898e8664942326924a1c414ff466267b8ebd8b50b213f096a8341d83d/merged major:0 minor:1332 fsType:overlay blockSize:0} overlay_0-1333:{mountpoint:/var/lib/containers/storage/overlay/509f203a8f7216bd1125d8846bf44386f4e9193ce43e916dcb6d10aa787d4acc/merged major:0 minor:1333 fsType:overlay blockSize:0} overlay_0-1338:{mountpoint:/var/lib/containers/storage/overlay/6c37bf1c383832717af885b1ad48f514028d92dc84b7eb4204fb5a56e3f12163/merged major:0 minor:1338 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/bb1ecdff1b9101d79d2e838364cf6eae4d0a1e24bb557cba3bbd9b753e3e783e/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-1345:{mountpoint:/var/lib/containers/storage/overlay/fd69f682a9c734769e6d86907bec66da382de0445e628b3b84a21fde4d1d42d4/merged major:0 minor:1345 fsType:overlay blockSize:0} overlay_0-1347:{mountpoint:/var/lib/containers/storage/overlay/5cc1d07b4d2d9a7024bdad5bddbfbe72dabf6362ce74374a8aa9c5cc91797385/merged major:0 minor:1347 fsType:overlay blockSize:0} overlay_0-1356:{mountpoint:/var/lib/containers/storage/overlay/53e04edfce16c957e0e5a9d5ada9c6b8c33267efe012c252bfdbf165947dd620/merged major:0 minor:1356 fsType:overlay blockSize:0} overlay_0-1358:{mountpoint:/var/lib/containers/storage/overlay/1cf161fe127177bbd559d68d64bee4fbf1c1223cb2b8d68241e9e4ad1251230d/merged major:0 minor:1358 fsType:overlay 
blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/cc39dc17da7af77e532a6b6efbc24f4649c66e1f5e6ecfce407e3d007075781f/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/89903e5041a65473506bcc325426954c09ded6f9fce7b50dfbda03b522b4b280/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/6a95603e4612482dff25e972dceec05bb60750bd3dd713ba25908cecfe42a54d/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/ee0126693d85c430e4e3d9c3f8c50278691c671dd12296989ba9a97855780654/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/09f4877e7b77ae7ecb4d9e2650faf903417fca3a84e29813d940db588c1447a6/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/786d68f72c2d7a65fbd5d9d12837a704f7cce488da2cd12a6f1c888a486c1a93/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/f9ffa199bcd215052aa3d1dee48a80404cb197b68cc7021e9bd9cd85ec514268/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/56f9a4a309663d310630790f6c94dab19b68402c177a43af52e8d07f1ad7ab1a/merged major:0 minor:161 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/38eabe8caeb186fc2c1e38f27c4acf956d27b017c6b7800186fe789e7e7f1f33/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/0d99eafa65819d80815e7955776c38a99fc8b9d97d9248521de413ff2e479e71/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/6292c8c4689dbfabae9661552d85c0cd56b88e756dfdc2d7070496641ff34799/merged major:0 minor:174 fsType:overlay blockSize:0} 
overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/2a3bc5306bf485bf2329471b431fbc353a6d2870dd91848c48b38815f64dc270/merged major:0 minor:176 fsType:overlay blockSize:0} overlay_0-177:{mountpoint:/var/lib/containers/storage/overlay/05da7dd6459e4dd420df15779dc67c1ae824647d56277082b8a4c7537c54106b/merged major:0 minor:177 fsType:overlay blockSize:0} overlay_0-181:{mountpoint:/var/lib/containers/storage/overlay/8ab3e76b623f5453296ad0da4bd6a062aeedefd84d024b752f38321544234658/merged major:0 minor:181 fsType:overlay blockSize:0} overlay_0-183:{mountpoint:/var/lib/containers/storage/overlay/1237765ea9c681e8fab60112f21574876ec2c1fa08fa0f593449ae1a1027c181/merged major:0 minor:183 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/3338cee47616986ce41204a5759d425fc0d1772a9a21b507c5d4939115c9eef4/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-188:{mountpoint:/var/lib/containers/storage/overlay/7eb87dc43ecc146c90770714b02cb3dbac743c6e56743e7e0b872de7c7f3f00e/merged major:0 minor:188 fsType:overlay blockSize:0} overlay_0-193:{mountpoint:/var/lib/containers/storage/overlay/7ee4f7b7b139adde1384b6833e7f4aa2c693ca563d5ac52669063d6f2ce59e83/merged major:0 minor:193 fsType:overlay blockSize:0} overlay_0-198:{mountpoint:/var/lib/containers/storage/overlay/91477d085c36d0c4bbef229068c7777aff211c08bfe61d2c2b7a6fd9669a7192/merged major:0 minor:198 fsType:overlay blockSize:0} overlay_0-203:{mountpoint:/var/lib/containers/storage/overlay/817d8ac3beeeadb316574614fc8b2afa018f6d234fce08745b4876dcbbb57798/merged major:0 minor:203 fsType:overlay blockSize:0} overlay_0-208:{mountpoint:/var/lib/containers/storage/overlay/726e467ea7ab9b8997bfc1188d0bdb837919bdefaa19e31ffd795215f9b1596a/merged major:0 minor:208 fsType:overlay blockSize:0} overlay_0-213:{mountpoint:/var/lib/containers/storage/overlay/14a4d5be05fe3fbfc44b89b6a3fa071f42ea5193c5b97e6b3ed9a506115bb761/merged major:0 minor:213 fsType:overlay blockSize:0} 
overlay_0-218:{mountpoint:/var/lib/containers/storage/overlay/e1a26234ac618944fe9431990a2a29016a7cf90a60ef133d804569e862c40a12/merged major:0 minor:218 fsType:overlay blockSize:0} overlay_0-219:{mountpoint:/var/lib/containers/storage/overlay/8f9850e2e8ee4c2be32d1a81a3786eab0a21f4a1ac2b5952d1c9ad99e16725e4/merged major:0 minor:219 fsType:overlay blockSize:0} overlay_0-223:{mountpoint:/var/lib/containers/storage/overlay/40fb1c0e8dbf216295f0191c7c19167e9f4c151f22dd18abcad2c767e86e7a04/merged major:0 minor:223 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/6c7d1642617967d4e20b9aba1ac8cd1c9421207b96f47ddfef6ccfb30bc05c9e/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/87fd02204b7dcf2f8075f4592b5924f06bd8b18135e61dd8decaac86c8aee221/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/61a5cfba5b8248c052bfb5fe4733377448c1fc4c5bff8a7fde98cf0c89fb35d4/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/1b39d70b5ae9d7811fb527c1bcd17b3875f1f34a7f02fb559d2762aa78410712/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/9568a1e322f499226b1e56b20f6c30350e71fde514aa31da549e6425a889e8d8/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/5315e2d5d089c10012db99e54658c7ff4cf33a95e7c7d7d5769c2c88a5a87f2c/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/ae0a736452489a97d17a33c3893807846da28f615c9321dd7d1ea3595d995a54/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/6807a0aba409cda3ddb6f14a5bf79732ebe887bd51814532a9485f3c03ed7d64/merged major:0 minor:315 fsType:overlay blockSize:0} 
overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/4f20f7826eba9f5e353e9b7faa267fcefb85dedb50760e02b8f2fa0e338a2bf8/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/61bf959b2f745b6aa99293d969a8712b2889d186dbfb007db189b9db2eb728be/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/7d3bf3d3d100d6a4efa220cffa9cd0bd48d464370f46c62817cb7a9c7d14bb35/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/9551628138a78cc083068ea26d624129b51478afacad39fa6532e3cb4b261af3/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/e6fa9dc1e9335833c1cc31069297354a0575b4f77c8683ecd29b750d51cca6d9/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/0ac71b89283e8ab5e6e41eb0d3c0fc34abe7a53f3efcadabe7f37d7c792114bb/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-329:{mountpoint:/var/lib/containers/storage/overlay/9d3051a5d15bae819cf3c8e385f5668468aa9f2559e365681bbb84c9ebff96c5/merged major:0 minor:329 fsType:overlay blockSize:0} overlay_0-334:{mountpoint:/var/lib/containers/storage/overlay/555d59115e60c62946caae22b57de97d0a77e592bf51c9e274da116d14c88652/merged major:0 minor:334 fsType:overlay blockSize:0} overlay_0-336:{mountpoint:/var/lib/containers/storage/overlay/5f1bd062bc54968fb879a94a1b66b709bca1f63046cd69e24f9a98ea248e4b7e/merged major:0 minor:336 fsType:overlay blockSize:0} overlay_0-338:{mountpoint:/var/lib/containers/storage/overlay/2b185743e18aa7c3105802ce6fef068f4b7ec80533138ae98fd81df8097f9b8f/merged major:0 minor:338 fsType:overlay blockSize:0} overlay_0-339:{mountpoint:/var/lib/containers/storage/overlay/2ca37455767f7edcf6ec2c88a03b324ad876670c4e5377a23e64982b23decbec/merged major:0 minor:339 fsType:overlay blockSize:0} 
overlay_0-340:{mountpoint:/var/lib/containers/storage/overlay/4e768e171e24a69b177140508c706ed40a94ebdba846b15a626cff6e7992429c/merged major:0 minor:340 fsType:overlay blockSize:0} overlay_0-348:{mountpoint:/var/lib/containers/storage/overlay/1192cd12d565ef24913738ec2f6a05fb0f3d03d3614eb6e10df610a557edbeac/merged major:0 minor:348 fsType:overlay blockSize:0} overlay_0-352:{mountpoint:/var/lib/containers/storage/overlay/aa28e8d511f7888b2f19cae7fb72e2c422f34e71f2464a500c0fc1db718887c0/merged major:0 minor:352 fsType:overlay blockSize:0} overlay_0-354:{mountpoint:/var/lib/containers/storage/overlay/efd0dcf9bf67d9fcf4954123077922b64ab84ff612bd178ea01ebd5873251e44/merged major:0 minor:354 fsType:overlay blockSize:0} overlay_0-356:{mountpoint:/var/lib/containers/storage/overlay/6cc7f51bf7b04a7859dc65d7487a5f2e6c75d4e8e4bb1ca7cbc4aa2f4227dcf1/merged major:0 minor:356 fsType:overlay blockSize:0} overlay_0-357:{mountpoint:/var/lib/containers/storage/overlay/7188c9a1bb8aae1a910033e8b1ba3b0c8ea7c6a1ac066e1af457cf051dd3f7cc/merged major:0 minor:357 fsType:overlay blockSize:0} overlay_0-372:{mountpoint:/var/lib/containers/storage/overlay/dde7529742630fa4e296f365b8c4fd3806cb38ba088fde9e8a51b05c2d0e31dc/merged major:0 minor:372 fsType:overlay blockSize:0} overlay_0-376:{mountpoint:/var/lib/containers/storage/overlay/97caa2a683f5d3932044b5f3ff6f41ac15f415301d10e95b4dbe22de00e15916/merged major:0 minor:376 fsType:overlay blockSize:0} overlay_0-378:{mountpoint:/var/lib/containers/storage/overlay/97fb010c4ee2892f38a943e63da078f20b50c11cc9ca90f4b3b710a97c97afd6/merged major:0 minor:378 fsType:overlay blockSize:0} overlay_0-380:{mountpoint:/var/lib/containers/storage/overlay/74f4ed4195941bff7baa14cdbe58df479260ec14bad7790709097e4bdff001dc/merged major:0 minor:380 fsType:overlay blockSize:0} overlay_0-382:{mountpoint:/var/lib/containers/storage/overlay/54ef55cbe61ecab8f1953a64706c21951918df4b214d839741f200c158d5235c/merged major:0 minor:382 fsType:overlay blockSize:0} 
overlay_0-384:{mountpoint:/var/lib/containers/storage/overlay/a420f8f289c0fe458fd75502b1d75cd204dd07e7fbd2cdd6742b9cee1aafd4ff/merged major:0 minor:384 fsType:overlay blockSize:0} overlay_0-386:{mountpoint:/var/lib/containers/storage/overlay/66302b6ef035badd046f0068c7da2108fd227bbcb1ebe68be05c7f510e935743/merged major:0 minor:386 fsType:overlay blockSize:0} overlay_0-388:{mountpoint:/var/lib/containers/storage/overlay/a4d91a03afae2442c250b86edfe0c0f54ea2a88b92e21e7d5a45aa463352d717/merged major:0 minor:388 fsType:overlay blockSize:0} overlay_0-390:{mountpoint:/var/lib/containers/storage/overlay/98e18d181b1e0a971cb7e9a2ac08f982f477b5701f660bd92f22efdfc933dad0/merged major:0 minor:390 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/563c7c1fe873426f842ce95a28935d799cf8ed463b4bc159050f9e2a57b38d83/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-398:{mountpoint:/var/lib/containers/storage/overlay/238bdc4b42e213f4dccd3ecb6c30a62dd07e3c0558c6b1018d391e45c2977e4c/merged major:0 minor:398 fsType:overlay blockSize:0} overlay_0-399:{mountpoint:/var/lib/containers/storage/overlay/c1327d00b7f57fee45c6b4c10b5a6868448c04082fca436b69b765a7dc51d900/merged major:0 minor:399 fsType:overlay blockSize:0} overlay_0-400:{mountpoint:/var/lib/containers/storage/overlay/4fd8cce5e503becac739c48bc83fccca90fb6d240765703545d120d996a9a4fe/merged major:0 minor:400 fsType:overlay blockSize:0} overlay_0-406:{mountpoint:/var/lib/containers/storage/overlay/185b3ac049b7de637242d4ac79fc32581cd9bb18fbbabb978265a172a52e3ca8/merged major:0 minor:406 fsType:overlay blockSize:0} overlay_0-407:{mountpoint:/var/lib/containers/storage/overlay/8a7478af61acf8459975ec2daa337564e8fa0b54bb339022b86925c0938b7af7/merged major:0 minor:407 fsType:overlay blockSize:0} overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/26dd1619dd34d197f27fd24e8d322eddbed6fb1c5d5544a62121a5b3e9508561/merged major:0 minor:41 fsType:overlay blockSize:0} 
overlay_0-413:{mountpoint:/var/lib/containers/storage/overlay/13a3ffe28fb9b6a69c2e6e47c9d31d06531773eb28331a86d3f3893af6bc9673/merged major:0 minor:413 fsType:overlay blockSize:0} overlay_0-418:{mountpoint:/var/lib/containers/storage/overlay/2a72a907207e7dfba76f4941152500b99b081ba39aa959a1ae5eb7bf76eb9ffa/merged major:0 minor:418 fsType:overlay blockSize:0} overlay_0-424:{mountpoint:/var/lib/containers/storage/overlay/f664a163da96f9655cb7f94d428618dea88399a0b8fe865b7792bd08f29e3ce6/merged major:0 minor:424 fsType:overlay blockSize:0} overlay_0-426:{mountpoint:/var/lib/containers/storage/overlay/74e0f368f87fc00c358865fbdf1d1f975d5718c8d231ef24bc3936ac5136eb22/merged major:0 minor:426 fsType:overlay blockSize:0} overlay_0-427:{mountpoint:/var/lib/containers/storage/overlay/d92f2fd9f707f61236c884f332ddecd74868734a5478ecf27eea87a2610ee007/merged major:0 minor:427 fsType:overlay blockSize:0} overlay_0-428:{mountpoint:/var/lib/containers/storage/overlay/8e4c608ed911fe7d6b48707fd8438ec93ccf3a3447b547c0c61e4bdd221ad697/merged major:0 minor:428 fsType:overlay blockSize:0} overlay_0-429:{mountpoint:/var/lib/containers/storage/overlay/e36ccfdbd62060f93033f79c41fdb9b1eec630d8f9e1a8b9f30d3a2b471c0263/merged major:0 minor:429 fsType:overlay blockSize:0} overlay_0-432:{mountpoint:/var/lib/containers/storage/overlay/5889d547ce4135f1a2a6e27b6c16506e84d9c8230bee55e9958304369feec6b0/merged major:0 minor:432 fsType:overlay blockSize:0} overlay_0-434:{mountpoint:/var/lib/containers/storage/overlay/47fec488e2d0450a0b0b4fc0b8504d397d1314931f949ea9d0c3cea1997887a2/merged major:0 minor:434 fsType:overlay blockSize:0} overlay_0-435:{mountpoint:/var/lib/containers/storage/overlay/246f45cf961d0e078dabb221572cd70ac06a073c4b7837c4e5bbb9699bb32bab/merged major:0 minor:435 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/b90d4ea027a76960f33e2a187303fe8b1f9c1120567075fe49234a08ecb53cc9/merged major:0 minor:44 fsType:overlay blockSize:0} 
overlay_0-440:{mountpoint:/var/lib/containers/storage/overlay/683195d21d10120818d8cc07ed0c6a6e293c1c2063020d840545fd66138111ba/merged major:0 minor:440 fsType:overlay blockSize:0} overlay_0-449:{mountpoint:/var/lib/containers/storage/overlay/aea3fc2d3e1734bf5c409cb14d2b3aa22fbe8c6e238e78bb72f865d82ad151ae/merged major:0 minor:449 fsType:overlay blockSize:0} overlay_0-451:{mountpoint:/var/lib/containers/storage/overlay/1dac621e41c2ef02ce4a6ea85e650e769b59949aeeafc9a6e6bfa20c8ddc90d6/merged major:0 minor:451 fsType:overlay blockSize:0} overlay_0-455:{mountpoint:/var/lib/containers/storage/overlay/6f90c690c5a7f122b33913541ecde154cc3dc4e69de31733883b34a22d1af6c7/merged major:0 minor:455 fsType:overlay blockSize:0} overlay_0-460:{mountpoint:/var/lib/containers/storage/overlay/07f6d14ba16735ba2d09ece3132ead50b64604e9483dc80faf3189d1c7a14606/merged major:0 minor:460 fsType:overlay blockSize:0} overlay_0-476:{mountpoint:/var/lib/containers/storage/overlay/c86ae49b7ea99fa370a8a87f4568f9648bda5e79f1fc9ca724876d34d0efff8d/merged major:0 minor:476 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/14ec8f20437085c3e571cf74d786e2bc695eca11b834b3aef669568e0b2492a5/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-504:{mountpoint:/var/lib/containers/storage/overlay/b1fc3c3e9e48a877a68e6594c2f83e20990500a46d33c60c431057d89612a836/merged major:0 minor:504 fsType:overlay blockSize:0} overlay_0-508:{mountpoint:/var/lib/containers/storage/overlay/f2716354ac9cdd4025b00bba289b2b600b8c56fcbc2d989f54316cbfbe9f328c/merged major:0 minor:508 fsType:overlay blockSize:0} overlay_0-525:{mountpoint:/var/lib/containers/storage/overlay/0f9809f1b4d90953f954ec245936569cb77846d717cf59fda8423b677ef45604/merged major:0 minor:525 fsType:overlay blockSize:0} overlay_0-526:{mountpoint:/var/lib/containers/storage/overlay/339100e3282da7561562d39f054966698acfb2caa94dca1189bd0e42a734f152/merged major:0 minor:526 fsType:overlay blockSize:0} 
overlay_0-527:{mountpoint:/var/lib/containers/storage/overlay/556f664410da3fdb0d1b72fe8a2415e10cfca590a4972c8c72ae56d27c03fda5/merged major:0 minor:527 fsType:overlay blockSize:0} overlay_0-529:{mountpoint:/var/lib/containers/storage/overlay/5afae9192875545574c85dbf0a915d2681de0955d2e3e3e8b06be0faf62cd376/merged major:0 minor:529 fsType:overlay blockSize:0} overlay_0-531:{mountpoint:/var/lib/containers/storage/overlay/5e14c1d99cb76932c1933c493b3be123f6dc55a8c4a99ae83795fcb64cc6e6eb/merged major:0 minor:531 fsType:overlay blockSize:0} overlay_0-533:{mountpoint:/var/lib/containers/storage/overlay/56a04b527ffed01c48c43a1920744b4c1e462128aa36a5b82b4555decc579bc0/merged major:0 minor:533 fsType:overlay blockSize:0} overlay_0-535:{mountpoint:/var/lib/containers/storage/overlay/56be532fccf9009946e27479540dbec1768496e4aaa3a58020816c7be4162c3a/merged major:0 minor:535 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/58756d2917cdafe77ebb64745e742b0d5a12985e722f41632b510fec5f6236bb/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-541:{mountpoint:/var/lib/containers/storage/overlay/3b85d274f22c452483d35dd6b11e8ef08d0f0864365aaafb0b6329c89fdb838e/merged major:0 minor:541 fsType:overlay blockSize:0} overlay_0-544:{mountpoint:/var/lib/containers/storage/overlay/bd38d6ae2104623ce909c227600ae977bff7ae6e5bc863004c2abda359bc3647/merged major:0 minor:544 fsType:overlay blockSize:0} overlay_0-545:{mountpoint:/var/lib/containers/storage/overlay/e2d2e4aed9b623174a5af8e2d4cb22d7e89075260b533e0b5a044f8a6f45a471/merged major:0 minor:545 fsType:overlay blockSize:0} overlay_0-547:{mountpoint:/var/lib/containers/storage/overlay/40f9c410103aefa9c6192117f9c769de439fbd709c3fe9d9f73bc14b1611e40c/merged major:0 minor:547 fsType:overlay blockSize:0} overlay_0-550:{mountpoint:/var/lib/containers/storage/overlay/f5343681998563cce31607b97216ff95c3341992b8e26dad72bce3320343c05d/merged major:0 
minor:550 fsType:overlay blockSize:0} overlay_0-552:{mountpoint:/var/lib/containers/storage/overlay/01ba8969a1d9e3b1ee91a5ce6706b5446ae281db384d5bb1f80345fe2ddc153c/merged major:0 minor:552 fsType:overlay blockSize:0} overlay_0-554:{mountpoint:/var/lib/containers/storage/overlay/011e271e3d39b8248b94bc9d369c7f7d3529a4d14bc53811d06dea216079ed7f/merged major:0 minor:554 fsType:overlay blockSize:0} overlay_0-556:{mountpoint:/var/lib/containers/storage/overlay/a4a2a2dd4fccafdbabdac470c46c6430bf9a9092f00e5fcd31d2ab3d59dbfa87/merged major:0 minor:556 fsType:overlay blockSize:0} overlay_0-558:{mountpoint:/var/lib/containers/storage/overlay/f3643f1e291de85f3d531e8d6806e752d79385c41d77b4d5059bddbb7f7a3303/merged major:0 minor:558 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/a1fccc3b6aa1b42bc670ec4eaaec7690bb1a389be0689f5708c3c550967fecb2/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-560:{mountpoint:/var/lib/containers/storage/overlay/7fdef9932c6c4506e6fceaed71222a4918d10dfbc5a10ed8f1ac0d8bf55bc6fd/merged major:0 minor:560 fsType:overlay blockSize:0} overlay_0-564:{mountpoint:/var/lib/containers/storage/overlay/cff4fb11ef5a13c776f52748297044d5f3d9d2cdd6c1077cc96b2a13d2f420fe/merged major:0 minor:564 fsType:overlay blockSize:0} overlay_0-573:{mountpoint:/var/lib/containers/storage/overlay/96106fd1949659942765ee2a0bc2803033307df5683f1265f62bc83447b3d8dd/merged major:0 minor:573 fsType:overlay blockSize:0} overlay_0-574:{mountpoint:/var/lib/containers/storage/overlay/ea801831876888dea458244959536bd6c7c100a1ab59caf55607f459cb57ebd7/merged major:0 minor:574 fsType:overlay blockSize:0} overlay_0-576:{mountpoint:/var/lib/containers/storage/overlay/151b28af37bfe6e72c554c2c86d13fee927b80fe04182943cfc28d3994cf6d47/merged major:0 minor:576 fsType:overlay blockSize:0} overlay_0-578:{mountpoint:/var/lib/containers/storage/overlay/555cf7486c37fbb699454b6fa936bf11c04fb45a29789141ce6226115616e65f/merged major:0 minor:578 
fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/6da869f285a0c5e6facf25fee45f4a194f660d950f1791bd3f66b055153a758d/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-605:{mountpoint:/var/lib/containers/storage/overlay/a567f7cd63e55476cf94ad2a53d4720f8e53e59484d409564649c9e4f2b2c8d3/merged major:0 minor:605 fsType:overlay blockSize:0} overlay_0-612:{mountpoint:/var/lib/containers/storage/overlay/f2afa1c175fdb2b1817088080c15b8f261d52c81e2df41772cf861bf9cb82e6c/merged major:0 minor:612 fsType:overlay blockSize:0} overlay_0-614:{mountpoint:/var/lib/containers/storage/overlay/acc246a5fec3e89452c2343fa4ce15867a409b6e192af1301d63efd4e53ad3be/merged major:0 minor:614 fsType:overlay blockSize:0} overlay_0-617:{mountpoint:/var/lib/containers/storage/overlay/50b4a9f2d9940389a2441a62f1b6bd9d066fc0d94c43c43837173fea6e90fe97/merged major:0 minor:617 fsType:overlay blockSize:0} overlay_0-619:{mountpoint:/var/lib/containers/storage/overlay/56d1ae059539cbf8f01727c751b29fe9cfc2474b397e073e677c7109785f6838/merged major:0 minor:619 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/459885ed86e0cc3440014902ca7bb0c4412df2ba44a9114ccf75670ffa5c6299/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-621:{mountpoint:/var/lib/containers/storage/overlay/f524cfbd7c9ee329b7fa4b1f31120090cb03c106a3728dc5949b2a67dca4a908/merged major:0 minor:621 fsType:overlay blockSize:0} overlay_0-625:{mountpoint:/var/lib/containers/storage/overlay/42818c0dd5f934d2a32006cfe8394d988f6283cfa42b0ed408f1cc5a4373c5ff/merged major:0 minor:625 fsType:overlay blockSize:0} overlay_0-630:{mountpoint:/var/lib/containers/storage/overlay/d5f44eff3c2f240a55baa6772e5787f86ab25ff31ef25203ddff10342240020e/merged major:0 minor:630 fsType:overlay blockSize:0} overlay_0-632:{mountpoint:/var/lib/containers/storage/overlay/f11655f4aa3f1441ea5f170e20deb4baa76f6b8e0e706b4e76c53ce162c9a6aa/merged major:0 minor:632 fsType:overlay 
blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/a85c358f019d1c7cdad58dcb48ebe1e6ebe11c2d3bb168ab00ad9f387e71e335/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-641:{mountpoint:/var/lib/containers/storage/overlay/9050a7f5a2aa3e2614eec07ea3c1f5dd503c175d6dc87edcd72f272ec176b502/merged major:0 minor:641 fsType:overlay blockSize:0} overlay_0-646:{mountpoint:/var/lib/containers/storage/overlay/0ea3540f8193da18ce6e7f0873513c4393b1bd54c625a6103a078bc7175dd728/merged major:0 minor:646 fsType:overlay blockSize:0} overlay_0-648:{mountpoint:/var/lib/containers/storage/overlay/92c932d5fdeb838ed2c5c3608b385ec098bf27cd3e0706fd1d88e0c1fb8365b6/merged major:0 minor:648 fsType:overlay blockSize:0} overlay_0-649:{mountpoint:/var/lib/containers/storage/overlay/00e5b98ebda882648e105f16e4c53321e9add15577dfb1e099ed4895f7e967f9/merged major:0 minor:649 fsType:overlay blockSize:0} overlay_0-652:{mountpoint:/var/lib/containers/storage/overlay/9ea272b1a6e359419fdf745d827a49655a0f00af118ca6a57abd9158864c5499/merged major:0 minor:652 fsType:overlay blockSize:0} overlay_0-655:{mountpoint:/var/lib/containers/storage/overlay/f2e64d815a130198e9e10a42ad054546f17751ce531eeca90079fe39d2221d7c/merged major:0 minor:655 fsType:overlay blockSize:0} overlay_0-656:{mountpoint:/var/lib/containers/storage/overlay/1a201187dde42cbce4616e60f246177c795cddb1456161a1a4f135b5b741e847/merged major:0 minor:656 fsType:overlay blockSize:0} overlay_0-658:{mountpoint:/var/lib/containers/storage/overlay/b7cbd5a1a7db0a994f9801c76da4a7273230c220cd8997c84f925a7215844dc8/merged major:0 minor:658 fsType:overlay blockSize:0} overlay_0-661:{mountpoint:/var/lib/containers/storage/overlay/6fed15fe86e215f84854651f2df83f900f6117ccc661a5a94b6700b951995fba/merged major:0 minor:661 fsType:overlay blockSize:0} overlay_0-662:{mountpoint:/var/lib/containers/storage/overlay/5d31ce05193eeee9d24db95845c5dfc14c377954fab42d3035a18a9bf9618936/merged major:0 minor:662 fsType:overlay blockSize:0} 
overlay_0-664:{mountpoint:/var/lib/containers/storage/overlay/97a3a06a2672b440580a95f560a500cf030af0dbba690d36216f13e22f977627/merged major:0 minor:664 fsType:overlay blockSize:0} overlay_0-674:{mountpoint:/var/lib/containers/storage/overlay/7ba20a512c6536407ceae2772a8208e48e0d737eb75fafbdab83e9a58497e046/merged major:0 minor:674 fsType:overlay blockSize:0} overlay_0-677:{mountpoint:/var/lib/containers/storage/overlay/8677506af72eb69b4b276967890600e9e1fc739eafb822cf3f099f30eea2fed4/merged major:0 minor:677 fsType:overlay blockSize:0} overlay_0-682:{mountpoint:/var/lib/containers/storage/overlay/5fbaca1e7fb6a7661de866fb13c0423105599d2b3c192bfe6e931ac9836d57e5/merged major:0 minor:682 fsType:overlay blockSize:0} overlay_0-685:{mountpoint:/var/lib/containers/storage/overlay/a1b0f51ef6e5575bb2b332b433b05eec8be553a8edf64d21a8ad3e9c614a3780/merged major:0 minor:685 fsType:overlay blockSize:0} overlay_0-687:{mountpoint:/var/lib/containers/storage/overlay/1e24295a8ba96b96b3780b809d0afedf0a495a336d831eb635439b4d48c90f62/merged major:0 minor:687 fsType:overlay blockSize:0} overlay_0-689:{mountpoint:/var/lib/containers/storage/overlay/0f3d485ff1ab435142fdd3b2b3c2c8ead7be24bb59e7b213cd57cf5062c84b8b/merged major:0 minor:689 fsType:overlay blockSize:0} overlay_0-691:{mountpoint:/var/lib/containers/storage/overlay/5d34946d63fcb5c20637ee1d1dcabbb40f0ac2c091e48ec6005c6f66e865ea6f/merged major:0 minor:691 fsType:overlay blockSize:0} overlay_0-693:{mountpoint:/var/lib/containers/storage/overlay/c5596b92a657c824049a98657578e7e6664fc996a9f9c71cf6c4f886f214a941/merged major:0 minor:693 fsType:overlay blockSize:0} overlay_0-694:{mountpoint:/var/lib/containers/storage/overlay/b85eae464b402cf4be59bc1a7956becb5575cf0e16112c6b11fb415afb9a1ef5/merged major:0 minor:694 fsType:overlay blockSize:0} overlay_0-700:{mountpoint:/var/lib/containers/storage/overlay/0bddde58649267ce3845c924d140b4c8a545fed3f64ba0df14d516306e4de4fb/merged major:0 minor:700 fsType:overlay blockSize:0} 
overlay_0-703:{mountpoint:/var/lib/containers/storage/overlay/c0dbfb0789e6d68aade91e703e7f027e395dfc7cc59722c41a2f64824d1dd03d/merged major:0 minor:703 fsType:overlay blockSize:0} overlay_0-707:{mountpoint:/var/lib/containers/storage/overlay/4917da980e0fb6dbb9e85caa4a81537171bd1f48d367b59d3721ee3e61d9b59c/merged major:0 minor:707 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/882bdaa6220c946c7a30f4a05aa6676344524552151de75f39b0c9a180a70b1d/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/68b4016f879f90ed2fbfede04808e8f861809d783938e939b8d40ee88319a94a/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-728:{mountpoint:/var/lib/containers/storage/overlay/5be6ad5b4f452c47d4301ad98e5951f7fcb9cd660ef59163935c7a08cb840579/merged major:0 minor:728 fsType:overlay blockSize:0} overlay_0-73:{mountpoint:/var/lib/containers/storage/overlay/89aba7c225f588c0734c9e2e8b5db87885a057c59a9a07ed5dbea8ac7d5e84ca/merged major:0 minor:73 fsType:overlay blockSize:0} overlay_0-731:{mountpoint:/var/lib/containers/storage/overlay/0a7cdfc03ad2a28ea9906c81039011fdb946918eeff4204207969680720e9e34/merged major:0 minor:731 fsType:overlay blockSize:0} overlay_0-732:{mountpoint:/var/lib/containers/storage/overlay/0e8375d7e206fdabe493e66573c2645f6fd6f16b45c6ea2e11fc0742e84f9b8b/merged major:0 minor:732 fsType:overlay blockSize:0} overlay_0-734:{mountpoint:/var/lib/containers/storage/overlay/3c390157644f27e37fe3c1baf585186f1d93f405ba052a8c1294bb9aa28ddcdc/merged major:0 minor:734 fsType:overlay blockSize:0} overlay_0-737:{mountpoint:/var/lib/containers/storage/overlay/5a646186064c25499aa2c914e03532fd15b73693f21dd0521ab3d60d025b8eb3/merged major:0 minor:737 fsType:overlay blockSize:0} overlay_0-745:{mountpoint:/var/lib/containers/storage/overlay/950dcb66779ef172954b41e752eeaf14cd628488e5a53cd89c05c3e890eb6bd4/merged major:0 minor:745 fsType:overlay blockSize:0} 
overlay_0-750:{mountpoint:/var/lib/containers/storage/overlay/61be19bdcdda3d2daa7ed299d9775379d5b78581862b138cf6f143a972d5a5f2/merged major:0 minor:750 fsType:overlay blockSize:0} overlay_0-773:{mountpoint:/var/lib/containers/storage/overlay/18a94124b681b3ae3d95c8076567c6c342b933bc39f492a0b7445270f60e41b1/merged major:0 minor:773 fsType:overlay blockSize:0} overlay_0-776:{mountpoint:/var/lib/containers/storage/overlay/397139feef189fdc9eb5d595f7d3d02e5d2bd6860c0f3f406cade9faca25b090/merged major:0 minor:776 fsType:overlay blockSize:0} overlay_0-778:{mountpoint:/var/lib/containers/storage/overlay/48bc6526b1c1e3f95604c488f656b790fdb1945d485ed0c89324b4433b7b115c/merged major:0 minor:778 fsType:overlay blockSize:0} overlay_0-789:{mountpoint:/var/lib/containers/storage/overlay/2e4d4e6e99836b9df1f44665299e0b892572ffb6a837399a87ad52a32b314249/merged major:0 minor:789 fsType:overlay blockSize:0} overlay_0-794:{mountpoint:/var/lib/containers/storage/overlay/0c8b4fa5a12c027d7a8b2f3e517ecbc1c36b104c7d2884db20f8d54c60e4e036/merged major:0 minor:794 fsType:overlay blockSize:0} overlay_0-795:{mountpoint:/var/lib/containers/storage/overlay/0ec283688c3c2dfa4b1c41f62e3347448225f4e5757bc68e5f821e21bf80f6cc/merged major:0 minor:795 fsType:overlay blockSize:0} overlay_0-798:{mountpoint:/var/lib/containers/storage/overlay/e5c5d80f19cb2eb1fdd82cee9c901d9b882fe2333d21b8e4346922e7a5fe9f7a/merged major:0 minor:798 fsType:overlay blockSize:0} overlay_0-801:{mountpoint:/var/lib/containers/storage/overlay/cf019e5dd9c5314b387328a0e79d5a0b9acdce021f4788cab56ed93067855ff7/merged major:0 minor:801 fsType:overlay blockSize:0} overlay_0-803:{mountpoint:/var/lib/containers/storage/overlay/f5511871d837b2e655a201b2a5a39bde07bd52177c016eafd6fd44587b65b8e9/merged major:0 minor:803 fsType:overlay blockSize:0} overlay_0-807:{mountpoint:/var/lib/containers/storage/overlay/bc04fa4a1991c1a2128ee2825520016783f7d21a313261c660672316381027a5/merged major:0 minor:807 fsType:overlay blockSize:0} 
overlay_0-808:{mountpoint:/var/lib/containers/storage/overlay/f43607d8020dbe2981de5288d8979defb4b5fd8695d47efbc27e706bda2c69da/merged major:0 minor:808 fsType:overlay blockSize:0} overlay_0-812:{mountpoint:/var/lib/containers/storage/overlay/1e9f2cf3e9f44a80a59406e5e4f78442649e61201fb0a818e85b867fe26ee84c/merged major:0 minor:812 fsType:overlay blockSize:0} overlay_0-819:{mountpoint:/var/lib/containers/storage/overlay/7eb88ea0c4b68bda912abf7ba0ea375879842fe0ec08ede12d4f1dcbb53eeb9d/merged major:0 minor:819 fsType:overlay blockSize:0} overlay_0-828:{mountpoint:/var/lib/containers/storage/overlay/fe69a6722ec43fdb802589c98d3f671271c4a00a00a716e709c8e669d3b75b6d/merged major:0 minor:828 fsType:overlay blockSize:0} overlay_0-834:{mountpoint:/var/lib/containers/storage/overlay/e25bcc0bb76cbc2f5e3870b98f9e7be5b913b82119c73dc04dd94986669e4cc4/merged major:0 minor:834 fsType:overlay blockSize:0} overlay_0-836:{mountpoint:/var/lib/containers/storage/overlay/1445346e09972928dad97a1897c02b4bde9447bb7e4691453376bf713763a05e/merged major:0 minor:836 fsType:overlay blockSize:0} overlay_0-838:{mountpoint:/var/lib/containers/storage/overlay/6bc8663940b7a25d9449887e68fd48c7b6399df27e401e450edd92a9c8561de2/merged major:0 minor:838 fsType:overlay blockSize:0} overlay_0-841:{mountpoint:/var/lib/containers/storage/overlay/0af47dfb39d38cc9ff93cd42b4261143ad24b5f2e273ae2673a51b71f011817b/merged major:0 minor:841 fsType:overlay blockSize:0} overlay_0-847:{mountpoint:/var/lib/containers/storage/overlay/8ed1b1f9260e5dca9fa5b98347ebda32bc66396dfe1dd5cc3e5736f70fab9c9f/merged major:0 minor:847 fsType:overlay blockSize:0} overlay_0-854:{mountpoint:/var/lib/containers/storage/overlay/a4627bb6a5971acb23943975d19ea1a16c23419ba69476d85acc9933ea3424e5/merged major:0 minor:854 fsType:overlay blockSize:0} overlay_0-872:{mountpoint:/var/lib/containers/storage/overlay/eaf3e1649aed194a88e159c1667b7a378582606efc519e3629bb5cc1fcc485d9/merged major:0 minor:872 fsType:overlay blockSize:0} 
overlay_0-886:{mountpoint:/var/lib/containers/storage/overlay/090f42ca7f206edcee58a08df79a276c6a4f11dd1a5ae4718970b55945aae8b8/merged major:0 minor:886 fsType:overlay blockSize:0} overlay_0-888:{mountpoint:/var/lib/containers/storage/overlay/b090609bc6b12d4a9117d0a064a661106b0e7b52b1183b7c174a9088507815c5/merged major:0 minor:888 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/a34f7ebb9a42eced72e637405d923f26fdc3056a2240a07458dc0b875056e2f9/merged major:0 minor:89 fsType:overlay blockSize:0} overlay_0-890:{mountpoint:/var/lib/containers/storage/overlay/fb084c2d0a6092ae21ecd3d9fc18ebcaffff97f7523471611f367788f738bb2a/merged major:0 minor:890 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/d7f5594a4a9eece53996021f582fe74a94ae7119fa85821e3164f57a4949e8f7/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-903:{mountpoint:/var/lib/containers/storage/overlay/61f2a8263df34c3218eaf73536c909add7e7c592b9ab3f303150631991035a4d/merged major:0 minor:903 fsType:overlay blockSize:0} overlay_0-906:{mountpoint:/var/lib/containers/storage/overlay/d1c9ea5b0d52844774a1df03a6ed5f7c61841eb10b68c69548b468bc60486ba1/merged major:0 minor:906 fsType:overlay blockSize:0} overlay_0-920:{mountpoint:/var/lib/containers/storage/overlay/7f33dc7835c927c1351eeadf3d5f9af5cd8d18e5a1f6b015ffc561a58592a5ff/merged major:0 minor:920 fsType:overlay blockSize:0} overlay_0-934:{mountpoint:/var/lib/containers/storage/overlay/47622dbc10c44953b572200fce0a71487ad92740b14cb757655c82b84f6622ff/merged major:0 minor:934 fsType:overlay blockSize:0} overlay_0-936:{mountpoint:/var/lib/containers/storage/overlay/9f2981a82ea4ce48423063e7330e5b556c561555712e3d730dc50b32358021f2/merged major:0 minor:936 fsType:overlay blockSize:0} overlay_0-938:{mountpoint:/var/lib/containers/storage/overlay/131602fdc705ad3656a2093625e1e00ca1c02d6e873024811731d06986e59667/merged major:0 minor:938 fsType:overlay blockSize:0} 
overlay_0-940:{mountpoint:/var/lib/containers/storage/overlay/fbb9e86d2c9775a65890c02c3ca7c66f8b1ccd4719d0df26bb410410a9dbee19/merged major:0 minor:940 fsType:overlay blockSize:0} overlay_0-942:{mountpoint:/var/lib/containers/storage/overlay/474870da5879ee1e282fa3407a885007491a2b98a08d31ede091b59e0bf309a3/merged major:0 minor:942 fsType:overlay blockSize:0} overlay_0-944:{mountpoint:/var/lib/containers/storage/overlay/1dfa57ca269f6c5bf5f928470a9dba2457bbd065c60d4b8a23abf40cd7f4591e/merged major:0 minor:944 fsType:overlay blockSize:0} overlay_0-96:{mountpoint:/var/lib/containers/storage/overlay/afd61dd3bcbd95682d55d2d2e645f7dd6ca071cc7b0cd4391fc4b3b06de39f3d/merged major:0 minor:96 fsType:overlay blockSize:0} overlay_0-960:{mountpoint:/var/lib/containers/storage/overlay/1cbf05f78bd729db4491080dd4f385a458eeb3d8a9b7ae6c8448102b79a8e2de/merged major:0 minor:960 fsType:overlay blockSize:0} overlay_0-971:{mountpoint:/var/lib/containers/storage/overlay/1c5182f4d59a554449e155e73f9ca898dfc92ab68161896844929c5a6be5de53/merged major:0 minor:971 fsType:overlay blockSize:0} overlay_0-973:{mountpoint:/var/lib/containers/storage/overlay/0d4611b5db098e5665cf651eb1f5be9c25e3cd8c4499dd3f031431be74876bfe/merged major:0 minor:973 fsType:overlay blockSize:0} overlay_0-976:{mountpoint:/var/lib/containers/storage/overlay/63e4add2f7213af306fa5ac2b67e65587475369b7744341aa100c44a5d482a66/merged major:0 minor:976 fsType:overlay blockSize:0} overlay_0-978:{mountpoint:/var/lib/containers/storage/overlay/52a85a71f8381d25cb93ffcda23675c7e121de75a85f2e28c8eb6654542d2cd1/merged major:0 minor:978 fsType:overlay blockSize:0} overlay_0-98:{mountpoint:/var/lib/containers/storage/overlay/b714c57d74f7c8a51aad01e75850a64415abd20d78003e40f882732e9a2c0073/merged major:0 minor:98 fsType:overlay blockSize:0} overlay_0-986:{mountpoint:/var/lib/containers/storage/overlay/8fedd4130226e2ce8bb03254f9839b8f8fe1382a9de8c0e85265f860f6ef37f1/merged major:0 minor:986 fsType:overlay blockSize:0} 
overlay_0-988:{mountpoint:/var/lib/containers/storage/overlay/50885fce2fc2285544d57628c53e8f818d5574675a6f8b05f8a5ccbadba72fdc/merged major:0 minor:988 fsType:overlay blockSize:0} overlay_0-99:{mountpoint:/var/lib/containers/storage/overlay/40d2ceacb2e070395fe21ff7870b169410b17417bd9898a455a4b14ccc594e77/merged major:0 minor:99 fsType:overlay blockSize:0} overlay_0-992:{mountpoint:/var/lib/containers/storage/overlay/b025d7065564ef4eba1f36c9ab9c248aeecb56d6c1475666a076a569402056fd/merged major:0 minor:992 fsType:overlay blockSize:0} overlay_0-994:{mountpoint:/var/lib/containers/storage/overlay/1a6f2d4ca1e3c0d39eaf90ad37acd5f1d614fae13eb5101af75d96d9e90200df/merged major:0 minor:994 fsType:overlay blockSize:0} overlay_0-996:{mountpoint:/var/lib/containers/storage/overlay/6cbf8e5a01f67d1396e5ead157fab1c8add663adc12cc536afd7869d56b209bf/merged major:0 minor:996 fsType:overlay blockSize:0} overlay_0-998:{mountpoint:/var/lib/containers/storage/overlay/260dac9f9dc74221cf026b4e2cf9ec7e027178f3bb25d9b4bda26c09aeb4260d/merged major:0 minor:998 fsType:overlay blockSize:0}] Feb 17 15:15:38.289327 master-0 kubenswrapper[26425]: I0217 15:15:38.285950 26425 manager.go:217] Machine: {Timestamp:2026-02-17 15:15:38.282636279 +0000 UTC m=+0.174360177 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ff628177d0ed41fb9732e0b0efb95e0a SystemUUID:ff628177-d0ed-41fb-9732-e0b0efb95e0a BootID:1c90f5ae-c817-4d5a-b4dd-067c150502f0 Filesystems:[{Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1200 DeviceMajor:0 DeviceMinor:1200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-550 DeviceMajor:0 DeviceMinor:550 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-632 DeviceMajor:0 DeviceMinor:632 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bf4ca08876e89c113fcc009804049d8ec19b6a489b50574b76595b73486b7936/userdata/shm DeviceMajor:0 DeviceMinor:779 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/31de4b8284b14c5b1bbb2ee4e5ce05c9d7231167ee625f5a71f3b94980671845/userdata/shm DeviceMajor:0 DeviceMinor:49 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-942 DeviceMajor:0 DeviceMinor:942 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7c393109-8c98-4a73-be1a-608038e5d094/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:274 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-533 DeviceMajor:0 DeviceMinor:533 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-655 DeviceMajor:0 DeviceMinor:655 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-435 DeviceMajor:0 DeviceMinor:435 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/af61bda0-c7b4-489d-a671-eaa5299942fe/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-773 DeviceMajor:0 DeviceMinor:773 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1333 DeviceMajor:0 DeviceMinor:1333 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/50c51fe2-32aa-430f-8da0-7cf3b9519131/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:590 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/4b2b7830-6ee0-4d87-a57b-dc668de4b39a/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:730 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:471 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:491 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/50c51fe2-32aa-430f-8da0-7cf3b9519131/volumes/kubernetes.io~projected/kube-api-access-8g48f DeviceMajor:0 DeviceMinor:584 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-334 DeviceMajor:0 DeviceMinor:334 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ad81b5bd-2f97-4e7e-a12b-746998fa59f2/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:923 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1085 DeviceMajor:0 DeviceMinor:1085 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/31e31afc-79d5-46f4-9835-0fd11da9465f/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:138 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-399 DeviceMajor:0 DeviceMinor:399 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-798 
DeviceMajor:0 DeviceMinor:798 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-617 DeviceMajor:0 DeviceMinor:617 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-124 DeviceMajor:0 DeviceMinor:124 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-890 DeviceMajor:0 DeviceMinor:890 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-992 DeviceMajor:0 DeviceMinor:992 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-434 DeviceMajor:0 DeviceMinor:434 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-529 DeviceMajor:0 DeviceMinor:529 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a2d6e329-7ad8-4fc2-accc-66827f11743d/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1036 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1016 DeviceMajor:0 DeviceMinor:1016 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1263 DeviceMajor:0 DeviceMinor:1263 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/026610117c01997654c9e952b5a30927858c6efbfd458d75332f24ab296e1898/userdata/shm DeviceMajor:0 DeviceMinor:1235 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1253 DeviceMajor:0 DeviceMinor:1253 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699/volumes/kubernetes.io~projected/kube-api-access-6t2vg DeviceMajor:0 DeviceMinor:254 Capacity:49335554048 Type:vfs Inodes:6166278 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/af54fa9c62b28e67f68bc78aa9667df2cc9eef72a60d8febb3ead750686eb226/userdata/shm DeviceMajor:0 DeviceMinor:283 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1233 DeviceMajor:0 DeviceMinor:1233 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fc76384d-b288-4d30-bc77-f696b62a5f30/volumes/kubernetes.io~projected/kube-api-access-lw6dc DeviceMajor:0 DeviceMinor:277 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-801 DeviceMajor:0 DeviceMinor:801 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4da475428a7f62dfe7d403b74dec1f34a8023a64243ff1dae7d9b66e78408144/userdata/shm DeviceMajor:0 DeviceMinor:113 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-737 DeviceMajor:0 DeviceMinor:737 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1161 DeviceMajor:0 DeviceMinor:1161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4fd2c79d-1e10-4f09-8a33-c66598abc99a/volumes/kubernetes.io~projected/kube-api-access-mgwfb DeviceMajor:0 DeviceMinor:111 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-208 DeviceMajor:0 DeviceMinor:208 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-789 DeviceMajor:0 DeviceMinor:789 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3db03cef-d297-4bf7-8e52-dd0b18882d07/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:472 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1028 DeviceMajor:0 DeviceMinor:1028 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/b7039f4f79e0da973650e82a180456282f520c1801cf5f3f024cba6892c24045/userdata/shm DeviceMajor:0 DeviceMinor:290 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-390 DeviceMajor:0 DeviceMinor:390 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-807 DeviceMajor:0 DeviceMinor:807 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-203 DeviceMajor:0 DeviceMinor:203 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/553d4535-9985-47e2-83ee-8fcfb6035e7b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1214 DeviceMajor:0 DeviceMinor:1214 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:167 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-558 DeviceMajor:0 DeviceMinor:558 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-996 DeviceMajor:0 DeviceMinor:996 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~projected/kube-api-access-d8wxf DeviceMajor:0 DeviceMinor:256 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e6d0ea7a-6784-4c13-ad65-6c947dbcf136/volumes/kubernetes.io~projected/kube-api-access-spcf4 DeviceMajor:0 DeviceMinor:804 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-728 DeviceMajor:0 DeviceMinor:728 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-707 DeviceMajor:0 DeviceMinor:707 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-778 DeviceMajor:0 DeviceMinor:778 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5922fb8c007ad599e40a5354516760730a0cba79810d4b9259cefea52493ddb5/userdata/shm DeviceMajor:0 DeviceMinor:1349 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 DeviceMinor:161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/68954d1e-2147-4465-9817-a3c04cbc19b0/volumes/kubernetes.io~projected/kube-api-access-4lwz4 DeviceMajor:0 DeviceMinor:524 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-73 DeviceMajor:0 DeviceMinor:73 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1338 DeviceMajor:0 DeviceMinor:1338 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/124ba199-b79a-4e5c-8512-cc0ae50f73c8/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:593 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-614 DeviceMajor:0 DeviceMinor:614 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-834 DeviceMajor:0 DeviceMinor:834 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-960 DeviceMajor:0 DeviceMinor:960 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/564e010b4acb371ea5e896019bc8692ecf42f40acab59fc53fd175dccbfd8d9f/userdata/shm DeviceMajor:0 DeviceMinor:966 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1172 DeviceMajor:0 DeviceMinor:1172 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/aebd0546beb5f26027662152b9f3fbf064714cf96a6113f61f98182131ca4a45/userdata/shm DeviceMajor:0 DeviceMinor:1208 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/70e43034-56d0-4fb2-8886-deb00b625686/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:1350 Capacity:200003584 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-96 DeviceMajor:0 DeviceMinor:96 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1698c2cc5bd5ca4b021102d13c99be9074c3ec259c76c5314910f3a09569a96d/userdata/shm DeviceMajor:0 DeviceMinor:901 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-903 DeviceMajor:0 DeviceMinor:903 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-646 DeviceMajor:0 DeviceMinor:646 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/61d90bf3-02df-48c8-b2ec-09a1653b0800/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6968fe4893506f2c7eff240b0f99304a06f7947186a1a85995eef13747cf455c/userdata/shm DeviceMajor:0 DeviceMinor:495 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-376 DeviceMajor:0 DeviceMinor:376 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-348 DeviceMajor:0 DeviceMinor:348 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cff1bcb58e476c7626406f50da253d7834cc1bd8b48bce0f6a4957d02e2b8cc9/userdata/shm DeviceMajor:0 DeviceMinor:69 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b654a908d6c1613bc2c0e365ea3089a784b0763c8a27f9b68976fba5622c284d/userdata/shm 
DeviceMajor:0 DeviceMinor:598 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6b7d1adb-b23b-4702-be7d-27e818e8fd63/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:1120 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2b167b7b-2280-4c82-ac78-71c57aebe503/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/50580897aab729847bb16b1be89c08ccaf45ebad432b32e9d2c48074ace08db5/userdata/shm DeviceMajor:0 DeviceMinor:771 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-386 DeviceMajor:0 DeviceMinor:386 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c15c55254b60eef4e6f082f6ebb85ff7cc6e3f7a7f4e7b7ce280e5a616be4326/userdata/shm DeviceMajor:0 DeviceMinor:724 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e259b5a1-837b-4cde-85f7-cd5781af08bd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/260124ead6b34d5e3c90fbb769ec2cf0de3926cb1ef0da2632429f164c63d3f5/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-527 DeviceMajor:0 DeviceMinor:527 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/14723cb7-2d96-42b7-b559-70386c4c841c/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:945 Capacity:49335554048 
Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1148 DeviceMajor:0 DeviceMinor:1148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b52356412bf9fd67c8890a1f481f22c4b980d0a142cbe7f6af8b97d5f5816dbd/userdata/shm DeviceMajor:0 DeviceMinor:295 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-641 DeviceMajor:0 DeviceMinor:641 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8385a176-0e12-47ef-862e-8331e6734b9c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:924 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-625 DeviceMajor:0 DeviceMinor:625 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-934 DeviceMajor:0 DeviceMinor:934 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1306 DeviceMajor:0 DeviceMinor:1306 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b0f95c87-6a4a-44f2-b6d4-18f167ea430f/volumes/kubernetes.io~projected/kube-api-access-gswxb DeviceMajor:0 DeviceMinor:422 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-812 DeviceMajor:0 DeviceMinor:812 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1332 DeviceMajor:0 DeviceMinor:1332 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cdbde712-c8dd-4011-adcb-af895abce94c/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1096 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-847 DeviceMajor:0 
DeviceMinor:847 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da06cfcb-7c78-4022-96b1-d858853f5adc/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:922 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ba1306f7-029b-4d43-ba3c-5738da9148d6/volumes/kubernetes.io~projected/kube-api-access-7pn82 DeviceMajor:0 DeviceMinor:1021 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c6d23570-21d6-4b08-83fc-8b0827c25313/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:489 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-630 DeviceMajor:0 DeviceMinor:630 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1026 DeviceMajor:0 DeviceMinor:1026 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-994 DeviceMajor:0 DeviceMinor:994 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-661 DeviceMajor:0 DeviceMinor:661 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1030 DeviceMajor:0 DeviceMinor:1030 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/76d3da23-3347-4a5c-b328-d92671897ecc/volumes/kubernetes.io~projected/kube-api-access-jhm88 DeviceMajor:0 DeviceMinor:1087 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0760e00b932363042782ba956e380d806e3d87e24d2f82f4acd8b411bacdc365/userdata/shm DeviceMajor:0 DeviceMinor:1125 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1153 DeviceMajor:0 DeviceMinor:1153 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/volumes/kubernetes.io~projected/kube-api-access-cpq86 DeviceMajor:0 DeviceMinor:166 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-413 DeviceMajor:0 DeviceMinor:413 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-99 DeviceMajor:0 DeviceMinor:99 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1277 DeviceMajor:0 DeviceMinor:1277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1312 DeviceMajor:0 DeviceMinor:1312 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d481a79-f565-4c7f-84cc-207fc3117c23/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:442 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-691 DeviceMajor:0 DeviceMinor:691 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-731 DeviceMajor:0 DeviceMinor:731 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-988 DeviceMajor:0 DeviceMinor:988 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fb153362-0abb-4aad-8975-532f6e72d032/volumes/kubernetes.io~projected/kube-api-access-7bzqs DeviceMajor:0 DeviceMinor:128 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-198 DeviceMajor:0 DeviceMinor:198 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ac3405a44e64442f5f84de1f2fe4affb9bf6727f46c3097b260717adce5a4719/userdata/shm DeviceMajor:0 DeviceMinor:345 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fce9579e-7383-421e-95dd-8f8b786817f9/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:487 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-998 
DeviceMajor:0 DeviceMinor:998 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-123 DeviceMajor:0 DeviceMinor:123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/52b28595-f0fc-49e2-9c95-43e5f1eb003f/volumes/kubernetes.io~projected/kube-api-access-klfm5 DeviceMajor:0 DeviceMinor:394 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c435347a-ac01-46af-8192-9ef2d632bdfb/volumes/kubernetes.io~projected/kube-api-access-j5w6f DeviceMajor:0 DeviceMinor:1229 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:253 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-700 DeviceMajor:0 DeviceMinor:700 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-888 DeviceMajor:0 DeviceMinor:888 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-662 DeviceMajor:0 DeviceMinor:662 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040/volumes/kubernetes.io~projected/kube-api-access-4rcj2 DeviceMajor:0 DeviceMinor:1228 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-552 DeviceMajor:0 DeviceMinor:552 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-219 DeviceMajor:0 DeviceMinor:219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/65d9f008-7777-48fe-85fe-9d54a7bbcea9/volumes/kubernetes.io~projected/kube-api-access-9g7zh DeviceMajor:0 
DeviceMinor:259 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-693 DeviceMajor:0 DeviceMinor:693 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b0f95c87-6a4a-44f2-b6d4-18f167ea430f/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:421 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/298673e77b46ac4f7d905ff32814664148ad0db661cddcaaee10cf189d3684c5/userdata/shm DeviceMajor:0 DeviceMinor:499 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1151 DeviceMajor:0 DeviceMinor:1151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-338 DeviceMajor:0 DeviceMinor:338 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c734c89-515e-4ff0-82d1-831ddaf0b99e/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:248 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e259b5a1-837b-4cde-85f7-cd5781af08bd/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-339 DeviceMajor:0 DeviceMinor:339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-455 DeviceMajor:0 DeviceMinor:455 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1063 DeviceMajor:0 DeviceMinor:1063 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c73742e20a24cd489609b6484bb7dd86a6b3725d2919288b5ca15357b170f83e/userdata/shm DeviceMajor:0 DeviceMinor:1270 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/727f20b6-19c7-45eb-a803-6898ecaeffd0/volumes/kubernetes.io~projected/kube-api-access-bpwhf DeviceMajor:0 DeviceMinor:331 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-535 DeviceMajor:0 DeviceMinor:535 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1288 DeviceMajor:0 DeviceMinor:1288 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-734 DeviceMajor:0 DeviceMinor:734 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a3b6a099-f52a-428a-af09-d1842ce66891/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:1335 Capacity:200003584 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-649 DeviceMajor:0 DeviceMinor:649 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-526 DeviceMajor:0 DeviceMinor:526 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-677 DeviceMajor:0 DeviceMinor:677 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da06cfcb-7c78-4022-96b1-d858853f5adc/volumes/kubernetes.io~projected/kube-api-access-xpsd7 DeviceMajor:0 DeviceMinor:927 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-973 DeviceMajor:0 DeviceMinor:973 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1057 DeviceMajor:0 
DeviceMinor:1057 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-357 DeviceMajor:0 DeviceMinor:357 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/68954d1e-2147-4465-9817-a3c04cbc19b0/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:523 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c8646e5c-c2ce-48e6-b757-58044769f479/volumes/kubernetes.io~projected/kube-api-access-t9wh2 DeviceMajor:0 DeviceMinor:919 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1347 DeviceMajor:0 DeviceMinor:1347 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1c9e969e18b1411cff6ba15e9601c6a1a570693b9fa41b729154f36c3d4cfc86/userdata/shm DeviceMajor:0 DeviceMinor:97 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-803 DeviceMajor:0 DeviceMinor:803 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c33efa80-fbeb-438a-86e3-d22d7c12d3e9/volumes/kubernetes.io~projected/kube-api-access-zr2dv DeviceMajor:0 DeviceMinor:46 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/75486ba2-6fde-456f-8846-2af67e58d585/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1098 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1d481a79-f565-4c7f-84cc-207fc3117c23/volumes/kubernetes.io~projected/kube-api-access-d2tcz DeviceMajor:0 DeviceMinor:494 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-388 DeviceMajor:0 DeviceMinor:388 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-978 DeviceMajor:0 DeviceMinor:978 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2102e834-2b36-49de-a99e-c2dbe64d722f/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:984 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a576c816a4856d1ffb304e4f810329e8d6608ef0502c0b4373fab4f3b3f5101a/userdata/shm DeviceMajor:0 DeviceMinor:1203 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-213 DeviceMajor:0 DeviceMinor:213 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-432 DeviceMajor:0 DeviceMinor:432 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c435347a-ac01-46af-8192-9ef2d632bdfb/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1101 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~secret/federate-client-tls DeviceMajor:0 DeviceMinor:1274 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/071566ae-a9ae-4aa9-9dc3-38602363be72/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:484 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-652 DeviceMajor:0 DeviceMinor:652 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b58e9d93-7683-440d-a603-9543e5455490/volumes/kubernetes.io~projected/kube-api-access-l2d4n DeviceMajor:0 DeviceMinor:952 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/086d9bb4b9a7ac8b6af3cbff40a452b0f16d3de1089172ce89af2a258294dacf/userdata/shm DeviceMajor:0 DeviceMinor:539 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/16817c879758d5dca93902f6417f76df9adc387ff018e7fa4b42bb730dfe7417/userdata/shm DeviceMajor:0 DeviceMinor:824 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-750 DeviceMajor:0 DeviceMinor:750 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-841 DeviceMajor:0 DeviceMinor:841 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/784b804f-6bcf-4cbd-a19e-9b1fa244354e/volumes/kubernetes.io~projected/kube-api-access-8cx29 DeviceMajor:0 DeviceMinor:1074 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fce9579e-7383-421e-95dd-8f8b786817f9/volumes/kubernetes.io~projected/kube-api-access-7brbd DeviceMajor:0 DeviceMinor:135 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/68f6c5cb6453d46aa30d342c53404fb01aa054a3d48f9b074af6e17af00f9a94/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/80a35c92c437f32b29f410d19a1ce0763e9f007a6c4df0b00fdf0704012a2c09/userdata/shm DeviceMajor:0 DeviceMinor:1300 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:140 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/6af13ec50eaaf18a25827e26c3ea1670c47ef4c0aea537a274e7191217763a74/userdata/shm DeviceMajor:0 DeviceMinor:301 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-621 DeviceMajor:0 DeviceMinor:621 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-406 DeviceMajor:0 DeviceMinor:406 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-356 DeviceMajor:0 DeviceMinor:356 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cdbde712-c8dd-4011-adcb-af895abce94c/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1237 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-115 DeviceMajor:0 DeviceMinor:115 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/129dba1e-73df-4ea4-96c0-3eba78d568ba/volumes/kubernetes.io~projected/kube-api-access-rbmb9 DeviceMajor:0 DeviceMinor:410 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0dd6efeec5aa4e3106337fbe40d1f21673b7458663cc20e53895ac682e535656/userdata/shm DeviceMajor:0 DeviceMinor:474 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9ed78a9839985d5d2408f3da695d76e5290df2767573b14d7ae5d1aa3204d65a/userdata/shm DeviceMajor:0 DeviceMinor:1077 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1139 DeviceMajor:0 DeviceMinor:1139 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-188 
DeviceMajor:0 DeviceMinor:188 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/94f5fac8-582e-44a3-8dd5-c4e6e80829ef/volumes/kubernetes.io~projected/kube-api-access-cpmdw DeviceMajor:0 DeviceMinor:629 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/071566ae-a9ae-4aa9-9dc3-38602363be72/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:488 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-428 DeviceMajor:0 DeviceMinor:428 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a592584f1d491ed515603e4859ea07fdb301bfabbc222443eff56b510fc57717/userdata/shm DeviceMajor:0 DeviceMinor:1163 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1d481a79-f565-4c7f-84cc-207fc3117c23/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:462 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1262 DeviceMajor:0 DeviceMinor:1262 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1136 DeviceMajor:0 DeviceMinor:1136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/124ba199-b79a-4e5c-8512-cc0ae50f73c8/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:591 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-612 DeviceMajor:0 DeviceMinor:612 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a2d6e329-7ad8-4fc2-accc-66827f11743d/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1043 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 
Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9a905fb6-17d4-413b-9107-859c804ce906/volumes/kubernetes.io~projected/kube-api-access-mgs5v DeviceMajor:0 DeviceMinor:141 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/57edd3b523cd1b85d285ca94528fb2e1279d3c9bd1b74461a1727888cc91ac92/userdata/shm DeviceMajor:0 DeviceMinor:503 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6b7d1adb-b23b-4702-be7d-27e818e8fd63/volumes/kubernetes.io~projected/kube-api-access-cr7lv DeviceMajor:0 DeviceMinor:913 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-794 DeviceMajor:0 DeviceMinor:794 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b65552bcab35fe164881e8ac001f1baa5fa85be7a3b6063a3edbe790f67bf18a/userdata/shm DeviceMajor:0 DeviceMinor:1343 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-504 DeviceMajor:0 DeviceMinor:504 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-525 DeviceMajor:0 DeviceMinor:525 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/086a5a64a12e3769988f4ec34ed2d0887c71f02b30e735e84ddbfdf4eb16618d/userdata/shm DeviceMajor:0 DeviceMinor:928 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ec0152f98764cdbb982d9d6afbcb74cd9b99357115a9c691e939ad71b14ad183/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-682 DeviceMajor:0 DeviceMinor:682 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/626c4f7a-59ee-45da-9198-05dd2c42ac42/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:876 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1268 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/af61bda0-c7b4-489d-a671-eaa5299942fe/volumes/kubernetes.io~projected/kube-api-access-jt7w4 DeviceMajor:0 DeviceMinor:268 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~projected/kube-api-access-jpgqg DeviceMajor:0 DeviceMinor:279 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1168 DeviceMajor:0 DeviceMinor:1168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/volumes/kubernetes.io~projected/kube-api-access-gxjqf DeviceMajor:0 DeviceMinor:261 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-576 DeviceMajor:0 DeviceMinor:576 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-854 DeviceMajor:0 DeviceMinor:854 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1a48fa419617a63ec8e2935cb2e257afe77ca02b6d759f71cc3cf2b3946d190c/userdata/shm DeviceMajor:0 DeviceMinor:117 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-732 DeviceMajor:0 DeviceMinor:732 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-808 DeviceMajor:0 
DeviceMinor:808 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/61d90bf3-02df-48c8-b2ec-09a1653b0800/volumes/kubernetes.io~projected/kube-api-access-5wbvx DeviceMajor:0 DeviceMinor:269 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-605 DeviceMajor:0 DeviceMinor:605 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b4422676-9a70-4973-8299-7b40a66e9c96/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:895 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/77a5b96685468a1686135c8d7d6d053d9bc8223dda29da38cb0e4b9ffeb56e90/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-183 DeviceMajor:0 DeviceMinor:183 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-656 DeviceMajor:0 DeviceMinor:656 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~projected/kube-api-access-sj92w DeviceMajor:0 DeviceMinor:1269 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1302 DeviceMajor:0 DeviceMinor:1302 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~projected/kube-api-access-jcb68 DeviceMajor:0 DeviceMinor:266 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-340 DeviceMajor:0 DeviceMinor:340 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1283 DeviceMajor:0 DeviceMinor:1283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/b616967df2f9b9831e325809cacecbe30b62dd3ec32bcf016d1563ff3ad31860/userdata/shm DeviceMajor:0 DeviceMinor:408 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c9858df9f585446eefac53619f522937c2be744d976350b3d2fae4ea17d7449e/userdata/shm DeviceMajor:0 DeviceMinor:875 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:247 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/509218f044076ea16f2a86823735e4d543562d1744406223dc68c1c720aa876c/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1345 DeviceMajor:0 DeviceMinor:1345 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-218 DeviceMajor:0 DeviceMinor:218 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/82a4950a547d0a59e18c269c45642d4e42307ae5014626ff584ece03ffa671c2/userdata/shm DeviceMajor:0 DeviceMinor:599 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8385a176-0e12-47ef-862e-8331e6734b9c/volumes/kubernetes.io~projected/kube-api-access-lnnxm DeviceMajor:0 DeviceMinor:926 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1180 DeviceMajor:0 DeviceMinor:1180 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/4ae9c7ad8143a0b1cfbbc04f9419df3b288d0c3ef1448b00390641786802dac4/userdata/shm DeviceMajor:0 DeviceMinor:505 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f3cfbf80866e1ffdd35b49c1ad868e8dd39bef071d0be58efd7099ec81a6c339/userdata/shm DeviceMajor:0 DeviceMinor:642 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d973c9bc-8097-489c-9b8b-70b775177c41/volumes/kubernetes.io~projected/kube-api-access-gkb9r DeviceMajor:0 DeviceMinor:1046 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2f085db99c3eb79269fb1e6fd494d3581c1cf5a588e1bb05f613f668bdfc997e/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a3a77a00a966d03623fbb6190f7a54610fa74ee604fa29802c44b60a21f260b9/userdata/shm DeviceMajor:0 DeviceMinor:509 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-378 DeviceMajor:0 DeviceMinor:378 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1141 DeviceMajor:0 DeviceMinor:1141 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/65d9f008-7777-48fe-85fe-9d54a7bbcea9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:241 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7a489b2f48772d80be863a6db3f491f779fbf0d6ac9f7d5ba2c4ec793715f4de/userdata/shm DeviceMajor:0 DeviceMinor:932 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/30157c99e347dac95082456d5e90aaa231761068887f6a65d5089463dbf44226/userdata/shm DeviceMajor:0 DeviceMinor:1231 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 DeviceMinor:41 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-424 DeviceMajor:0 DeviceMinor:424 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-971 DeviceMajor:0 DeviceMinor:971 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d075439c-721d-432b-b4f9-9f078132bf92/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1040 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1014 DeviceMajor:0 DeviceMinor:1014 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1210 DeviceMajor:0 DeviceMinor:1210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-460 DeviceMajor:0 DeviceMinor:460 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1219 DeviceMajor:0 DeviceMinor:1219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5864628e0f7acbb3a1150a63134adcb1c6b05e8c9b623b722fd4249df83d522e/userdata/shm DeviceMajor:0 DeviceMinor:502 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/68954d1e-2147-4465-9817-a3c04cbc19b0/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:536 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-619 DeviceMajor:0 DeviceMinor:619 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9/volumes/kubernetes.io~projected/kube-api-access-562gp DeviceMajor:0 DeviceMinor:112 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-440 DeviceMajor:0 DeviceMinor:440 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c734c89-515e-4ff0-82d1-831ddaf0b99e/volumes/kubernetes.io~projected/kube-api-access-rddwz DeviceMajor:0 DeviceMinor:255 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/volumes/kubernetes.io~projected/kube-api-access-8xbnc DeviceMajor:0 DeviceMinor:267 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c8646e5c-c2ce-48e6-b757-58044769f479/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:1129 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/31e31afc-79d5-46f4-9835-0fd11da9465f/volumes/kubernetes.io~projected/kube-api-access-jh2m4 DeviceMajor:0 DeviceMinor:139 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-828 DeviceMajor:0 DeviceMinor:828 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-936 DeviceMajor:0 DeviceMinor:936 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7c393109-8c98-4a73-be1a-608038e5d094/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1298 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:264 Capacity:49335554048 
Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-354 DeviceMajor:0 DeviceMinor:354 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-795 DeviceMajor:0 DeviceMinor:795 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-836 DeviceMajor:0 DeviceMinor:836 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1196 DeviceMajor:0 DeviceMinor:1196 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-573 DeviceMajor:0 DeviceMinor:573 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/187af679-a062-4f41-81f2-33545f76febf/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:486 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6d56f334-6c7b-4c92-9665-56300d44f9a3/volumes/kubernetes.io~projected/kube-api-access-k8ckv DeviceMajor:0 DeviceMinor:791 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1244 DeviceMajor:0 DeviceMinor:1244 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/90185a33c5824935ed29e0663472f7e339a5f2977a9bf3a460b9dc4b17b433c5/userdata/shm DeviceMajor:0 DeviceMinor:293 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1206 DeviceMajor:0 DeviceMinor:1206 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c97d328c-95b6-4511-aa90-531ab42b9653/volumes/kubernetes.io~projected/kube-api-access-qzrph DeviceMajor:0 DeviceMinor:912 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a2d6e329-7ad8-4fc2-accc-66827f11743d/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1041 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1294 DeviceMajor:0 DeviceMinor:1294 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/aa267e55-eef2-447f-b2ff-57c1ec2930be/volumes/kubernetes.io~projected/kube-api-access-nx8s7 DeviceMajor:0 DeviceMinor:763 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-372 DeviceMajor:0 DeviceMinor:372 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bef471f18c3a5fc8cbfeb510c0e87f5bef875fc2331927f07cde13d3315509be/userdata/shm DeviceMajor:0 DeviceMinor:930 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/93996d5f48081a9791fdf6e6762201dc4779ca732e535e3274b5773782da8cf9/userdata/shm DeviceMajor:0 DeviceMinor:1051 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5835c841de8851cc594c071b21f8e95885283a9272de7eff7fcffb6067e8c9a/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-429 DeviceMajor:0 DeviceMinor:429 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-547 DeviceMajor:0 DeviceMinor:547 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1170 DeviceMajor:0 DeviceMinor:1170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fc216ba1-144a-4cc8-93db-85ab558a166a/volumes/kubernetes.io~projected/kube-api-access-7gwpz DeviceMajor:0 DeviceMinor:47 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-838 DeviceMajor:0 DeviceMinor:838 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7c393109-8c98-4a73-be1a-608038e5d094/volumes/kubernetes.io~projected/kube-api-access-f54vt DeviceMajor:0 DeviceMinor:1299 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5c5c50866e3cb4c94d1db9f4dadfbc576e6ef20acac9999e34844dc18779f223/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-554 DeviceMajor:0 DeviceMinor:554 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-578 DeviceMajor:0 DeviceMinor:578 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6d56f334-6c7b-4c92-9665-56300d44f9a3/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:1265 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a2d6e329-7ad8-4fc2-accc-66827f11743d/volumes/kubernetes.io~projected/kube-api-access-8q8jf DeviceMajor:0 DeviceMinor:1044 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2102e834-2b36-49de-a99e-c2dbe64d722f/volumes/kubernetes.io~projected/kube-api-access-hq2mb DeviceMajor:0 DeviceMinor:989 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-703 DeviceMajor:0 DeviceMinor:703 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-407 DeviceMajor:0 DeviceMinor:407 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 
DeviceMinor:1226 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/volumes/kubernetes.io~projected/kube-api-access-wn8df DeviceMajor:0 DeviceMinor:280 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-531 DeviceMajor:0 DeviceMinor:531 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b58e9d93-7683-440d-a603-9543e5455490/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:946 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c435347a-ac01-46af-8192-9ef2d632bdfb/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1227 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/22a30079-d7fc-49cf-882e-1c5022cb5bf6/volumes/kubernetes.io~projected/kube-api-access-bh874 DeviceMajor:0 DeviceMinor:258 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/071566ae-a9ae-4aa9-9dc3-38602363be72/volumes/kubernetes.io~projected/kube-api-access-hrh2k DeviceMajor:0 DeviceMinor:262 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7307f70e-ee5b-4f81-8155-718a02c9efe7/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:915 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-329 DeviceMajor:0 DeviceMinor:329 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-544 DeviceMajor:0 DeviceMinor:544 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-886 DeviceMajor:0 DeviceMinor:886 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7c393109-8c98-4a73-be1a-608038e5d094/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1297 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/124ba199-b79a-4e5c-8512-cc0ae50f73c8/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:592 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/46b63befb37c207e59dcc8df42c0e9e3530c0f2f24f79765bda06ad35b9b950d/userdata/shm DeviceMajor:0 DeviceMinor:48 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/75486ba2-6fde-456f-8846-2af67e58d585/volumes/kubernetes.io~projected/kube-api-access-wjb95 DeviceMajor:0 DeviceMinor:1102 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-119 DeviceMajor:0 DeviceMinor:119 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-352 DeviceMajor:0 DeviceMinor:352 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b58e9d93-7683-440d-a603-9543e5455490/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:947 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-944 DeviceMajor:0 DeviceMinor:944 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c0026d8b6e87a23d662a3c94357c0b35295466aca75ebd69cf4fb6b87a87fe76/userdata/shm DeviceMajor:0 DeviceMinor:143 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-819 DeviceMajor:0 DeviceMinor:819 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-872 DeviceMajor:0 DeviceMinor:872 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-556 DeviceMajor:0 DeviceMinor:556 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-920 DeviceMajor:0 DeviceMinor:920 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1242 DeviceMajor:0 DeviceMinor:1242 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1304 DeviceMajor:0 DeviceMinor:1304 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-986 DeviceMajor:0 DeviceMinor:986 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/volumes/kubernetes.io~projected/kube-api-access-7nzlr DeviceMajor:0 DeviceMinor:263 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bc1acede92d3904b085d891408e47b6331ba105ca16c08deba24871e1ded582f/userdata/shm DeviceMajor:0 DeviceMinor:411 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-564 DeviceMajor:0 DeviceMinor:564 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/801742a6-3735-4883-9676-e852dc4173d2/volumes/kubernetes.io~projected/kube-api-access-qxqt4 DeviceMajor:0 DeviceMinor:278 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cba6e963b84ef59c8499695b7e9c3fc6bfc32f8754ee29ed5aa61fc3c50b955c/userdata/shm DeviceMajor:0 DeviceMinor:917 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-177 DeviceMajor:0 DeviceMinor:177 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/553d4535-9985-47e2-83ee-8fcfb6035e7b/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:273 
Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-427 DeviceMajor:0 DeviceMinor:427 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/79cd9922eddeda66f86396279d7c2d92bdfdde5d55f7ab9b86712ce128d7d382/userdata/shm DeviceMajor:0 DeviceMinor:1103 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1105 DeviceMajor:0 DeviceMinor:1105 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d481a79-f565-4c7f-84cc-207fc3117c23/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:438 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/08e27254-e906-484a-b346-036f898be3ae/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:470 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8d317dcb-ea6a-4066-b197-5ee960dec01a/volumes/kubernetes.io~projected/kube-api-access-nwptc DeviceMajor:0 DeviceMinor:676 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0592ebe07bf5febe5898e5f99574d61161c0cfa6ea6743adf0c7c030853141ad/userdata/shm DeviceMajor:0 DeviceMinor:959 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1107 DeviceMajor:0 DeviceMinor:1107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c6d23570-21d6-4b08-83fc-8b0827c25313/volumes/kubernetes.io~projected/kube-api-access-czt92 DeviceMajor:0 DeviceMinor:252 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bdb8ad9bd5f944be0c16716ab7cf723ba4fecb8874a24d8035e247bed4275d02/userdata/shm DeviceMajor:0 DeviceMinor:365 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-449 DeviceMajor:0 DeviceMinor:449 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/afa3f59e2bc7466bd1b06c51e7ed2d9d6a3926c00535b006d8f4a5730c12a974/userdata/shm DeviceMajor:0 DeviceMinor:1134 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-545 DeviceMajor:0 DeviceMinor:545 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-122 DeviceMajor:0 DeviceMinor:122 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b4422676-9a70-4973-8299-7b40a66e9c96/volumes/kubernetes.io~projected/kube-api-access-27gfx DeviceMajor:0 DeviceMinor:900 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:492 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-560 DeviceMajor:0 DeviceMinor:560 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~projected/kube-api-access-jg8h7 DeviceMajor:0 DeviceMinor:265 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c9a0cb53cadb3321345d154cf27268733399d5b983fe25d9e3ac83b00fa3506d/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-336 DeviceMajor:0 DeviceMinor:336 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e92e0041b6c4bdb12ce4e7a526a8155669347c6f7534daf537c2b7896eac3825/userdata/shm DeviceMajor:0 DeviceMinor:506 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-382 DeviceMajor:0 DeviceMinor:382 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-938 DeviceMajor:0 DeviceMinor:938 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/655e4000-0ad4-4349-8c31-e0c952e4be30/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:1157 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/784b804f-6bcf-4cbd-a19e-9b1fa244354e/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1198 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-906 DeviceMajor:0 DeviceMinor:906 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/784b804f-6bcf-4cbd-a19e-9b1fa244354e/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1072 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3de92b39f5eed6fb2072489b003ac88b141cc4450863a8a84bd84754c9097e8a/userdata/shm DeviceMajor:0 DeviceMinor:420 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fc76384d-b288-4d30-bc77-f696b62a5f30/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:493 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c066e0aa98f24b311ae58142339472cef6d647c5cb0ec12d82196966a66f6bc2/userdata/shm DeviceMajor:0 DeviceMinor:990 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a00011bbe3917f68bb68f28876dff59eea7dbd62d26bc18f5f5ed40cb1d0b447/userdata/shm DeviceMajor:0 DeviceMinor:1024 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1127 DeviceMajor:0 DeviceMinor:1127 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-508 
DeviceMajor:0 DeviceMinor:508 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/257db04b-7203-4a1d-b3d4-bd4db258a3cc/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/632fa4c3-b717-432c-8c5f-8d809f69c48b/volumes/kubernetes.io~projected/kube-api-access-8bpwm DeviceMajor:0 DeviceMinor:270 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-541 DeviceMajor:0 DeviceMinor:541 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a681cbc579a95de476c193412db5500c7b6a259702d2ab059c0ee35c97e7da06/userdata/shm DeviceMajor:0 DeviceMinor:496 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/14723cb7-2d96-42b7-b559-70386c4c841c/volumes/kubernetes.io~projected/kube-api-access-7lw7x DeviceMajor:0 DeviceMinor:958 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4fd2c79d-1e10-4f09-8a33-c66598abc99a/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:67 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-223 DeviceMajor:0 DeviceMinor:223 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-193 DeviceMajor:0 DeviceMinor:193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-689 DeviceMajor:0 DeviceMinor:689 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ad81b5bd-2f97-4e7e-a12b-746998fa59f2/volumes/kubernetes.io~projected/kube-api-access-9t5jv DeviceMajor:0 DeviceMinor:925 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1059 DeviceMajor:0 DeviceMinor:1059 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/722d47350d1c81810576142df11eff4e518dcde59f93678f428ad5eb7002bb4a/userdata/shm DeviceMajor:0 DeviceMinor:521 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1356 DeviceMajor:0 DeviceMinor:1356 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1358 DeviceMajor:0 DeviceMinor:1358 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4a7917f93b759157396676df5270d9f55ac3fb5ce7081908f3a79c2dd1fbffdd/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/63333766efa7717806a0ceafcfe5e910596ee1f9959715b67862349cd0661743/userdata/shm DeviceMajor:0 DeviceMinor:1047 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c97d328c-95b6-4511-aa90-531ab42b9653/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:1115 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:490 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/626c4f7a-59ee-45da-9198-05dd2c42ac42/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:881 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-426 DeviceMajor:0 DeviceMinor:426 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1000 DeviceMajor:0 DeviceMinor:1000 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 
DeviceMinor:1225 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/124ba199-b79a-4e5c-8512-cc0ae50f73c8/volumes/kubernetes.io~projected/kube-api-access-dmp42 DeviceMajor:0 DeviceMinor:594 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3db03cef-d297-4bf7-8e52-dd0b18882d07/volumes/kubernetes.io~projected/kube-api-access-xrg27 DeviceMajor:0 DeviceMinor:473 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-940 DeviceMajor:0 DeviceMinor:940 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7307f70e-ee5b-4f81-8155-718a02c9efe7/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:914 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1070 DeviceMajor:0 DeviceMinor:1070 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/798daf69301c189b976c0bf567e715514f72cff14e7ac9ab6e91e0049055219a/userdata/shm DeviceMajor:0 DeviceMinor:307 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/88069f4ccbdf201c4be62b11d0e703527a7a79f09f40906dc3a787d78261c8ef/userdata/shm DeviceMajor:0 DeviceMinor:500 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/037eeb0eb6e9db7c0c16d981af4599e4cf0a6c4e36b47a40589e4b6308c2db61/userdata/shm DeviceMajor:0 DeviceMinor:105 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5029165f3acbba6c500e380aa4ddf091a7ab8015a5fcfab4cef7dd1e1f0cbff/userdata/shm DeviceMajor:0 DeviceMinor:1266 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-380 DeviceMajor:0 DeviceMinor:380 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-574 DeviceMajor:0 DeviceMinor:574 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1083 DeviceMajor:0 DeviceMinor:1083 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/76d3da23-3347-4a5c-b328-d92671897ecc/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:1205 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2f38747bdec24188d4ffe8cfb159d9a08ab099ae4fe10c6fb530c6bc6745fe0f/userdata/shm DeviceMajor:0 DeviceMinor:1238 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f565a312b6fdba1e4420f7c51d0c06303db46761e8bdf7c0064ba897805dc24a/userdata/shm DeviceMajor:0 DeviceMinor:644 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1061 DeviceMajor:0 DeviceMinor:1061 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9768ef3d-4f12-4303-98cb-56f8ebe05039/volumes/kubernetes.io~projected/kube-api-access-tk6jm DeviceMajor:0 DeviceMinor:1075 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-687 DeviceMajor:0 DeviceMinor:687 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-476 DeviceMajor:0 DeviceMinor:476 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-400 DeviceMajor:0 DeviceMinor:400 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1189 DeviceMajor:0 DeviceMinor:1189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-451 DeviceMajor:0 DeviceMinor:451 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/833c8661-28ca-463a-ac61-6edb961056e3/volumes/kubernetes.io~projected/kube-api-access-2ghlk DeviceMajor:0 DeviceMinor:640 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/cdbde712-c8dd-4011-adcb-af895abce94c/volumes/kubernetes.io~projected/kube-api-access-9fj8w DeviceMajor:0 DeviceMinor:1230 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-776 DeviceMajor:0 DeviceMinor:776 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ba1306f7-029b-4d43-ba3c-5738da9148d6/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:1017 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9768ef3d-4f12-4303-98cb-56f8ebe05039/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:905 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1109 DeviceMajor:0 DeviceMinor:1109 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d7bc3eacfb0cf92ff3aa201ca8580ef11806f506d319e9d528672f5e695d8979/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-648 DeviceMajor:0 DeviceMinor:648 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8d317dcb-ea6a-4066-b197-5ee960dec01a/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:761 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-694 DeviceMajor:0 DeviceMinor:694 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e6d0ea7a-6784-4c13-ad65-6c947dbcf136/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 
DeviceMinor:802 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-384 DeviceMajor:0 DeviceMinor:384 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-98 DeviceMajor:0 DeviceMinor:98 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9768ef3d-4f12-4303-98cb-56f8ebe05039/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1073 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2b167b7b-2280-4c82-ac78-71c57aebe503/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:257 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/906174604cb39234c29ce4879ec0f4d93014bdd017a01d3e85d6c19518222596/userdata/shm DeviceMajor:0 DeviceMinor:507 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-664 DeviceMajor:0 DeviceMinor:664 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1212 DeviceMajor:0 DeviceMinor:1212 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-685 DeviceMajor:0 DeviceMinor:685 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-658 DeviceMajor:0 DeviceMinor:658 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7cbf31d43472a3a7627226214b8578cd050b8394e6c44d935043c903b69b9fb9/userdata/shm DeviceMajor:0 DeviceMinor:1053 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1240 DeviceMajor:0 DeviceMinor:1240 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1275 DeviceMajor:0 DeviceMinor:1275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-418 DeviceMajor:0 DeviceMinor:418 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/4b2b7830-6ee0-4d87-a57b-dc668de4b39a/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:729 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4b2b7830-6ee0-4d87-a57b-dc668de4b39a/volumes/kubernetes.io~projected/kube-api-access-pnhjw DeviceMajor:0 DeviceMinor:684 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-181 DeviceMajor:0 DeviceMinor:181 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-976 DeviceMajor:0 DeviceMinor:976 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~secret/telemeter-client-tls DeviceMajor:0 DeviceMinor:1273 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7307f70e-ee5b-4f81-8155-718a02c9efe7/volumes/kubernetes.io~projected/kube-api-access-dzrmf DeviceMajor:0 DeviceMinor:916 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/655e4000-0ad4-4349-8c31-e0c952e4be30/volumes/kubernetes.io~projected/kube-api-access-qf69t DeviceMajor:0 DeviceMinor:975 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1049 DeviceMajor:0 DeviceMinor:1049 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1314 DeviceMajor:0 DeviceMinor:1314 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c78b15cceeb9a13c85a4191822de34b4c848b664ef3622c58cc74eb63d4ebbb5/userdata/shm DeviceMajor:0 DeviceMinor:294 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-674 DeviceMajor:0 DeviceMinor:674 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes/kubernetes.io~secret/secret-telemeter-client DeviceMajor:0 DeviceMinor:1272 
Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-745 DeviceMajor:0 DeviceMinor:745 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-398 DeviceMajor:0 DeviceMinor:398 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:026610117c01997 MacAddress:da:23:3f:a8:e1:e1 Speed:10000 Mtu:8900} {Name:0760e00b9323630 MacAddress:66:45:cd:ac:4d:b6 Speed:10000 Mtu:8900} {Name:086a5a64a12e376 MacAddress:1a:0e:95:04:61:a3 Speed:10000 Mtu:8900} {Name:086d9bb4b9a7ac8 MacAddress:4a:05:ad:99:74:05 Speed:10000 Mtu:8900} {Name:0dd6efeec5aa4e3 MacAddress:b2:7d:3e:d8:77:01 Speed:10000 Mtu:8900} {Name:16817c879758d5d MacAddress:32:07:26:5d:be:ca Speed:10000 Mtu:8900} {Name:1698c2cc5bd5ca4 MacAddress:3a:14:b3:2c:c6:ca Speed:10000 Mtu:8900} {Name:260124ead6b34d5 MacAddress:a6:4c:67:c6:a1:82 Speed:10000 Mtu:8900} {Name:298673e77b46ac4 MacAddress:36:dc:16:e0:69:12 Speed:10000 Mtu:8900} {Name:2f085db99c3eb79 MacAddress:de:17:45:1e:8e:ed Speed:10000 Mtu:8900} {Name:2f38747bdec2418 MacAddress:1e:af:2b:d5:1a:07 Speed:10000 Mtu:8900} {Name:31de4b8284b14c5 MacAddress:9e:03:56:60:45:d6 Speed:10000 Mtu:8900} {Name:3de92b39f5eed6f MacAddress:96:81:ee:46:97:b9 Speed:10000 Mtu:8900} {Name:46b63befb37c207 MacAddress:96:1b:02:ed:f7:cb Speed:10000 Mtu:8900} {Name:4ae9c7ad8143a0b MacAddress:66:8d:90:42:e9:61 Speed:10000 Mtu:8900} {Name:509218f044076ea MacAddress:7a:2e:9c:fc:ab:87 Speed:10000 Mtu:8900} {Name:564e010b4acb371 MacAddress:8e:55:55:80:58:30 Speed:10000 Mtu:8900} {Name:57edd3b523cd1b8 MacAddress:2a:49:1e:be:9b:bf Speed:10000 Mtu:8900} 
{Name:5864628e0f7acbb MacAddress:1a:bb:6b:c3:8f:bd Speed:10000 Mtu:8900} {Name:5922fb8c007ad59 MacAddress:72:77:d8:c6:2e:9f Speed:10000 Mtu:8900} {Name:68f6c5cb6453d46 MacAddress:ae:e4:2f:88:66:d9 Speed:10000 Mtu:8900} {Name:6968fe4893506f2 MacAddress:36:1a:a7:4e:29:47 Speed:10000 Mtu:8900} {Name:722d47350d1c818 MacAddress:16:b8:16:b1:c3:9c Speed:10000 Mtu:8900} {Name:798daf69301c189 MacAddress:aa:29:f4:9b:e1:bb Speed:10000 Mtu:8900} {Name:79cd9922eddeda6 MacAddress:0a:e4:fa:e4:a8:48 Speed:10000 Mtu:8900} {Name:7a489b2f48772d8 MacAddress:56:4b:58:bf:9d:5f Speed:10000 Mtu:8900} {Name:7cbf31d43472a3a MacAddress:e2:3e:78:64:4c:4f Speed:10000 Mtu:8900} {Name:80a35c92c437f32 MacAddress:26:7f:aa:92:38:f8 Speed:10000 Mtu:8900} {Name:82a4950a547d0a5 MacAddress:8a:ce:0c:2c:8a:24 Speed:10000 Mtu:8900} {Name:88069f4ccbdf201 MacAddress:46:ae:f0:17:00:3c Speed:10000 Mtu:8900} {Name:90185a33c582493 MacAddress:5e:8f:e7:15:b5:e5 Speed:10000 Mtu:8900} {Name:906174604cb3923 MacAddress:26:08:95:da:f0:d5 Speed:10000 Mtu:8900} {Name:93996d5f48081a9 MacAddress:82:48:e4:e4:72:29 Speed:10000 Mtu:8900} {Name:a00011bbe3917f6 MacAddress:9e:ed:db:b2:8e:88 Speed:10000 Mtu:8900} {Name:a3a77a00a966d03 MacAddress:2a:04:f4:12:3d:a8 Speed:10000 Mtu:8900} {Name:a576c816a4856d1 MacAddress:22:17:50:38:06:64 Speed:10000 Mtu:8900} {Name:a592584f1d491ed MacAddress:66:b2:03:f7:42:0a Speed:10000 Mtu:8900} {Name:a681cbc579a95de MacAddress:36:fe:f4:92:88:a9 Speed:10000 Mtu:8900} {Name:ac3405a44e64442 MacAddress:6a:7f:a2:d7:31:26 Speed:10000 Mtu:8900} {Name:af54fa9c62b28e6 MacAddress:ba:8c:40:67:fe:c3 Speed:10000 Mtu:8900} {Name:afa3f59e2bc7466 MacAddress:fe:57:77:53:f2:ec Speed:10000 Mtu:8900} {Name:b52356412bf9fd6 MacAddress:de:ee:89:35:88:30 Speed:10000 Mtu:8900} {Name:b616967df2f9b98 MacAddress:fa:4e:09:d6:ee:6e Speed:10000 Mtu:8900} {Name:b654a908d6c1613 MacAddress:6e:3d:a4:0e:b6:80 Speed:10000 Mtu:8900} {Name:b65552bcab35fe1 MacAddress:fe:d4:e4:4e:f4:5b Speed:10000 Mtu:8900} {Name:b7039f4f79e0da9 
MacAddress:a6:31:60:05:6a:83 Speed:10000 Mtu:8900} {Name:bc1acede92d3904 MacAddress:62:58:2c:62:77:4e Speed:10000 Mtu:8900} {Name:bef471f18c3a5fc MacAddress:7a:b7:fd:1a:e4:66 Speed:10000 Mtu:8900} {Name:bf4ca08876e89c1 MacAddress:1a:c3:79:ff:37:3c Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:7e:ca:dd:4c:57:66 Speed:0 Mtu:8900} {Name:c15c55254b60eef MacAddress:66:1e:55:c7:54:22 Speed:10000 Mtu:8900} {Name:c5029165f3acbba MacAddress:46:f5:0e:21:d4:c6 Speed:10000 Mtu:8900} {Name:c73742e20a24cd4 MacAddress:a6:9b:b0:78:5d:89 Speed:10000 Mtu:8900} {Name:c78b15cceeb9a13 MacAddress:62:3c:02:fa:20:d2 Speed:10000 Mtu:8900} {Name:c9a0cb53cadb332 MacAddress:e6:d7:3d:0e:ab:2a Speed:10000 Mtu:8900} {Name:cba6e963b84ef59 MacAddress:26:e0:75:90:66:14 Speed:10000 Mtu:8900} {Name:e92e0041b6c4bdb MacAddress:6e:33:8c:8c:45:31 Speed:10000 Mtu:8900} {Name:ec0152f98764cdb MacAddress:82:e3:9e:d4:77:ae Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:79:b8:2d Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:97:d0:9b Speed:-1 Mtu:9000} {Name:f3cfbf80866e1ff MacAddress:72:d4:5a:fa:fa:a2 Speed:10000 Mtu:8900} {Name:f565a312b6fdba1 MacAddress:72:c4:63:a4:1a:74 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:fa:aa:43:f9:eb:48 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: 
DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 
Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 17 15:15:38.290257 master-0 kubenswrapper[26425]: I0217 15:15:38.289321 26425 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 17 15:15:38.290257 master-0 kubenswrapper[26425]: I0217 15:15:38.289498 26425 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 17 15:15:38.291066 master-0 kubenswrapper[26425]: I0217 15:15:38.290809 26425 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 17 15:15:38.292183 master-0 kubenswrapper[26425]: I0217 15:15:38.291258 26425 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 17 15:15:38.292547 master-0 kubenswrapper[26425]: I0217 15:15:38.292186 26425 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 17 15:15:38.293513 master-0 kubenswrapper[26425]: I0217 15:15:38.292577 26425 topology_manager.go:138] "Creating topology manager with none policy"
Feb 17 15:15:38.293513 master-0 kubenswrapper[26425]: I0217 15:15:38.292598 26425 container_manager_linux.go:303] "Creating device plugin manager"
Feb 17 15:15:38.293513 master-0 kubenswrapper[26425]: I0217 15:15:38.292614 26425 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 17 15:15:38.293513 master-0 kubenswrapper[26425]: I0217 15:15:38.292655 26425 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 17 15:15:38.293513 master-0 kubenswrapper[26425]: I0217 15:15:38.292733 26425 state_mem.go:36] "Initialized new in-memory state store"
Feb 17 15:15:38.293513 master-0 kubenswrapper[26425]: I0217 15:15:38.292916 26425 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 17 15:15:38.293513 master-0 kubenswrapper[26425]: I0217 15:15:38.293009 26425 kubelet.go:418] "Attempting to sync node with API server"
Feb 17 15:15:38.293513 master-0 kubenswrapper[26425]: I0217 15:15:38.293029 26425 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 17 15:15:38.293513 master-0 kubenswrapper[26425]: I0217 15:15:38.293053 26425 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 17 15:15:38.294544 master-0 kubenswrapper[26425]: I0217 15:15:38.294490 26425 kubelet.go:324] "Adding apiserver pod source"
Feb 17 15:15:38.294544 master-0 kubenswrapper[26425]: I0217 15:15:38.294530 26425 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 17 15:15:38.297046 master-0 kubenswrapper[26425]: I0217 15:15:38.296828 26425 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1"
Feb 17 15:15:38.297046 master-0 kubenswrapper[26425]: I0217 15:15:38.297029 26425 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 17 15:15:38.297438 master-0 kubenswrapper[26425]: I0217 15:15:38.297363 26425 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 17 15:15:38.297582 master-0 kubenswrapper[26425]: I0217 15:15:38.297512 26425 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 17 15:15:38.297582 master-0 kubenswrapper[26425]: I0217 15:15:38.297530 26425 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 17 15:15:38.297582 master-0 kubenswrapper[26425]: I0217 15:15:38.297537 26425 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 17 15:15:38.297582 master-0 kubenswrapper[26425]: I0217 15:15:38.297545 26425 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 17 15:15:38.297582 master-0 kubenswrapper[26425]: I0217 15:15:38.297551 26425 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 17 15:15:38.297582 master-0 kubenswrapper[26425]: I0217 15:15:38.297558 26425 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 17 15:15:38.297582 master-0 kubenswrapper[26425]: I0217 15:15:38.297566 26425 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 17 15:15:38.297582 master-0 kubenswrapper[26425]: I0217 15:15:38.297572 26425 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 17 15:15:38.297582 master-0 kubenswrapper[26425]: I0217 15:15:38.297579 26425 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 17 15:15:38.297582 master-0 kubenswrapper[26425]: I0217 15:15:38.297586 26425 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 17 15:15:38.297582 master-0 kubenswrapper[26425]: I0217 15:15:38.297595 26425 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 17 15:15:38.297582 master-0 kubenswrapper[26425]: I0217 15:15:38.297608 26425 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 17 15:15:38.299068 master-0 kubenswrapper[26425]: I0217 15:15:38.297648 26425 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 17 15:15:38.299068 master-0 kubenswrapper[26425]: I0217 15:15:38.298001 26425 server.go:1280] "Started kubelet"
Feb 17 15:15:38.299068 master-0 kubenswrapper[26425]: I0217 15:15:38.298297 26425 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 17 15:15:38.298823 master-0 systemd[1]: Started Kubernetes Kubelet.
Feb 17 15:15:38.299962 master-0 kubenswrapper[26425]: I0217 15:15:38.298445 26425 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 17 15:15:38.299962 master-0 kubenswrapper[26425]: I0217 15:15:38.299680 26425 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 17 15:15:38.300882 master-0 kubenswrapper[26425]: I0217 15:15:38.300763 26425 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 17 15:15:38.302141 master-0 kubenswrapper[26425]: I0217 15:15:38.302085 26425 server.go:449] "Adding debug handlers to kubelet server"
Feb 17 15:15:38.317046 master-0 kubenswrapper[26425]: I0217 15:15:38.316067 26425 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 17 15:15:38.318635 master-0 kubenswrapper[26425]: I0217 15:15:38.318596 26425 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 17 15:15:38.324655 master-0 kubenswrapper[26425]: E0217 15:15:38.324538 26425 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 17 15:15:38.338799 master-0 kubenswrapper[26425]: I0217 15:15:38.338668 26425 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 17 15:15:38.338799 master-0 kubenswrapper[26425]: I0217 15:15:38.338789 26425 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 17 15:15:38.343506 master-0 kubenswrapper[26425]: I0217 15:15:38.339130 26425 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-18 14:51:47 +0000 UTC, rotation deadline is 2026-02-18 12:14:02.508996905 +0000 UTC
Feb 17 15:15:38.343506 master-0 kubenswrapper[26425]: I0217 15:15:38.339230 26425 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h58m24.169773625s for next certificate rotation
Feb 17 15:15:38.343506 master-0 kubenswrapper[26425]: I0217 15:15:38.339615 26425 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 17 15:15:38.343506 master-0 kubenswrapper[26425]: I0217 15:15:38.339642 26425 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 17 15:15:38.343506 master-0 kubenswrapper[26425]: I0217 15:15:38.339844 26425 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 17 15:15:38.343506 master-0 kubenswrapper[26425]: I0217 15:15:38.342323 26425 factory.go:55] Registering systemd factory
Feb 17 15:15:38.343506 master-0 kubenswrapper[26425]: I0217 15:15:38.342356 26425 factory.go:221] Registration of the systemd container factory successfully
Feb 17 15:15:38.344907 master-0 kubenswrapper[26425]: I0217 15:15:38.344861 26425 factory.go:153] Registering CRI-O factory
Feb 17 15:15:38.345100 master-0 kubenswrapper[26425]: I0217 15:15:38.345075 26425 factory.go:221] Registration of the crio container factory successfully
Feb 17 15:15:38.345656 master-0 kubenswrapper[26425]: I0217 15:15:38.345604 26425 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 17 15:15:38.347537 master-0 kubenswrapper[26425]: I0217 15:15:38.346108 26425 factory.go:103] Registering Raw factory
Feb 17 15:15:38.347537 master-0 kubenswrapper[26425]: I0217 15:15:38.346211 26425 manager.go:1196] Started watching for new ooms in manager
Feb 17 15:15:38.347537 master-0 kubenswrapper[26425]: I0217 15:15:38.347039 26425 manager.go:319] Starting recovery of all containers
Feb 17 15:15:38.348379 master-0 kubenswrapper[26425]: I0217 15:15:38.348337 26425 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 17 15:15:38.368121 master-0 kubenswrapper[26425]: I0217 15:15:38.367981 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50c51fe2-32aa-430f-8da0-7cf3b9519131" volumeName="kubernetes.io/empty-dir/50c51fe2-32aa-430f-8da0-7cf3b9519131-cache" seLinuxMountContext=""
Feb 17 15:15:38.368121 master-0 kubenswrapper[26425]: I0217 15:15:38.368117 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="68954d1e-2147-4465-9817-a3c04cbc19b0" volumeName="kubernetes.io/secret/68954d1e-2147-4465-9817-a3c04cbc19b0-catalogserver-certs" seLinuxMountContext=""
Feb 17 15:15:38.368385 master-0 kubenswrapper[26425]: I0217 15:15:38.368153 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6d23570-21d6-4b08-83fc-8b0827c25313" volumeName="kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics" seLinuxMountContext=""
Feb 17 15:15:38.368385 master-0 kubenswrapper[26425]: I0217 15:15:38.368181 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fce9579e-7383-421e-95dd-8f8b786817f9" volumeName="kubernetes.io/projected/fce9579e-7383-421e-95dd-8f8b786817f9-kube-api-access-7brbd" seLinuxMountContext=""
Feb 17 15:15:38.368385 master-0 kubenswrapper[26425]: I0217 15:15:38.368206 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4b2b7830-6ee0-4d87-a57b-dc668de4b39a" volumeName="kubernetes.io/empty-dir/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-tuned" seLinuxMountContext=""
Feb 17 15:15:38.368385 master-0 kubenswrapper[26425]: I0217 15:15:38.368231 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="257db04b-7203-4a1d-b3d4-bd4db258a3cc" volumeName="kubernetes.io/projected/257db04b-7203-4a1d-b3d4-bd4db258a3cc-kube-api-access-jg8h7" seLinuxMountContext=""
Feb 17 15:15:38.368385 master-0 kubenswrapper[26425]: I0217 15:15:38.368256 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b7d1adb-b23b-4702-be7d-27e818e8fd63" volumeName="kubernetes.io/projected/6b7d1adb-b23b-4702-be7d-27e818e8fd63-kube-api-access-cr7lv" seLinuxMountContext=""
Feb 17 15:15:38.368385 master-0 kubenswrapper[26425]: I0217 15:15:38.368282 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c393109-8c98-4a73-be1a-608038e5d094" volumeName="kubernetes.io/projected/7c393109-8c98-4a73-be1a-608038e5d094-kube-api-access-f54vt" seLinuxMountContext=""
Feb 17 15:15:38.368385 master-0 kubenswrapper[26425]: I0217 15:15:38.368311 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c6b911d-8db2-48e8-bce9-d4bcde1f55a0" volumeName="kubernetes.io/projected/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-kube-api-access-cpq86" seLinuxMountContext=""
Feb 17 15:15:38.368385 master-0 kubenswrapper[26425]: I0217 15:15:38.368342 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d317dcb-ea6a-4066-b197-5ee960dec01a" volumeName="kubernetes.io/configmap/8d317dcb-ea6a-4066-b197-5ee960dec01a-config-volume" seLinuxMountContext=""
Feb 17 15:15:38.368385 master-0 kubenswrapper[26425]: I0217 15:15:38.368372 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0f95c87-6a4a-44f2-b6d4-18f167ea430f" volumeName="kubernetes.io/secret/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-signing-key" seLinuxMountContext=""
Feb 17 15:15:38.368385 master-0 kubenswrapper[26425]: I0217 15:15:38.368396 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba1306f7-029b-4d43-ba3c-5738da9148d6" volumeName="kubernetes.io/configmap/ba1306f7-029b-4d43-ba3c-5738da9148d6-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368424 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="071566ae-a9ae-4aa9-9dc3-38602363be72" volumeName="kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368513 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c97d328c-95b6-4511-aa90-531ab42b9653" volumeName="kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368543 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e259b5a1-837b-4cde-85f7-cd5781af08bd" volumeName="kubernetes.io/secret/e259b5a1-837b-4cde-85f7-cd5781af08bd-serving-cert" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368567 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65d9f008-7777-48fe-85fe-9d54a7bbcea9" volumeName="kubernetes.io/configmap/65d9f008-7777-48fe-85fe-9d54a7bbcea9-config" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368593 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d481a79-f565-4c7f-84cc-207fc3117c23" volumeName="kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-etcd-client" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368623 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31e31afc-79d5-46f4-9835-0fd11da9465f" volumeName="kubernetes.io/secret/31e31afc-79d5-46f4-9835-0fd11da9465f-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368689 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75486ba2-6fde-456f-8846-2af67e58d585" volumeName="kubernetes.io/projected/75486ba2-6fde-456f-8846-2af67e58d585-kube-api-access-wjb95" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368716 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c6b911d-8db2-48e8-bce9-d4bcde1f55a0" volumeName="kubernetes.io/secret/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-webhook-cert" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368742 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8646e5c-c2ce-48e6-b757-58044769f479" volumeName="kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368771 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d481a79-f565-4c7f-84cc-207fc3117c23" volumeName="kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-image-import-ca" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368835 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14723cb7-2d96-42b7-b559-70386c4c841c" volumeName="kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-auth-proxy-config" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368864 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14723cb7-2d96-42b7-b559-70386c4c841c" volumeName="kubernetes.io/secret/14723cb7-2d96-42b7-b559-70386c4c841c-cloud-controller-manager-operator-tls" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368890 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="129dba1e-73df-4ea4-96c0-3eba78d568ba" volumeName="kubernetes.io/projected/129dba1e-73df-4ea4-96c0-3eba78d568ba-kube-api-access-rbmb9" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368918 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14723cb7-2d96-42b7-b559-70386c4c841c" volumeName="kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-images" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368951 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2102e834-2b36-49de-a99e-c2dbe64d722f" volumeName="kubernetes.io/projected/2102e834-2b36-49de-a99e-c2dbe64d722f-kube-api-access-hq2mb" seLinuxMountContext=""
Feb 17 15:15:38.368979 master-0 kubenswrapper[26425]: I0217 15:15:38.368979 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8385a176-0e12-47ef-862e-8331e6734b9c" volumeName="kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-service-ca-bundle" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369008 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3b6a099-f52a-428a-af09-d1842ce66891" volumeName="kubernetes.io/projected/a3b6a099-f52a-428a-af09-d1842ce66891-kube-api-access" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369035 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" volumeName="kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-config" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369160 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" volumeName="kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-client" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369188 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08e27254-e906-484a-b346-036f898be3ae" volumeName="kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-profile-collector-cert" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369212 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0f95c87-6a4a-44f2-b6d4-18f167ea430f" volumeName="kubernetes.io/projected/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-kube-api-access-gswxb" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369235 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b58e9d93-7683-440d-a603-9543e5455490" volumeName="kubernetes.io/projected/b58e9d93-7683-440d-a603-9543e5455490-kube-api-access-l2d4n" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369264 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c33efa80-fbeb-438a-86e3-d22d7c12d3e9" volumeName="kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369290 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b7d1adb-b23b-4702-be7d-27e818e8fd63" volumeName="kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369317 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76d3da23-3347-4a5c-b328-d92671897ecc" volumeName="kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-auth-proxy-config" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369344 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa267e55-eef2-447f-b2ff-57c1ec2930be" volumeName="kubernetes.io/projected/aa267e55-eef2-447f-b2ff-57c1ec2930be-kube-api-access-nx8s7" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369372 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d973c9bc-8097-489c-9b8b-70b775177c41" volumeName="kubernetes.io/projected/d973c9bc-8097-489c-9b8b-70b775177c41-kube-api-access-gkb9r" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369395 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61d90bf3-02df-48c8-b2ec-09a1653b0800" volumeName="kubernetes.io/empty-dir/61d90bf3-02df-48c8-b2ec-09a1653b0800-available-featuregates" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369443 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2102e834-2b36-49de-a99e-c2dbe64d722f" volumeName="kubernetes.io/secret/2102e834-2b36-49de-a99e-c2dbe64d722f-proxy-tls" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369547 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="553d4535-9985-47e2-83ee-8fcfb6035e7b" volumeName="kubernetes.io/configmap/553d4535-9985-47e2-83ee-8fcfb6035e7b-config" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369576 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61d90bf3-02df-48c8-b2ec-09a1653b0800" volumeName="kubernetes.io/projected/61d90bf3-02df-48c8-b2ec-09a1653b0800-kube-api-access-5wbvx" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369603 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb94b2b6-21a9-41bb-b822-9406a3ebb1e9" volumeName="kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-daemon-config" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369629 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="187af679-a062-4f41-81f2-33545f76febf" volumeName="kubernetes.io/configmap/187af679-a062-4f41-81f2-33545f76febf-trusted-ca" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369654 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4b2b7830-6ee0-4d87-a57b-dc668de4b39a" volumeName="kubernetes.io/projected/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-kube-api-access-pnhjw" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369690 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="553d4535-9985-47e2-83ee-8fcfb6035e7b" volumeName="kubernetes.io/secret/553d4535-9985-47e2-83ee-8fcfb6035e7b-serving-cert" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369738 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="784b804f-6bcf-4cbd-a19e-9b1fa244354e" volumeName="kubernetes.io/projected/784b804f-6bcf-4cbd-a19e-9b1fa244354e-kube-api-access-8cx29" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369771 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2d6e329-7ad8-4fc2-accc-66827f11743d" volumeName="kubernetes.io/configmap/a2d6e329-7ad8-4fc2-accc-66827f11743d-service-ca-bundle" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369802 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="071566ae-a9ae-4aa9-9dc3-38602363be72" volumeName="kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369841 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d481a79-f565-4c7f-84cc-207fc3117c23" volumeName="kubernetes.io/projected/1d481a79-f565-4c7f-84cc-207fc3117c23-kube-api-access-d2tcz" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369876 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="632fa4c3-b717-432c-8c5f-8d809f69c48b" volumeName="kubernetes.io/configmap/632fa4c3-b717-432c-8c5f-8d809f69c48b-iptables-alerter-script" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369933 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af61bda0-c7b4-489d-a671-eaa5299942fe" volumeName="kubernetes.io/configmap/af61bda0-c7b4-489d-a671-eaa5299942fe-config" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.369980 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8646e5c-c2ce-48e6-b757-58044769f479" volumeName="kubernetes.io/projected/c8646e5c-c2ce-48e6-b757-58044769f479-kube-api-access-t9wh2" seLinuxMountContext=""
Feb 17 15:15:38.369980 master-0 kubenswrapper[26425]: I0217 15:15:38.370025 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="124ba199-b79a-4e5c-8512-cc0ae50f73c8" volumeName="kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-etcd-serving-ca" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370072 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22a30079-d7fc-49cf-882e-1c5022cb5bf6" volumeName="kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370118 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c734c89-515e-4ff0-82d1-831ddaf0b99e" volumeName="kubernetes.io/secret/6c734c89-515e-4ff0-82d1-831ddaf0b99e-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370163 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7307f70e-ee5b-4f81-8155-718a02c9efe7" volumeName="kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cert" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370203 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d97ff4f-48eb-4d9f-9d60-3e09f0bde040" volumeName="kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370228 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdbde712-c8dd-4011-adcb-af895abce94c" volumeName="kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370348 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d075439c-721d-432b-b4f9-9f078132bf92" volumeName="kubernetes.io/secret/d075439c-721d-432b-b4f9-9f078132bf92-tls-certificates" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370390 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c58265d-32fb-4cf0-97d8-6c9a5d37fad9" volumeName="kubernetes.io/secret/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-serving-cert" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370426 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="655e4000-0ad4-4349-8c31-e0c952e4be30" volumeName="kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-images" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370521 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c734c89-515e-4ff0-82d1-831ddaf0b99e" volumeName="kubernetes.io/empty-dir/6c734c89-515e-4ff0-82d1-831ddaf0b99e-operand-assets" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370575 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2d6e329-7ad8-4fc2-accc-66827f11743d" volumeName="kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-stats-auth" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370605 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0f95c87-6a4a-44f2-b6d4-18f167ea430f" volumeName="kubernetes.io/configmap/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-signing-cabundle" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370667 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4422676-9a70-4973-8299-7b40a66e9c96" volumeName="kubernetes.io/projected/b4422676-9a70-4973-8299-7b40a66e9c96-kube-api-access-27gfx" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370698 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da06cfcb-7c78-4022-96b1-d858853f5adc" volumeName="kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-images" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370724 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08e27254-e906-484a-b346-036f898be3ae" volumeName="kubernetes.io/projected/08e27254-e906-484a-b346-036f898be3ae-kube-api-access-d8wxf" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370757 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b167b7b-2280-4c82-ac78-71c57aebe503" volumeName="kubernetes.io/configmap/2b167b7b-2280-4c82-ac78-71c57aebe503-config" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370797 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="187af679-a062-4f41-81f2-33545f76febf" volumeName="kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370823 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8379aee6-f810-4e5f-b209-8f6cb5f87df0" volumeName="kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370879 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8379aee6-f810-4e5f-b209-8f6cb5f87df0" volumeName="kubernetes.io/projected/8379aee6-f810-4e5f-b209-8f6cb5f87df0-kube-api-access-sj92w" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370907 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" volumeName="kubernetes.io/configmap/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-config" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370949 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" volumeName="kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-service-ca" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370973 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e43034-56d0-4fb2-8886-deb00b625686" volumeName="kubernetes.io/projected/70e43034-56d0-4fb2-8886-deb00b625686-kube-api-access" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.370997 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33e819b0-5a3f-4c2d-9dc7-8b0231804cdb" volumeName="kubernetes.io/projected/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-kube-api-access-wn8df" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371024 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52b28595-f0fc-49e2-9c95-43e5f1eb003f" volumeName="kubernetes.io/projected/52b28595-f0fc-49e2-9c95-43e5f1eb003f-kube-api-access-klfm5" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371057 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="833c8661-28ca-463a-ac61-6edb961056e3" volumeName="kubernetes.io/projected/833c8661-28ca-463a-ac61-6edb961056e3-kube-api-access-2ghlk" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371091 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad81b5bd-2f97-4e7e-a12b-746998fa59f2" volumeName="kubernetes.io/secret/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-cluster-storage-operator-serving-cert" seLinuxMountContext=""
Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371210 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf74b8c3-a5a6-4fb9-9d12-3a47c759f699" volumeName="kubernetes.io/projected/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-kube-api-access-6t2vg"
seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371241 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d481a79-f565-4c7f-84cc-207fc3117c23" volumeName="kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-trusted-ca-bundle" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371266 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a905fb6-17d4-413b-9107-859c804ce906" volumeName="kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-env-overrides" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371290 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" volumeName="kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-trusted-ca-bundle" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371313 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" volumeName="kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-serving-cert" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371336 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc216ba1-144a-4cc8-93db-85ab558a166a" volumeName="kubernetes.io/empty-dir/fc216ba1-144a-4cc8-93db-85ab558a166a-catalog-content" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371362 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8379aee6-f810-4e5f-b209-8f6cb5f87df0" 
volumeName="kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371385 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdbde712-c8dd-4011-adcb-af895abce94c" volumeName="kubernetes.io/configmap/cdbde712-c8dd-4011-adcb-af895abce94c-metrics-client-ca" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371410 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb153362-0abb-4aad-8975-532f6e72d032" volumeName="kubernetes.io/projected/fb153362-0abb-4aad-8975-532f6e72d032-kube-api-access-7bzqs" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371432 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c97d328c-95b6-4511-aa90-531ab42b9653" volumeName="kubernetes.io/projected/c97d328c-95b6-4511-aa90-531ab42b9653-kube-api-access-qzrph" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371481 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22a30079-d7fc-49cf-882e-1c5022cb5bf6" volumeName="kubernetes.io/configmap/22a30079-d7fc-49cf-882e-1c5022cb5bf6-trusted-ca" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371523 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="632fa4c3-b717-432c-8c5f-8d809f69c48b" volumeName="kubernetes.io/projected/632fa4c3-b717-432c-8c5f-8d809f69c48b-kube-api-access-8bpwm" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371554 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c33efa80-fbeb-438a-86e3-d22d7c12d3e9" volumeName="kubernetes.io/empty-dir/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-catalog-content" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371589 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="124ba199-b79a-4e5c-8512-cc0ae50f73c8" volumeName="kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-serving-cert" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371615 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2102e834-2b36-49de-a99e-c2dbe64d722f" volumeName="kubernetes.io/configmap/2102e834-2b36-49de-a99e-c2dbe64d722f-mcd-auth-proxy-config" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371638 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="727f20b6-19c7-45eb-a803-6898ecaeffd0" volumeName="kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371670 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="784b804f-6bcf-4cbd-a19e-9b1fa244354e" volumeName="kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371692 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8379aee6-f810-4e5f-b209-8f6cb5f87df0" volumeName="kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-metrics-client-ca" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371727 26425 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="1d481a79-f565-4c7f-84cc-207fc3117c23" volumeName="kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-etcd-serving-ca" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371770 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="655e4000-0ad4-4349-8c31-e0c952e4be30" volumeName="kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-config" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371798 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="071566ae-a9ae-4aa9-9dc3-38602363be72" volumeName="kubernetes.io/configmap/071566ae-a9ae-4aa9-9dc3-38602363be72-trusted-ca" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371830 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb153362-0abb-4aad-8975-532f6e72d032" volumeName="kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-sysctl-allowlist" seLinuxMountContext="" Feb 17 15:15:38.371807 master-0 kubenswrapper[26425]: I0217 15:15:38.371855 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6d23570-21d6-4b08-83fc-8b0827c25313" volumeName="kubernetes.io/projected/c6d23570-21d6-4b08-83fc-8b0827c25313-kube-api-access-czt92" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.371890 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" volumeName="kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-client-ca" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.371944 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="fce9579e-7383-421e-95dd-8f8b786817f9" volumeName="kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.371972 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c6b911d-8db2-48e8-bce9-d4bcde1f55a0" volumeName="kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-ovnkube-identity-cm" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372003 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf74b8c3-a5a6-4fb9-9d12-3a47c759f699" volumeName="kubernetes.io/configmap/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-telemetry-config" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372042 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76d3da23-3347-4a5c-b328-d92671897ecc" volumeName="kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-config" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372072 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" volumeName="kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-config" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372099 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61d90bf3-02df-48c8-b2ec-09a1653b0800" volumeName="kubernetes.io/secret/61d90bf3-02df-48c8-b2ec-09a1653b0800-serving-cert" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372123 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="626c4f7a-59ee-45da-9198-05dd2c42ac42" volumeName="kubernetes.io/configmap/626c4f7a-59ee-45da-9198-05dd2c42ac42-service-ca" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372151 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2d6e329-7ad8-4fc2-accc-66827f11743d" volumeName="kubernetes.io/projected/a2d6e329-7ad8-4fc2-accc-66827f11743d-kube-api-access-8q8jf" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372177 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da06cfcb-7c78-4022-96b1-d858853f5adc" volumeName="kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-auth-proxy-config" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372222 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22a30079-d7fc-49cf-882e-1c5022cb5bf6" volumeName="kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-kube-api-access-bh874" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372265 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31e31afc-79d5-46f4-9835-0fd11da9465f" volumeName="kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-env-overrides" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372291 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7307f70e-ee5b-4f81-8155-718a02c9efe7" volumeName="kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-images" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372315 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="784b804f-6bcf-4cbd-a19e-9b1fa244354e" volumeName="kubernetes.io/configmap/784b804f-6bcf-4cbd-a19e-9b1fa244354e-metrics-client-ca" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372340 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" volumeName="kubernetes.io/secret/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-serving-cert" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372364 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" volumeName="kubernetes.io/projected/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-kube-api-access-7nzlr" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372386 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" volumeName="kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-ca" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372411 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d481a79-f565-4c7f-84cc-207fc3117c23" volumeName="kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-encryption-config" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372498 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d481a79-f565-4c7f-84cc-207fc3117c23" volumeName="kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-audit" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372591 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9768ef3d-4f12-4303-98cb-56f8ebe05039" volumeName="kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-certs" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372622 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb94b2b6-21a9-41bb-b822-9406a3ebb1e9" volumeName="kubernetes.io/projected/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-kube-api-access-562gp" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372669 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14723cb7-2d96-42b7-b559-70386c4c841c" volumeName="kubernetes.io/projected/14723cb7-2d96-42b7-b559-70386c4c841c-kube-api-access-7lw7x" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372703 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3db03cef-d297-4bf7-8e52-dd0b18882d07" volumeName="kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-config" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372729 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94f5fac8-582e-44a3-8dd5-c4e6e80829ef" volumeName="kubernetes.io/empty-dir/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-catalog-content" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372755 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" volumeName="kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-config" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372781 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0c58265d-32fb-4cf0-97d8-6c9a5d37fad9" volumeName="kubernetes.io/projected/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-kube-api-access-gxjqf" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372825 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c435347a-ac01-46af-8192-9ef2d632bdfb" volumeName="kubernetes.io/configmap/c435347a-ac01-46af-8192-9ef2d632bdfb-metrics-client-ca" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372864 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c435347a-ac01-46af-8192-9ef2d632bdfb" volumeName="kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-tls" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372890 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="68954d1e-2147-4465-9817-a3c04cbc19b0" volumeName="kubernetes.io/projected/68954d1e-2147-4465-9817-a3c04cbc19b0-kube-api-access-4lwz4" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372913 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d317dcb-ea6a-4066-b197-5ee960dec01a" volumeName="kubernetes.io/secret/8d317dcb-ea6a-4066-b197-5ee960dec01a-metrics-tls" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372947 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc76384d-b288-4d30-bc77-f696b62a5f30" volumeName="kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.372982 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d481a79-f565-4c7f-84cc-207fc3117c23" volumeName="kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-config" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.373016 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65d9f008-7777-48fe-85fe-9d54a7bbcea9" volumeName="kubernetes.io/projected/65d9f008-7777-48fe-85fe-9d54a7bbcea9-kube-api-access-9g7zh" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.373039 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8379aee6-f810-4e5f-b209-8f6cb5f87df0" volumeName="kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.373160 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf74b8c3-a5a6-4fb9-9d12-3a47c759f699" volumeName="kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.374740 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c58265d-32fb-4cf0-97d8-6c9a5d37fad9" volumeName="kubernetes.io/configmap/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-config" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.374860 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c393109-8c98-4a73-be1a-608038e5d094" volumeName="kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.374903 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" volumeName="kubernetes.io/projected/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-kube-api-access-8xbnc" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.374920 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da06cfcb-7c78-4022-96b1-d858853f5adc" volumeName="kubernetes.io/projected/da06cfcb-7c78-4022-96b1-d858853f5adc-kube-api-access-xpsd7" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.374946 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d481a79-f565-4c7f-84cc-207fc3117c23" volumeName="kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-serving-cert" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.374964 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c393109-8c98-4a73-be1a-608038e5d094" volumeName="kubernetes.io/empty-dir/7c393109-8c98-4a73-be1a-608038e5d094-audit-log" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.374981 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a905fb6-17d4-413b-9107-859c804ce906" volumeName="kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-script-lib" seLinuxMountContext="" Feb 17 15:15:38.374959 master-0 kubenswrapper[26425]: I0217 15:15:38.375008 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="626c4f7a-59ee-45da-9198-05dd2c42ac42" volumeName="kubernetes.io/projected/626c4f7a-59ee-45da-9198-05dd2c42ac42-kube-api-access" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.375022 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="65d9f008-7777-48fe-85fe-9d54a7bbcea9" volumeName="kubernetes.io/secret/65d9f008-7777-48fe-85fe-9d54a7bbcea9-serving-cert" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.375136 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="68954d1e-2147-4465-9817-a3c04cbc19b0" volumeName="kubernetes.io/empty-dir/68954d1e-2147-4465-9817-a3c04cbc19b0-cache" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.375165 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad81b5bd-2f97-4e7e-a12b-746998fa59f2" volumeName="kubernetes.io/projected/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-kube-api-access-9t5jv" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.375188 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b58e9d93-7683-440d-a603-9543e5455490" volumeName="kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-webhook-cert" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.375494 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08e27254-e906-484a-b346-036f898be3ae" volumeName="kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.375574 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="124ba199-b79a-4e5c-8512-cc0ae50f73c8" volumeName="kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-audit-policies" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.375591 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="124ba199-b79a-4e5c-8512-cc0ae50f73c8" volumeName="kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-trusted-ca-bundle" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.375615 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba1306f7-029b-4d43-ba3c-5738da9148d6" volumeName="kubernetes.io/projected/ba1306f7-029b-4d43-ba3c-5738da9148d6-kube-api-access-7pn82" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.375630 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c435347a-ac01-46af-8192-9ef2d632bdfb" volumeName="kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.375755 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c97d328c-95b6-4511-aa90-531ab42b9653" volumeName="kubernetes.io/configmap/c97d328c-95b6-4511-aa90-531ab42b9653-cco-trusted-ca" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.375785 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" volumeName="kubernetes.io/secret/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-serving-cert" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.375806 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22a30079-d7fc-49cf-882e-1c5022cb5bf6" volumeName="kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-bound-sa-token" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.375831 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="7307f70e-ee5b-4f81-8155-718a02c9efe7" volumeName="kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cluster-baremetal-operator-tls" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.375907 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da06cfcb-7c78-4022-96b1-d858853f5adc" volumeName="kubernetes.io/secret/da06cfcb-7c78-4022-96b1-d858853f5adc-proxy-tls" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376000 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4b2b7830-6ee0-4d87-a57b-dc668de4b39a" volumeName="kubernetes.io/empty-dir/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-tmp" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376019 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="124ba199-b79a-4e5c-8512-cc0ae50f73c8" volumeName="kubernetes.io/projected/124ba199-b79a-4e5c-8512-cc0ae50f73c8-kube-api-access-dmp42" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376035 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d56f334-6c7b-4c92-9665-56300d44f9a3" volumeName="kubernetes.io/projected/6d56f334-6c7b-4c92-9665-56300d44f9a3-kube-api-access-k8ckv" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376084 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2d6e329-7ad8-4fc2-accc-66827f11743d" volumeName="kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-default-certificate" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376099 26425 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="553d4535-9985-47e2-83ee-8fcfb6035e7b" volumeName="kubernetes.io/projected/553d4535-9985-47e2-83ee-8fcfb6035e7b-kube-api-access" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376118 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7307f70e-ee5b-4f81-8155-718a02c9efe7" volumeName="kubernetes.io/projected/7307f70e-ee5b-4f81-8155-718a02c9efe7-kube-api-access-dzrmf" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376189 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75486ba2-6fde-456f-8846-2af67e58d585" volumeName="kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376205 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="833c8661-28ca-463a-ac61-6edb961056e3" volumeName="kubernetes.io/empty-dir/833c8661-28ca-463a-ac61-6edb961056e3-utilities" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376225 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc76384d-b288-4d30-bc77-f696b62a5f30" volumeName="kubernetes.io/projected/fc76384d-b288-4d30-bc77-f696b62a5f30-kube-api-access-lw6dc" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376272 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="655e4000-0ad4-4349-8c31-e0c952e4be30" volumeName="kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376292 26425 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="c435347a-ac01-46af-8192-9ef2d632bdfb" volumeName="kubernetes.io/projected/c435347a-ac01-46af-8192-9ef2d632bdfb-kube-api-access-j5w6f" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376424 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb94b2b6-21a9-41bb-b822-9406a3ebb1e9" volumeName="kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cni-binary-copy" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376441 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="257db04b-7203-4a1d-b3d4-bd4db258a3cc" volumeName="kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-profile-collector-cert" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376477 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b167b7b-2280-4c82-ac78-71c57aebe503" volumeName="kubernetes.io/secret/2b167b7b-2280-4c82-ac78-71c57aebe503-serving-cert" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376540 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8379aee6-f810-4e5f-b209-8f6cb5f87df0" volumeName="kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376555 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9768ef3d-4f12-4303-98cb-56f8ebe05039" volumeName="kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-node-bootstrap-token" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376632 26425 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="ba1306f7-029b-4d43-ba3c-5738da9148d6" volumeName="kubernetes.io/secret/ba1306f7-029b-4d43-ba3c-5738da9148d6-proxy-tls" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376647 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="124ba199-b79a-4e5c-8512-cc0ae50f73c8" volumeName="kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-encryption-config" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376729 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94f5fac8-582e-44a3-8dd5-c4e6e80829ef" volumeName="kubernetes.io/empty-dir/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-utilities" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376771 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a905fb6-17d4-413b-9107-859c804ce906" volumeName="kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-config" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376788 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d97ff4f-48eb-4d9f-9d60-3e09f0bde040" volumeName="kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376808 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" volumeName="kubernetes.io/projected/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-kube-api-access-spcf4" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376822 26425 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="fc216ba1-144a-4cc8-93db-85ab558a166a" volumeName="kubernetes.io/empty-dir/fc216ba1-144a-4cc8-93db-85ab558a166a-utilities" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376903 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc216ba1-144a-4cc8-93db-85ab558a166a" volumeName="kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376926 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50c51fe2-32aa-430f-8da0-7cf3b9519131" volumeName="kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-kube-api-access-8g48f" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376945 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4fd2c79d-1e10-4f09-8a33-c66598abc99a" volumeName="kubernetes.io/projected/4fd2c79d-1e10-4f09-8a33-c66598abc99a-kube-api-access-mgwfb" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.376969 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c393109-8c98-4a73-be1a-608038e5d094" volumeName="kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377051 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a905fb6-17d4-413b-9107-859c804ce906" volumeName="kubernetes.io/projected/9a905fb6-17d4-413b-9107-859c804ce906-kube-api-access-mgs5v" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377163 26425 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d97ff4f-48eb-4d9f-9d60-3e09f0bde040" volumeName="kubernetes.io/empty-dir/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-volume-directive-shadow" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377209 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b58e9d93-7683-440d-a603-9543e5455490" volumeName="kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-apiservice-cert" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377222 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e259b5a1-837b-4cde-85f7-cd5781af08bd" volumeName="kubernetes.io/configmap/e259b5a1-837b-4cde-85f7-cd5781af08bd-config" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377241 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3db03cef-d297-4bf7-8e52-dd0b18882d07" volumeName="kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-client-ca" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377256 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3db03cef-d297-4bf7-8e52-dd0b18882d07" volumeName="kubernetes.io/secret/3db03cef-d297-4bf7-8e52-dd0b18882d07-serving-cert" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377306 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="68954d1e-2147-4465-9817-a3c04cbc19b0" volumeName="kubernetes.io/projected/68954d1e-2147-4465-9817-a3c04cbc19b0-ca-certs" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377320 26425 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="76d3da23-3347-4a5c-b328-d92671897ecc" volumeName="kubernetes.io/projected/76d3da23-3347-4a5c-b328-d92671897ecc-kube-api-access-jhm88" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377359 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8385a176-0e12-47ef-862e-8331e6734b9c" volumeName="kubernetes.io/empty-dir/8385a176-0e12-47ef-862e-8331e6734b9c-snapshots" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377376 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8385a176-0e12-47ef-862e-8331e6734b9c" volumeName="kubernetes.io/projected/8385a176-0e12-47ef-862e-8331e6734b9c-kube-api-access-lnnxm" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377389 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c435347a-ac01-46af-8192-9ef2d632bdfb" volumeName="kubernetes.io/empty-dir/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-textfile" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377406 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3db03cef-d297-4bf7-8e52-dd0b18882d07" volumeName="kubernetes.io/projected/3db03cef-d297-4bf7-8e52-dd0b18882d07-kube-api-access-xrg27" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377421 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af61bda0-c7b4-489d-a671-eaa5299942fe" volumeName="kubernetes.io/projected/af61bda0-c7b4-489d-a671-eaa5299942fe-kube-api-access-jt7w4" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377434 
26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8646e5c-c2ce-48e6-b757-58044769f479" volumeName="kubernetes.io/configmap/c8646e5c-c2ce-48e6-b757-58044769f479-auth-proxy-config" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377508 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b167b7b-2280-4c82-ac78-71c57aebe503" volumeName="kubernetes.io/projected/2b167b7b-2280-4c82-ac78-71c57aebe503-kube-api-access" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377522 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c734c89-515e-4ff0-82d1-831ddaf0b99e" volumeName="kubernetes.io/projected/6c734c89-515e-4ff0-82d1-831ddaf0b99e-kube-api-access-rddwz" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377591 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8385a176-0e12-47ef-862e-8331e6734b9c" volumeName="kubernetes.io/secret/8385a176-0e12-47ef-862e-8331e6734b9c-serving-cert" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377667 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a905fb6-17d4-413b-9107-859c804ce906" volumeName="kubernetes.io/secret/9a905fb6-17d4-413b-9107-859c804ce906-ovn-node-metrics-cert" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377681 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2d6e329-7ad8-4fc2-accc-66827f11743d" volumeName="kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-metrics-certs" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377757 
26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdbde712-c8dd-4011-adcb-af895abce94c" volumeName="kubernetes.io/projected/cdbde712-c8dd-4011-adcb-af895abce94c-kube-api-access-9fj8w" seLinuxMountContext="" Feb 17 15:15:38.377734 master-0 kubenswrapper[26425]: I0217 15:15:38.377776 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" volumeName="kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-proxy-ca-bundles" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.377870 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="187af679-a062-4f41-81f2-33545f76febf" volumeName="kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-kube-api-access-jpgqg" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.377972 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="833c8661-28ca-463a-ac61-6edb961056e3" volumeName="kubernetes.io/empty-dir/833c8661-28ca-463a-ac61-6edb961056e3-catalog-content" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378007 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="626c4f7a-59ee-45da-9198-05dd2c42ac42" volumeName="kubernetes.io/secret/626c4f7a-59ee-45da-9198-05dd2c42ac42-serving-cert" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378032 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="655e4000-0ad4-4349-8c31-e0c952e4be30" volumeName="kubernetes.io/projected/655e4000-0ad4-4349-8c31-e0c952e4be30-kube-api-access-qf69t" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 
15:15:38.378050 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7307f70e-ee5b-4f81-8155-718a02c9efe7" volumeName="kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-config" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378079 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8385a176-0e12-47ef-862e-8331e6734b9c" volumeName="kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-trusted-ca-bundle" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378108 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4422676-9a70-4973-8299-7b40a66e9c96" volumeName="kubernetes.io/secret/b4422676-9a70-4973-8299-7b40a66e9c96-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378125 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" volumeName="kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-service-ca-bundle" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378148 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50c51fe2-32aa-430f-8da0-7cf3b9519131" volumeName="kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-ca-certs" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378172 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4fd2c79d-1e10-4f09-8a33-c66598abc99a" volumeName="kubernetes.io/secret/4fd2c79d-1e10-4f09-8a33-c66598abc99a-metrics-tls" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 
15:15:38.378188 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c393109-8c98-4a73-be1a-608038e5d094" volumeName="kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378214 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d97ff4f-48eb-4d9f-9d60-3e09f0bde040" volumeName="kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-tls" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378354 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6d23570-21d6-4b08-83fc-8b0827c25313" volumeName="kubernetes.io/configmap/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-trusted-ca" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378426 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" volumeName="kubernetes.io/projected/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-kube-api-access-jcb68" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378482 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33e819b0-5a3f-4c2d-9dc7-8b0231804cdb" volumeName="kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378524 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="784b804f-6bcf-4cbd-a19e-9b1fa244354e" volumeName="kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls" seLinuxMountContext="" Feb 17 15:15:38.386133 
master-0 kubenswrapper[26425]: I0217 15:15:38.378550 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94f5fac8-582e-44a3-8dd5-c4e6e80829ef" volumeName="kubernetes.io/projected/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-kube-api-access-cpmdw" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378588 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76d3da23-3347-4a5c-b328-d92671897ecc" volumeName="kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378630 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d97ff4f-48eb-4d9f-9d60-3e09f0bde040" volumeName="kubernetes.io/projected/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-api-access-4rcj2" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378677 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8379aee6-f810-4e5f-b209-8f6cb5f87df0" volumeName="kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378712 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d56f334-6c7b-4c92-9665-56300d44f9a3" volumeName="kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378735 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af61bda0-c7b4-489d-a671-eaa5299942fe" volumeName="kubernetes.io/secret/af61bda0-c7b4-489d-a671-eaa5299942fe-serving-cert" seLinuxMountContext="" Feb 17 
15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378764 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e259b5a1-837b-4cde-85f7-cd5781af08bd" volumeName="kubernetes.io/projected/e259b5a1-837b-4cde-85f7-cd5781af08bd-kube-api-access" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378786 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" volumeName="kubernetes.io/secret/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-serving-cert" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378814 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31e31afc-79d5-46f4-9835-0fd11da9465f" volumeName="kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-ovnkube-config" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378838 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="257db04b-7203-4a1d-b3d4-bd4db258a3cc" volumeName="kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378858 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c393109-8c98-4a73-be1a-608038e5d094" volumeName="kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378885 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c393109-8c98-4a73-be1a-608038e5d094" volumeName="kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles" 
seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378906 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c6b911d-8db2-48e8-bce9-d4bcde1f55a0" volumeName="kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-env-overrides" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378956 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b58e9d93-7683-440d-a603-9543e5455490" volumeName="kubernetes.io/empty-dir/b58e9d93-7683-440d-a603-9543e5455490-tmpfs" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.378983 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb153362-0abb-4aad-8975-532f6e72d032" volumeName="kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-binary-copy" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.379002 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="124ba199-b79a-4e5c-8512-cc0ae50f73c8" volumeName="kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-etcd-client" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.379051 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="801742a6-3735-4883-9676-e852dc4173d2" volumeName="kubernetes.io/projected/801742a6-3735-4883-9676-e852dc4173d2-kube-api-access-qxqt4" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.379077 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d317dcb-ea6a-4066-b197-5ee960dec01a" volumeName="kubernetes.io/projected/8d317dcb-ea6a-4066-b197-5ee960dec01a-kube-api-access-nwptc" 
seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.379099 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31e31afc-79d5-46f4-9835-0fd11da9465f" volumeName="kubernetes.io/projected/31e31afc-79d5-46f4-9835-0fd11da9465f-kube-api-access-jh2m4" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.379150 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdbde712-c8dd-4011-adcb-af895abce94c" volumeName="kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.379185 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb153362-0abb-4aad-8975-532f6e72d032" volumeName="kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-whereabouts-configmap" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.379222 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c33efa80-fbeb-438a-86e3-d22d7c12d3e9" volumeName="kubernetes.io/empty-dir/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-utilities" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.379246 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8379aee6-f810-4e5f-b209-8f6cb5f87df0" volumeName="kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.379265 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d97ff4f-48eb-4d9f-9d60-3e09f0bde040" 
volumeName="kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-metrics-client-ca" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.379330 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="071566ae-a9ae-4aa9-9dc3-38602363be72" volumeName="kubernetes.io/projected/071566ae-a9ae-4aa9-9dc3-38602363be72-kube-api-access-hrh2k" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.379374 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9768ef3d-4f12-4303-98cb-56f8ebe05039" volumeName="kubernetes.io/projected/9768ef3d-4f12-4303-98cb-56f8ebe05039-kube-api-access-tk6jm" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.379416 26425 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="187af679-a062-4f41-81f2-33545f76febf" volumeName="kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-bound-sa-token" seLinuxMountContext="" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.379444 26425 reconstruct.go:97] "Volume reconstruction finished" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.379481 26425 reconciler.go:26] "Reconciler: start to sync state" Feb 17 15:15:38.386133 master-0 kubenswrapper[26425]: I0217 15:15:38.385573 26425 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 17 15:15:38.391896 master-0 kubenswrapper[26425]: I0217 15:15:38.391813 26425 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 17 15:15:38.393732 master-0 kubenswrapper[26425]: I0217 15:15:38.393691 26425 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 17 15:15:38.393804 master-0 kubenswrapper[26425]: I0217 15:15:38.393759 26425 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 17 15:15:38.393804 master-0 kubenswrapper[26425]: I0217 15:15:38.393794 26425 kubelet.go:2335] "Starting kubelet main sync loop" Feb 17 15:15:38.393982 master-0 kubenswrapper[26425]: E0217 15:15:38.393870 26425 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 17 15:15:38.395506 master-0 kubenswrapper[26425]: I0217 15:15:38.395441 26425 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 15:15:38.403962 master-0 kubenswrapper[26425]: I0217 15:15:38.403906 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/3.log" Feb 17 15:15:38.404615 master-0 kubenswrapper[26425]: I0217 15:15:38.404533 26425 generic.go:334] "Generic (PLEG): container finished" podID="22a30079-d7fc-49cf-882e-1c5022cb5bf6" containerID="e6e0c56b68d88e13c98f68fd19514701fbb95e0c18c904b865481a0f5ad00f23" exitCode=1 Feb 17 15:15:38.417659 master-0 kubenswrapper[26425]: I0217 15:15:38.417597 26425 generic.go:334] "Generic (PLEG): container finished" podID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerID="860736c555e36eb357d7747028619f7c30730d9978a45e3a5c0a43cdd4bd9ba8" exitCode=0 Feb 17 15:15:38.417659 master-0 kubenswrapper[26425]: I0217 15:15:38.417641 26425 generic.go:334] "Generic (PLEG): container finished" podID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerID="fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591" exitCode=0 Feb 17 15:15:38.419979 master-0 kubenswrapper[26425]: I0217 15:15:38.419944 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-node-identity_network-node-identity-xwftw_7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/approver/0.log" Feb 17 15:15:38.420487 master-0 kubenswrapper[26425]: I0217 15:15:38.420418 26425 generic.go:334] "Generic (PLEG): container finished" podID="7c6b911d-8db2-48e8-bce9-d4bcde1f55a0" containerID="55d3b1057ac7a6ad2c1bad42aa92f8880f4cec28c612f7db8db1627fa4374902" exitCode=1 Feb 17 15:15:38.426645 master-0 kubenswrapper[26425]: I0217 15:15:38.426579 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/5.log" Feb 17 15:15:38.427524 master-0 kubenswrapper[26425]: I0217 15:15:38.427415 26425 generic.go:334] "Generic (PLEG): container finished" podID="14723cb7-2d96-42b7-b559-70386c4c841c" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade" exitCode=1 Feb 17 15:15:38.429211 master-0 kubenswrapper[26425]: I0217 15:15:38.429153 26425 generic.go:334] "Generic (PLEG): container finished" podID="2a162205-f111-49b4-9f46-0b40b6184336" containerID="1e7b4529083cffeef5003957eb03a7afcc09cde5e715114a3708977a54e19b17" exitCode=0 Feb 17 15:15:38.431340 master-0 kubenswrapper[26425]: I0217 15:15:38.431282 26425 generic.go:334] "Generic (PLEG): container finished" podID="801742a6-3735-4883-9676-e852dc4173d2" containerID="acb11f90f31b36431471e58a5606b8c3af358cc8197512729e33f3481e310e60" exitCode=0 Feb 17 15:15:38.434237 master-0 kubenswrapper[26425]: I0217 15:15:38.434165 26425 generic.go:334] "Generic (PLEG): container finished" podID="952766c3a88fd12345a552f1277199f9" containerID="091e8f02d5aa015a7796a6787006d66729863d826124745811b4e05f467eb821" exitCode=0 Feb 17 15:15:38.445938 master-0 kubenswrapper[26425]: I0217 15:15:38.445708 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/2.log" Feb 17 15:15:38.445938 master-0 kubenswrapper[26425]: I0217 15:15:38.445783 26425 generic.go:334] "Generic (PLEG): container finished" podID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerID="2e491cb15463a078f03468285bf55e7f054cca1c528834a6f29b9effbdeb75f4" exitCode=255 Feb 17 15:15:38.449594 master-0 kubenswrapper[26425]: I0217 15:15:38.449528 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-4n2ls_50c51fe2-32aa-430f-8da0-7cf3b9519131/manager/0.log" Feb 17 15:15:38.449736 master-0 kubenswrapper[26425]: I0217 15:15:38.449626 26425 generic.go:334] "Generic (PLEG): container finished" podID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerID="c1a7bb61a118b809395aec1f33f427a3425dcd9dc3136b6302e76b1e5de619e7" exitCode=1 Feb 17 15:15:38.452397 master-0 kubenswrapper[26425]: I0217 15:15:38.452282 26425 generic.go:334] "Generic (PLEG): container finished" podID="b0f95c87-6a4a-44f2-b6d4-18f167ea430f" containerID="0782c7f0d5ddfa48d6cd6d3f38b88b85eb9375711ddb12c97f5638b11c8924d5" exitCode=0 Feb 17 15:15:38.455470 master-0 kubenswrapper[26425]: I0217 15:15:38.455408 26425 generic.go:334] "Generic (PLEG): container finished" podID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerID="b67b9db47d025278eedfe7f04574ddab8f98126aef0c22b6f402dd2396b510a8" exitCode=0 Feb 17 15:15:38.455470 master-0 kubenswrapper[26425]: I0217 15:15:38.455436 26425 generic.go:334] "Generic (PLEG): container finished" podID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerID="dbd9a864617d9861c878175db961027136a5f024e25d1d1a8f2532ea54b002da" exitCode=0 Feb 17 15:15:38.457978 master-0 kubenswrapper[26425]: I0217 15:15:38.457666 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/2.log" Feb 17 15:15:38.457978 master-0 kubenswrapper[26425]: I0217 15:15:38.457704 26425 generic.go:334] "Generic (PLEG): container finished" podID="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" containerID="47a0663eadceb8ac2b92b936021f5bf1e155eb2c91b070318a1766570bc56359" exitCode=255 Feb 17 15:15:38.459867 master-0 kubenswrapper[26425]: I0217 15:15:38.459793 26425 generic.go:334] "Generic (PLEG): container finished" podID="9460ca0802075a8a6a10d7b3e6052c4d" containerID="2a42298516500c9bfa084c410231d2a27dee7fceed15779f0b27fd9d1349b2b0" exitCode=0 Feb 17 15:15:38.463603 master-0 kubenswrapper[26425]: I0217 15:15:38.463562 26425 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="2128d8d38323586ed6d9716f5c0be6569fe807cb8c9948bb819a8f728039d87d" exitCode=0 Feb 17 15:15:38.472370 master-0 kubenswrapper[26425]: I0217 15:15:38.472303 26425 generic.go:334] "Generic (PLEG): container finished" podID="833c8661-28ca-463a-ac61-6edb961056e3" containerID="366ce4a350e8c8c3fa7539745bb67d208d67dd372e70a046a0ec8b361945197b" exitCode=0 Feb 17 15:15:38.472587 master-0 kubenswrapper[26425]: I0217 15:15:38.472378 26425 generic.go:334] "Generic (PLEG): container finished" podID="833c8661-28ca-463a-ac61-6edb961056e3" containerID="e6161530d918faa82eec69639876fbb5e67758f6bda51a345c33a6aeb147dce2" exitCode=0 Feb 17 15:15:38.474715 master-0 kubenswrapper[26425]: I0217 15:15:38.474670 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/2.log" Feb 17 15:15:38.474715 master-0 kubenswrapper[26425]: I0217 15:15:38.474712 26425 generic.go:334] "Generic (PLEG): container finished" podID="65d9f008-7777-48fe-85fe-9d54a7bbcea9" 
containerID="29887de882fd8a3a22e87156cef67aeb00ac494c3b04550882c5426a5a9c25ec" exitCode=255 Feb 17 15:15:38.476959 master-0 kubenswrapper[26425]: I0217 15:15:38.476904 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_580b240a-a806-454d-ab19-8f193a8d9ca2/installer/0.log" Feb 17 15:15:38.477206 master-0 kubenswrapper[26425]: I0217 15:15:38.477028 26425 generic.go:334] "Generic (PLEG): container finished" podID="580b240a-a806-454d-ab19-8f193a8d9ca2" containerID="dcdeeb6985f895a6d59b345be94e95ea3c9c558f1f7b7901594a31fa91429102" exitCode=1 Feb 17 15:15:38.478555 master-0 kubenswrapper[26425]: I0217 15:15:38.478513 26425 generic.go:334] "Generic (PLEG): container finished" podID="5de71cc1-08c3-4295-ac86-745c9d4fbb46" containerID="107e3fd578a275c186183eec1ef31542c82377b88843f3c540b45cab25720060" exitCode=0 Feb 17 15:15:38.480869 master-0 kubenswrapper[26425]: I0217 15:15:38.480812 26425 generic.go:334] "Generic (PLEG): container finished" podID="187af679-a062-4f41-81f2-33545f76febf" containerID="8058b275e263538c079da0d8c430b578e1243d25628fc693b056f6c40e1434b1" exitCode=0 Feb 17 15:15:38.488765 master-0 kubenswrapper[26425]: I0217 15:15:38.488478 26425 generic.go:334] "Generic (PLEG): container finished" podID="0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" containerID="b19e391b0150ed3b7b034d7cfb9dec3399203df0724feccc18bf70218b47fb07" exitCode=0 Feb 17 15:15:38.490577 master-0 kubenswrapper[26425]: I0217 15:15:38.490530 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/2.log" Feb 17 15:15:38.490577 master-0 kubenswrapper[26425]: I0217 15:15:38.490573 26425 generic.go:334] "Generic (PLEG): container finished" podID="e259b5a1-837b-4cde-85f7-cd5781af08bd" containerID="c37b7a8b6b89d90619e0434b3f19d1c552551ee3029bb3ef42107c3c450c9cb1" exitCode=255 Feb 17 15:15:38.494161 
master-0 kubenswrapper[26425]: E0217 15:15:38.494078 26425 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 17 15:15:38.495229 master-0 kubenswrapper[26425]: I0217 15:15:38.495199 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_03da22e3-956d-4c8a-bfd6-c1778e5d627c/installer/0.log" Feb 17 15:15:38.495339 master-0 kubenswrapper[26425]: I0217 15:15:38.495231 26425 generic.go:334] "Generic (PLEG): container finished" podID="03da22e3-956d-4c8a-bfd6-c1778e5d627c" containerID="848358e86030aaad08f0f93cbd72a6dd3c9d1bf771c63059da694d462594c54f" exitCode=1 Feb 17 15:15:38.499524 master-0 kubenswrapper[26425]: I0217 15:15:38.498240 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/2.log" Feb 17 15:15:38.499524 master-0 kubenswrapper[26425]: I0217 15:15:38.498333 26425 generic.go:334] "Generic (PLEG): container finished" podID="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" containerID="81aaf4a8e92ad8167ce2d8a4500268568ecd4d12b11466d397ae290644672b32" exitCode=255 Feb 17 15:15:38.504181 master-0 kubenswrapper[26425]: I0217 15:15:38.504149 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-t7n5b_33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/package-server-manager/0.log" Feb 17 15:15:38.504861 master-0 kubenswrapper[26425]: I0217 15:15:38.504606 26425 generic.go:334] "Generic (PLEG): container finished" podID="33e819b0-5a3f-4c2d-9dc7-8b0231804cdb" containerID="76d6fd0b45765a0b596669cf9b7b85cd807449a57c73b14e34163f91a2995908" exitCode=1 Feb 17 15:15:38.511233 master-0 kubenswrapper[26425]: I0217 15:15:38.511166 26425 generic.go:334] "Generic (PLEG): container finished" 
podID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerID="43796d7d27cac90e31c0e4d2ee9bf43eddeb31538289e18b8ee843798af029b2" exitCode=0 Feb 17 15:15:38.514037 master-0 kubenswrapper[26425]: I0217 15:15:38.513981 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-ff6c9b66-k8xp8_071566ae-a9ae-4aa9-9dc3-38602363be72/cluster-node-tuning-operator/0.log" Feb 17 15:15:38.514149 master-0 kubenswrapper[26425]: I0217 15:15:38.514048 26425 generic.go:334] "Generic (PLEG): container finished" podID="071566ae-a9ae-4aa9-9dc3-38602363be72" containerID="8a4a98b1318c509e5f82636085aeb117a7034201fd28d56b542c5883530a6144" exitCode=1 Feb 17 15:15:38.525303 master-0 kubenswrapper[26425]: I0217 15:15:38.525246 26425 generic.go:334] "Generic (PLEG): container finished" podID="31e31afc-79d5-46f4-9835-0fd11da9465f" containerID="a532d001ee07ff8e8b23a5da938b61904c6c24e314b07a548890529a67528fab" exitCode=0 Feb 17 15:15:38.528094 master-0 kubenswrapper[26425]: I0217 15:15:38.528072 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-jdfsm_68954d1e-2147-4465-9817-a3c04cbc19b0/manager/0.log" Feb 17 15:15:38.528741 master-0 kubenswrapper[26425]: I0217 15:15:38.528677 26425 generic.go:334] "Generic (PLEG): container finished" podID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerID="e039cb4463938f81d7404a930ef7ab4b00269f6ed6b9151f252951ea9d381dc4" exitCode=1 Feb 17 15:15:38.531228 master-0 kubenswrapper[26425]: I0217 15:15:38.531095 26425 generic.go:334] "Generic (PLEG): container finished" podID="69b452fc-5e99-4947-a722-e47a602ac144" containerID="6b14f00d7fcb44fb3296b9acab65074a4551627d03279119eef48d40dd8b3ddd" exitCode=0 Feb 17 15:15:38.535053 master-0 kubenswrapper[26425]: I0217 15:15:38.535014 26425 generic.go:334] "Generic (PLEG): container finished" podID="d3daf534-9a77-49c6-964f-d402c5d5a2ac" 
containerID="30149bc76c51652722af3b42f468490ae630728bcc0813cbee77856ab297e313" exitCode=0 Feb 17 15:15:38.539073 master-0 kubenswrapper[26425]: I0217 15:15:38.539033 26425 generic.go:334] "Generic (PLEG): container finished" podID="ee5899ff-327d-4944-b3ae-84d82973d0a5" containerID="3c59779e2c3acceff9a6741b9ce7f2f36e0bae77e413da5b192e5056ce1e9f29" exitCode=0 Feb 17 15:15:38.544247 master-0 kubenswrapper[26425]: I0217 15:15:38.544192 26425 generic.go:334] "Generic (PLEG): container finished" podID="c435347a-ac01-46af-8192-9ef2d632bdfb" containerID="ad81a3d8018f32fa460ffaba8c0d9ddd5cc3830a37ff5ffabe629586df64d1c4" exitCode=0 Feb 17 15:15:38.548991 master-0 kubenswrapper[26425]: I0217 15:15:38.548949 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/2.log" Feb 17 15:15:38.549078 master-0 kubenswrapper[26425]: I0217 15:15:38.549003 26425 generic.go:334] "Generic (PLEG): container finished" podID="129dba1e-73df-4ea4-96c0-3eba78d568ba" containerID="39e5d190c1de962c17b93f9f892d9c95fb301c2b359b235051f10e8c679da55c" exitCode=1 Feb 17 15:15:38.551196 master-0 kubenswrapper[26425]: I0217 15:15:38.551168 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/2.log" Feb 17 15:15:38.551264 master-0 kubenswrapper[26425]: I0217 15:15:38.551211 26425 generic.go:334] "Generic (PLEG): container finished" podID="553d4535-9985-47e2-83ee-8fcfb6035e7b" containerID="13fd27ae7e51b2ce5e96bcf2c8231506a7b48822721ae68c680d8a96bd1e5103" exitCode=255 Feb 17 15:15:38.552940 master-0 kubenswrapper[26425]: I0217 15:15:38.552897 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 17 15:15:38.553378 master-0 kubenswrapper[26425]: I0217 15:15:38.553351 26425 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="7ee371ff3fea654567b16adfcbd47a6ebbd168a2f1e33c4562b559cfe498844a" exitCode=1 Feb 17 15:15:38.553431 master-0 kubenswrapper[26425]: I0217 15:15:38.553378 26425 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="b7bba1848d8e5849cd7385799efab8edc5b4febf88a3e8ee8efae1fdf0ca6b20" exitCode=0 Feb 17 15:15:38.559758 master-0 kubenswrapper[26425]: I0217 15:15:38.559723 26425 generic.go:334] "Generic (PLEG): container finished" podID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerID="3b54e0904c922403e7243ecec6e01879618fe54346e8502751862a4c275c3a59" exitCode=0 Feb 17 15:15:38.562936 master-0 kubenswrapper[26425]: I0217 15:15:38.562900 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/2.log" Feb 17 15:15:38.563001 master-0 kubenswrapper[26425]: I0217 15:15:38.562953 26425 generic.go:334] "Generic (PLEG): container finished" podID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerID="533491bcdd7a1e81be78b60edc3ff96d870551db82df44a567112342369f625f" exitCode=255 Feb 17 15:15:38.566620 master-0 kubenswrapper[26425]: I0217 15:15:38.566591 26425 generic.go:334] "Generic (PLEG): container finished" podID="94f5fac8-582e-44a3-8dd5-c4e6e80829ef" containerID="27d6533353fb312399276ec154189748ef75e2ff2e683e4077e0613293d79e27" exitCode=0 Feb 17 15:15:38.566620 master-0 kubenswrapper[26425]: I0217 15:15:38.566617 26425 generic.go:334] "Generic (PLEG): container finished" podID="94f5fac8-582e-44a3-8dd5-c4e6e80829ef" 
containerID="ce87d71e88525ce7001016bad4c33c6d78f8709a4b105679be6b276fa78e4ee0" exitCode=0 Feb 17 15:15:38.568539 master-0 kubenswrapper[26425]: I0217 15:15:38.568512 26425 generic.go:334] "Generic (PLEG): container finished" podID="8385a176-0e12-47ef-862e-8331e6734b9c" containerID="4adf8d0f12db14b67c44e524b550b78d1fa8f334eecf810d58480ad559d615cc" exitCode=0 Feb 17 15:15:38.578539 master-0 kubenswrapper[26425]: I0217 15:15:38.578503 26425 generic.go:334] "Generic (PLEG): container finished" podID="9a905fb6-17d4-413b-9107-859c804ce906" containerID="4af044cd84dfd56b4c3319dc9513fdcbc730d3ab6bf935acd230ad188ae43052" exitCode=0 Feb 17 15:15:38.585856 master-0 kubenswrapper[26425]: I0217 15:15:38.585817 26425 generic.go:334] "Generic (PLEG): container finished" podID="1d481a79-f565-4c7f-84cc-207fc3117c23" containerID="2f2131dad98f27e1c73aa268ad99c1866a1a7604c47baa9d4290fb47581335fc" exitCode=0 Feb 17 15:15:38.590307 master-0 kubenswrapper[26425]: I0217 15:15:38.590264 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-tckph_0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/kube-storage-version-migrator-operator/2.log" Feb 17 15:15:38.590376 master-0 kubenswrapper[26425]: I0217 15:15:38.590325 26425 generic.go:334] "Generic (PLEG): container finished" podID="0c58265d-32fb-4cf0-97d8-6c9a5d37fad9" containerID="f39a2941da8acf9c022d9ee8fee7bd53fe9f2ec2201845d6f776f31736d87bf2" exitCode=255 Feb 17 15:15:38.603561 master-0 kubenswrapper[26425]: I0217 15:15:38.603517 26425 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="b12f57b0bcc09e05fc64e8bd7a3e3439eada3a066486077463244aa7f48a9765" exitCode=0 Feb 17 15:15:38.603671 master-0 kubenswrapper[26425]: I0217 15:15:38.603563 26425 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" 
containerID="58ed4f24a4a8563ec3660532e43504b78aecdeaa56673d4b14d15679424a7551" exitCode=0 Feb 17 15:15:38.603671 master-0 kubenswrapper[26425]: I0217 15:15:38.603583 26425 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="921ee0fd3551059043b76ac59a478c682da16c6ee7724deecc9c4ab0ac65da91" exitCode=0 Feb 17 15:15:38.603671 master-0 kubenswrapper[26425]: I0217 15:15:38.603600 26425 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="a8fe5731cc729bce660d47070861b2907343fcae8bee470838edf68c6e2b5e34" exitCode=0 Feb 17 15:15:38.603671 master-0 kubenswrapper[26425]: I0217 15:15:38.603619 26425 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="ecd77d78fcca655bc8210302308e24b74646b466ebece2fff52e85f8b57c4842" exitCode=0 Feb 17 15:15:38.603671 master-0 kubenswrapper[26425]: I0217 15:15:38.603632 26425 generic.go:334] "Generic (PLEG): container finished" podID="fb153362-0abb-4aad-8975-532f6e72d032" containerID="2f86c60a93c3453ced4f5b52ce187e665f2ac8baeed7a329b64029f9d992f515" exitCode=0 Feb 17 15:15:38.606439 master-0 kubenswrapper[26425]: I0217 15:15:38.606409 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-l24cg_4fd2c79d-1e10-4f09-8a33-c66598abc99a/network-operator/1.log" Feb 17 15:15:38.606514 master-0 kubenswrapper[26425]: I0217 15:15:38.606486 26425 generic.go:334] "Generic (PLEG): container finished" podID="4fd2c79d-1e10-4f09-8a33-c66598abc99a" containerID="6d9a92eb2e644f956d98f7c0c8da65baf4f27d9eba13c8c64b77e173d1e323c4" exitCode=255 Feb 17 15:15:38.609101 master-0 kubenswrapper[26425]: I0217 15:15:38.609076 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/1.log" Feb 17 
15:15:38.609166 master-0 kubenswrapper[26425]: I0217 15:15:38.609128 26425 generic.go:334] "Generic (PLEG): container finished" podID="af61bda0-c7b4-489d-a671-eaa5299942fe" containerID="398a6ec9ab16d8c9b51a94b166012be81bd6e66e2c357cd186d8526d7f9bb69c" exitCode=255 Feb 17 15:15:38.614376 master-0 kubenswrapper[26425]: I0217 15:15:38.614328 26425 generic.go:334] "Generic (PLEG): container finished" podID="6c734c89-515e-4ff0-82d1-831ddaf0b99e" containerID="db0dcecfe2a042268864f0d7f4d56cbdc089e71bde33d4f68886ce775e3eeb52" exitCode=0 Feb 17 15:15:38.614376 master-0 kubenswrapper[26425]: I0217 15:15:38.614365 26425 generic.go:334] "Generic (PLEG): container finished" podID="6c734c89-515e-4ff0-82d1-831ddaf0b99e" containerID="e00b7f9ba119fe3dfcee010018caac115fb3546638de62f638b07484db483416" exitCode=0 Feb 17 15:15:38.614376 master-0 kubenswrapper[26425]: I0217 15:15:38.614379 26425 generic.go:334] "Generic (PLEG): container finished" podID="6c734c89-515e-4ff0-82d1-831ddaf0b99e" containerID="71bdfb60886bbb8d8fa44c7be910c5770371e11fcb5309d4a7d66f5e45dddf82" exitCode=0 Feb 17 15:15:38.620597 master-0 kubenswrapper[26425]: I0217 15:15:38.620560 26425 generic.go:334] "Generic (PLEG): container finished" podID="d5655115-c223-42ed-a93d-9d609e55c901" containerID="a7a559907a49f4d8137e14ad794efe3aea73d7c66ce8d886c715988a380ea29f" exitCode=0 Feb 17 15:15:38.624247 master-0 kubenswrapper[26425]: I0217 15:15:38.624206 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-wcpf8_2b167b7b-2280-4c82-ac78-71c57aebe503/kube-scheduler-operator-container/1.log" Feb 17 15:15:38.624326 master-0 kubenswrapper[26425]: I0217 15:15:38.624270 26425 generic.go:334] "Generic (PLEG): container finished" podID="2b167b7b-2280-4c82-ac78-71c57aebe503" containerID="477671fff24fa6c32a024908ab3cc22818f79df79458186eb17cd6a91eb44b4f" exitCode=255 Feb 17 15:15:38.626746 master-0 kubenswrapper[26425]: I0217 15:15:38.626693 
26425 generic.go:334] "Generic (PLEG): container finished" podID="fc216ba1-144a-4cc8-93db-85ab558a166a" containerID="dff43540c3d3c78b976c453950a947c70e5ecf684af153fa53013b3b0706b588" exitCode=0 Feb 17 15:15:38.626746 master-0 kubenswrapper[26425]: I0217 15:15:38.626722 26425 generic.go:334] "Generic (PLEG): container finished" podID="fc216ba1-144a-4cc8-93db-85ab558a166a" containerID="20e73b882d712a2eff1c90da1b92bbca3203e89b488f6982191f5a6e45f5694f" exitCode=0 Feb 17 15:15:38.631013 master-0 kubenswrapper[26425]: I0217 15:15:38.630961 26425 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="bafb1d40abea56e15a55f39238f52822a8e7d4c344f770507c71ed614feff320" exitCode=0 Feb 17 15:15:38.631013 master-0 kubenswrapper[26425]: I0217 15:15:38.630996 26425 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="af8466a0f113f0fd847f0bfc35cfb14199d76e2d0ce6a9816135658a53c788cd" exitCode=0 Feb 17 15:15:38.631013 master-0 kubenswrapper[26425]: I0217 15:15:38.631005 26425 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="d66ebdf4bf1f41618550520db8e8e13eb193e9411ec23799b8b482aae939538d" exitCode=0 Feb 17 15:15:38.635019 master-0 kubenswrapper[26425]: I0217 15:15:38.634981 26425 generic.go:334] "Generic (PLEG): container finished" podID="c33efa80-fbeb-438a-86e3-d22d7c12d3e9" containerID="b4983b136a273fbed3a16f2bc55aeaf26026f904d63f46d8bea39f01aefc2517" exitCode=0 Feb 17 15:15:38.635019 master-0 kubenswrapper[26425]: I0217 15:15:38.635007 26425 generic.go:334] "Generic (PLEG): container finished" podID="c33efa80-fbeb-438a-86e3-d22d7c12d3e9" containerID="699c72ab46ee0eb32b4612336334e94bd1b80ff4aefacb6b8eb9094947e725a5" exitCode=0 Feb 17 15:15:38.639413 master-0 kubenswrapper[26425]: I0217 15:15:38.639371 26425 generic.go:334] "Generic (PLEG): container finished" podID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" 
containerID="cb6f158d1d6f36179663edca7ac4c45ccbc5d1b74a343aa83cc519a613a49048" exitCode=0 Feb 17 15:15:38.694294 master-0 kubenswrapper[26425]: E0217 15:15:38.694252 26425 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 17 15:15:38.810509 master-0 kubenswrapper[26425]: I0217 15:15:38.810439 26425 manager.go:324] Recovery completed Feb 17 15:15:38.896936 master-0 kubenswrapper[26425]: I0217 15:15:38.896863 26425 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 17 15:15:38.896936 master-0 kubenswrapper[26425]: I0217 15:15:38.896913 26425 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 17 15:15:38.897173 master-0 kubenswrapper[26425]: I0217 15:15:38.896950 26425 state_mem.go:36] "Initialized new in-memory state store" Feb 17 15:15:38.897361 master-0 kubenswrapper[26425]: I0217 15:15:38.897286 26425 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 17 15:15:38.897361 master-0 kubenswrapper[26425]: I0217 15:15:38.897316 26425 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 17 15:15:38.897361 master-0 kubenswrapper[26425]: I0217 15:15:38.897358 26425 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 17 15:15:38.897527 master-0 kubenswrapper[26425]: I0217 15:15:38.897375 26425 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 17 15:15:38.897527 master-0 kubenswrapper[26425]: I0217 15:15:38.897392 26425 policy_none.go:49] "None policy: Start" Feb 17 15:15:38.901998 master-0 kubenswrapper[26425]: I0217 15:15:38.901954 26425 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 17 15:15:38.902061 master-0 kubenswrapper[26425]: I0217 15:15:38.902009 26425 state_mem.go:35] "Initializing new in-memory state store" Feb 17 15:15:38.902395 master-0 kubenswrapper[26425]: I0217 15:15:38.902328 26425 state_mem.go:75] "Updated machine memory state" Feb 17 15:15:38.902395 master-0 kubenswrapper[26425]: 
I0217 15:15:38.902356 26425 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 17 15:15:38.924709 master-0 kubenswrapper[26425]: I0217 15:15:38.924662 26425 manager.go:334] "Starting Device Plugin manager" Feb 17 15:15:38.924767 master-0 kubenswrapper[26425]: I0217 15:15:38.924726 26425 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 17 15:15:38.924767 master-0 kubenswrapper[26425]: I0217 15:15:38.924744 26425 server.go:79] "Starting device plugin registration server" Feb 17 15:15:38.925197 master-0 kubenswrapper[26425]: I0217 15:15:38.925168 26425 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 17 15:15:38.925241 master-0 kubenswrapper[26425]: I0217 15:15:38.925191 26425 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 17 15:15:38.925401 master-0 kubenswrapper[26425]: I0217 15:15:38.925372 26425 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 17 15:15:38.925503 master-0 kubenswrapper[26425]: I0217 15:15:38.925481 26425 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 17 15:15:38.925503 master-0 kubenswrapper[26425]: I0217 15:15:38.925498 26425 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 17 15:15:39.025596 master-0 kubenswrapper[26425]: I0217 15:15:39.025477 26425 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:15:39.028894 master-0 kubenswrapper[26425]: I0217 15:15:39.028824 26425 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:15:39.028990 master-0 kubenswrapper[26425]: I0217 15:15:39.028912 26425 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:15:39.028990 master-0 
kubenswrapper[26425]: I0217 15:15:39.028934 26425 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:15:39.029144 master-0 kubenswrapper[26425]: I0217 15:15:39.029114 26425 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 17 15:15:39.035385 master-0 kubenswrapper[26425]: E0217 15:15:39.035351 26425 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Feb 17 15:15:39.095152 master-0 kubenswrapper[26425]: I0217 15:15:39.095019 26425 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Feb 17 15:15:39.096047 master-0 kubenswrapper[26425]: I0217 15:15:39.096009 26425 scope.go:117] "RemoveContainer" containerID="fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591" Feb 17 15:15:39.096206 master-0 kubenswrapper[26425]: I0217 15:15:39.096170 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79ea29fc08e254fc3e14a364622e4facf6b96ac258189e8fa32888318e699341" Feb 17 15:15:39.096341 master-0 kubenswrapper[26425]: I0217 15:15:39.096321 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52fdc1dd27ec41c605dddba64c8150b4679f17e771419dec6733185ac88edf76" Feb 17 15:15:39.096641 master-0 kubenswrapper[26425]: I0217 15:15:39.096534 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"5591dc378b699313a005026d26c38a2b4e16d14b25114eea56b910683dfe3933"} Feb 17 15:15:39.096802 master-0 kubenswrapper[26425]: I0217 15:15:39.096779 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"f916d77fcaa30da997b385ef7ac42b673154c0b050a34bbee0b669498d494e0d"} Feb 17 15:15:39.096893 master-0 kubenswrapper[26425]: I0217 15:15:39.096876 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"21c7989a4696fed50634740602b415534cf6eda5f4caedd9c5df524bd3173387"} Feb 17 15:15:39.096987 master-0 kubenswrapper[26425]: I0217 15:15:39.096968 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerDied","Data":"091e8f02d5aa015a7796a6787006d66729863d826124745811b4e05f467eb821"} Feb 17 15:15:39.097107 master-0 kubenswrapper[26425]: I0217 15:15:39.097085 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"c5835c841de8851cc594c071b21f8e95885283a9272de7eff7fcffb6067e8c9a"} Feb 17 15:15:39.097220 master-0 kubenswrapper[26425]: I0217 15:15:39.097198 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"35fe638f6458381f305a5bf70c5f72c08dfe6647c1374e528fdd2425345b92ec"} Feb 17 15:15:39.097339 master-0 kubenswrapper[26425]: I0217 15:15:39.097318 26425 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"586cd7bd6a1810c0723f91d86622f61df00ac6288e65656c44c07b725975aa6c"} Feb 17 15:15:39.097484 master-0 kubenswrapper[26425]: I0217 15:15:39.097438 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0"} Feb 17 15:15:39.097649 master-0 kubenswrapper[26425]: I0217 15:15:39.097628 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"a93de2c6661a7a022268979fd5a510b5d956da3fa477eae77c55cc327249aabd"} Feb 17 15:15:39.097740 master-0 kubenswrapper[26425]: I0217 15:15:39.097725 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"bdb8ad9bd5f944be0c16716ab7cf723ba4fecb8874a24d8035e247bed4275d02"} Feb 17 15:15:39.097895 master-0 kubenswrapper[26425]: I0217 15:15:39.097875 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bb1dadfa9fa746e498f74fe7c1710620a7f822dde2a54f2002cb48a072a2427" Feb 17 15:15:39.097989 master-0 kubenswrapper[26425]: I0217 15:15:39.097973 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"68a438a4e14f80804f842c0c44dfda76c0251a3c52afe081bbd14694a703898a"} Feb 17 15:15:39.098072 master-0 kubenswrapper[26425]: I0217 15:15:39.098058 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"0a6f90db7355282c99c29dbf0363e0633a9d55c0e8f232d859147cef7d241a54"} Feb 17 15:15:39.098158 master-0 kubenswrapper[26425]: I0217 15:15:39.098143 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"88cbd41012314cb9ee211332196a857cc4bf4c35b6149a5c3069d9a70f29b51a"} Feb 17 15:15:39.098298 master-0 kubenswrapper[26425]: I0217 15:15:39.098282 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"0f85b3342f5b9ee3681b487c6f9af1503246e3aa95e4fcb3fbc34dc5c76ae7fa"} Feb 17 15:15:39.098379 master-0 kubenswrapper[26425]: I0217 15:15:39.098365 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"39d90e2b00141a0c491cc3ec8392a600a6a01595195a3aac176f6c4f99d06ad8"} Feb 17 15:15:39.098495 master-0 kubenswrapper[26425]: I0217 15:15:39.098448 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerDied","Data":"2128d8d38323586ed6d9716f5c0be6569fe807cb8c9948bb819a8f728039d87d"} Feb 17 15:15:39.098668 master-0 kubenswrapper[26425]: I0217 15:15:39.098644 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"037eeb0eb6e9db7c0c16d981af4599e4cf0a6c4e36b47a40589e4b6308c2db61"} Feb 17 15:15:39.098810 master-0 kubenswrapper[26425]: I0217 15:15:39.098788 26425 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="d438363c001bf717835978e9fb2dcc240d924c535bb18d220d0dd81ba4eceb10" Feb 17 15:15:39.098955 master-0 kubenswrapper[26425]: I0217 15:15:39.098934 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc106479f8ba2301c0905fc79952057832731752fc004c203824ce711aec45fb" Feb 17 15:15:39.099075 master-0 kubenswrapper[26425]: I0217 15:15:39.099054 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b31871b8085707dfa74452a2934f0c0323ff06325d382d8b3f5e4dc6e4076e7" Feb 17 15:15:39.099197 master-0 kubenswrapper[26425]: I0217 15:15:39.099182 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2dd0a0688727e052252cd2506303293a622de765553e0bfacc8554a72cd3817" Feb 17 15:15:39.099293 master-0 kubenswrapper[26425]: I0217 15:15:39.099280 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d00efdad4851844a32b2b8bd4e17fbebfd887cf8eba9c8198aa34f66fbdd5b6" Feb 17 15:15:39.099432 master-0 kubenswrapper[26425]: I0217 15:15:39.099418 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ee1ada2125277c0b6cce472a26bd7b393be00724a19ccb2e1067f7f0c7cb926" Feb 17 15:15:39.099567 master-0 kubenswrapper[26425]: I0217 15:15:39.099550 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82581365f6f274c239792085af3cda355d57d00d3bb74c93451eabd859e47a2b" Feb 17 15:15:39.099795 master-0 kubenswrapper[26425]: I0217 15:15:39.099775 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c288b739b6f5a5ed27ebb0ee29250c354834beafa88e6c2215d397b878664c43" Feb 17 15:15:39.100582 master-0 kubenswrapper[26425]: I0217 15:15:39.100554 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"518b836a67d98b0cf5a2e8d843574e61038c30a6058fcd6123417dc9c4975d78"} Feb 17 15:15:39.100709 master-0 kubenswrapper[26425]: I0217 15:15:39.100686 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"7ee371ff3fea654567b16adfcbd47a6ebbd168a2f1e33c4562b559cfe498844a"} Feb 17 15:15:39.100822 master-0 kubenswrapper[26425]: I0217 15:15:39.100802 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"b7bba1848d8e5849cd7385799efab8edc5b4febf88a3e8ee8efae1fdf0ca6b20"} Feb 17 15:15:39.100934 master-0 kubenswrapper[26425]: I0217 15:15:39.100915 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"77a5b96685468a1686135c8d7d6d053d9bc8223dda29da38cb0e4b9ffeb56e90"} Feb 17 15:15:39.101046 master-0 kubenswrapper[26425]: I0217 15:15:39.101024 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebf941eaba3a97825b1c8002f4b27a20","Type":"ContainerStarted","Data":"4b556a21109d55e0fc1179b5cad47796ec1a964c7618f1e0977b12773c406661"} Feb 17 15:15:39.101159 master-0 kubenswrapper[26425]: I0217 15:15:39.101139 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebf941eaba3a97825b1c8002f4b27a20","Type":"ContainerStarted","Data":"1c9e969e18b1411cff6ba15e9601c6a1a570693b9fa41b729154f36c3d4cfc86"} Feb 17 15:15:39.101441 master-0 kubenswrapper[26425]: I0217 15:15:39.101418 26425 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aa0fac2ee75614ddf9c33905ca49667c9eb5815d489ea328caebd435d408a71" Feb 17 15:15:39.101613 master-0 kubenswrapper[26425]: I0217 15:15:39.101591 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"24bcd9a1fa449d31774c0b2f9747f9f7a7d21ce729de71f7dbfd671b89feec54"} Feb 17 15:15:39.101737 master-0 kubenswrapper[26425]: I0217 15:15:39.101714 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"a52477200afc38c91a493a196c8111943fbf6121e870a10ff7e849d590f6609a"} Feb 17 15:15:39.101849 master-0 kubenswrapper[26425]: I0217 15:15:39.101829 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"7dd053c55331a8a0d792d5a78e488f015a947989e3e1383dcd1a64fa486a01e5"} Feb 17 15:15:39.101964 master-0 kubenswrapper[26425]: I0217 15:15:39.101943 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"9c473e6b1c42e4e97ed6d31b0e52ea86736af7b5464544e2ffea713e961e55df"} Feb 17 15:15:39.102091 master-0 kubenswrapper[26425]: I0217 15:15:39.102066 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"cb3dbeb96630f3d5109d6c4e5a32fbf46326a5066238f4c05eb31fd67e0570ad"} Feb 17 15:15:39.102205 master-0 kubenswrapper[26425]: I0217 15:15:39.102182 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerDied","Data":"bafb1d40abea56e15a55f39238f52822a8e7d4c344f770507c71ed614feff320"} Feb 17 15:15:39.102325 master-0 kubenswrapper[26425]: I0217 15:15:39.102303 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerDied","Data":"af8466a0f113f0fd847f0bfc35cfb14199d76e2d0ce6a9816135658a53c788cd"} Feb 17 15:15:39.102445 master-0 kubenswrapper[26425]: I0217 15:15:39.102424 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerDied","Data":"d66ebdf4bf1f41618550520db8e8e13eb193e9411ec23799b8b482aae939538d"} Feb 17 15:15:39.102618 master-0 kubenswrapper[26425]: I0217 15:15:39.102594 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"cff1bcb58e476c7626406f50da253d7834cc1bd8b48bce0f6a4957d02e2b8cc9"} Feb 17 15:15:39.120932 master-0 kubenswrapper[26425]: E0217 15:15:39.120890 26425 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.121398 master-0 kubenswrapper[26425]: E0217 15:15:39.121378 26425 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:15:39.121614 master-0 kubenswrapper[26425]: E0217 15:15:39.121408 26425 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:15:39.128294 master-0 kubenswrapper[26425]: I0217 15:15:39.128273 26425 scope.go:117] "RemoveContainer" 
containerID="fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591" Feb 17 15:15:39.128976 master-0 kubenswrapper[26425]: E0217 15:15:39.128913 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591\": container with ID starting with fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591 not found: ID does not exist" containerID="fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591" Feb 17 15:15:39.129119 master-0 kubenswrapper[26425]: I0217 15:15:39.128991 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591"} err="failed to get container status \"fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591\": rpc error: code = NotFound desc = could not find container \"fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591\": container with ID starting with fb13255312949f71c7f647e8894c0ba65b6939b0e0373e6d2aac176d8658b591 not found: ID does not exist" Feb 17 15:15:39.186506 master-0 kubenswrapper[26425]: I0217 15:15:39.186392 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 17 15:15:39.186506 master-0 kubenswrapper[26425]: I0217 15:15:39.186482 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 
17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.186519 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.186597 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.186645 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.186694 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.186740 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod 
\"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.186782 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.186830 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.186872 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.186967 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.187036 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.187087 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.187132 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.187174 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.187216 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.187262 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:39.187317 master-0 kubenswrapper[26425]: I0217 15:15:39.187307 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"27fd92ef556705625a2e4f1011322252\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:15:39.188383 master-0 kubenswrapper[26425]: I0217 15:15:39.187369 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"27fd92ef556705625a2e4f1011322252\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:15:39.188383 master-0 kubenswrapper[26425]: I0217 15:15:39.187415 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:15:39.235902 master-0 kubenswrapper[26425]: I0217 15:15:39.235847 26425 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:15:39.243926 master-0 kubenswrapper[26425]: I0217 15:15:39.243854 26425 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 17 15:15:39.244659 master-0 kubenswrapper[26425]: I0217 15:15:39.244594 26425 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 17 15:15:39.244780 master-0 kubenswrapper[26425]: I0217 15:15:39.244673 26425 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 17 15:15:39.244963 master-0 kubenswrapper[26425]: I0217 15:15:39.244904 26425 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 17 15:15:39.249528 master-0 kubenswrapper[26425]: E0217 15:15:39.249433 26425 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Feb 17 15:15:39.288738 master-0 kubenswrapper[26425]: I0217 15:15:39.288593 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 17 15:15:39.288738 master-0 kubenswrapper[26425]: I0217 15:15:39.288654 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.288738 master-0 kubenswrapper[26425]: I0217 15:15:39.288697 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.289048 master-0 kubenswrapper[26425]: I0217 15:15:39.288893 26425 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 17 15:15:39.289048 master-0 kubenswrapper[26425]: I0217 15:15:39.289010 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:39.289233 master-0 kubenswrapper[26425]: I0217 15:15:39.289087 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:39.289233 master-0 kubenswrapper[26425]: I0217 15:15:39.289092 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.289233 master-0 kubenswrapper[26425]: I0217 15:15:39.289196 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:39.289233 master-0 
kubenswrapper[26425]: I0217 15:15:39.289205 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:39.289546 master-0 kubenswrapper[26425]: I0217 15:15:39.289286 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.289546 master-0 kubenswrapper[26425]: I0217 15:15:39.289271 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:39.289546 master-0 kubenswrapper[26425]: I0217 15:15:39.289431 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:15:39.289546 master-0 kubenswrapper[26425]: I0217 15:15:39.289512 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
Feb 17 15:15:39.289812 master-0 kubenswrapper[26425]: I0217 15:15:39.289565 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:39.289812 master-0 kubenswrapper[26425]: I0217 15:15:39.289577 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:15:39.289812 master-0 kubenswrapper[26425]: I0217 15:15:39.289622 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:39.289812 master-0 kubenswrapper[26425]: I0217 15:15:39.289694 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:39.289812 master-0 kubenswrapper[26425]: I0217 15:15:39.289750 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:39.289812 master-0 kubenswrapper[26425]: I0217 15:15:39.289799 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.290165 master-0 kubenswrapper[26425]: I0217 15:15:39.289838 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:39.290165 master-0 kubenswrapper[26425]: I0217 15:15:39.289927 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:39.290165 master-0 kubenswrapper[26425]: I0217 15:15:39.289992 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:39.290165 master-0 kubenswrapper[26425]: I0217 15:15:39.290008 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.290165 master-0 
kubenswrapper[26425]: I0217 15:15:39.290084 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.290165 master-0 kubenswrapper[26425]: I0217 15:15:39.290085 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:39.290165 master-0 kubenswrapper[26425]: I0217 15:15:39.290142 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.290656 master-0 kubenswrapper[26425]: I0217 15:15:39.290207 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 17 15:15:39.290656 master-0 kubenswrapper[26425]: I0217 15:15:39.290255 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.290656 master-0 kubenswrapper[26425]: I0217 15:15:39.290302 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.290656 master-0 kubenswrapper[26425]: I0217 15:15:39.290309 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 17 15:15:39.290656 master-0 kubenswrapper[26425]: I0217 15:15:39.290383 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.290656 master-0 kubenswrapper[26425]: I0217 15:15:39.290438 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:39.290656 master-0 kubenswrapper[26425]: I0217 15:15:39.290502 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:39.290656 master-0 kubenswrapper[26425]: I0217 15:15:39.290534 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") pod 
\"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:39.290656 master-0 kubenswrapper[26425]: I0217 15:15:39.290539 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"27fd92ef556705625a2e4f1011322252\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:15:39.290656 master-0 kubenswrapper[26425]: I0217 15:15:39.290593 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"27fd92ef556705625a2e4f1011322252\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:15:39.290656 master-0 kubenswrapper[26425]: I0217 15:15:39.290599 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"27fd92ef556705625a2e4f1011322252\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:15:39.291285 master-0 kubenswrapper[26425]: I0217 15:15:39.290680 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:15:39.291285 master-0 kubenswrapper[26425]: I0217 15:15:39.290632 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"27fd92ef556705625a2e4f1011322252\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:15:39.291285 master-0 kubenswrapper[26425]: I0217 15:15:39.290747 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:15:39.296824 master-0 kubenswrapper[26425]: I0217 15:15:39.296776 26425 apiserver.go:52] "Watching apiserver" Feb 17 15:15:39.330035 master-0 kubenswrapper[26425]: I0217 15:15:39.329942 26425 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 15:15:39.333259 master-0 kubenswrapper[26425]: I0217 15:15:39.333157 26425 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk","openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls","openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f","openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs","openshift-monitoring/metrics-server-f94977f65-sgf5z","openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm","openshift-insights/insights-operator-cb4f7b4cf-cmbjq","openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9","openshift-network-node-identity/network-node-identity-xwftw","openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v","openshift-kube-apiserver/installer-1-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh","openshift-multus/multus-additional-cni-plugins-9nv95","openshift-network-operator/iptables-alerter-v2h9q","openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5","openshift-dns/dns-default-wxhtx","openshift-ingress-canary/ingress-canary-6bhf8","openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq","openshift-kube-controller-manager/installer-2-master-0","openshift-monitoring/prometheus-operator-7485d645b8-nzz2j","openshift-ovn-kubernetes/ovnkube-node-vdgrn","openshift-service-ca/service-ca-676cd8b9b5-bfm5s","openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s","openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd54
74998-tckph","openshift-marketplace/certified-operators-2lg56","assisted-installer/assisted-installer-controller-5fwlz","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8","openshift-cluster-node-tuning-operator/tuned-2ffzt","openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d","openshift-machine-config-operator/machine-config-daemon-r6sfp","openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-network-operator/network-operator-6fcf4c966-l24cg","openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b","openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n","openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg","openshift-multus/multus-admission-controller-6d678b8d67-rzbff","openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95","openshift-apiserver/apiserver-6bd884947c-tdlbn","openshift-dns/node-resolver-tzv2h","openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c","openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs","openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245","openshift-kube-scheduler/installer-5-master-0","openshift-machine-config-operator/machine-config-server-l576h","openshift-multus/multus-9r5rl","openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766","openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7","openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-controller-manager/installer-3-master-0","openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv","openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv","opens
hift-kube-controller-manager/installer-4-master-0","openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7","openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8","openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs","openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc","openshift-dns-operator/dns-operator-86b8869b79-lmqrr","openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd","openshift-monitoring/telemeter-client-7fbdcd9689-spqtt","openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm","openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9","openshift-multus/network-metrics-daemon-bnllz","openshift-network-diagnostics/network-check-target-f25s7","openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h","openshift-etcd/installer-2-master-0","openshift-kube-scheduler/installer-3-master-0","openshift-marketplace/redhat-operators-wzsv7","openshift-marketplace/redhat-marketplace-7dzgz","openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p","openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9","openshift-authentication-operator/authentication-operator-755d954778-jrdqm","openshift-etcd/installer-1-master-0","openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7","openshift-oauth-apiserver/apiserver-865765995-c58rq","openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25","openshift-kube-apiserver/kube-apiserver-master-0","openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw","openshift-monitoring/node-exporter-rttp2","openshift-marketplace/community-operators-t8vtc","openshift-ingress/router-default-864ddd5f56-g8w2f","openshift-kube-apiserver/installer-3-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 17 15:15:39.333623 master-0 kubenswrapper[26425]: I0217 15:15:39.333576 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-5fwlz" Feb 17 15:15:39.378699 master-0 kubenswrapper[26425]: I0217 15:15:39.378592 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 15:15:39.383597 master-0 kubenswrapper[26425]: I0217 15:15:39.383522 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 17 15:15:39.384980 master-0 kubenswrapper[26425]: I0217 15:15:39.384941 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 15:15:39.385262 master-0 kubenswrapper[26425]: I0217 15:15:39.385217 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 15:15:39.385469 master-0 kubenswrapper[26425]: I0217 15:15:39.385423 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 17 15:15:39.385739 master-0 kubenswrapper[26425]: I0217 15:15:39.385706 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:15:39.386299 master-0 kubenswrapper[26425]: I0217 15:15:39.386249 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 15:15:39.386772 master-0 kubenswrapper[26425]: I0217 15:15:39.386744 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 15:15:39.386864 master-0 kubenswrapper[26425]: I0217 15:15:39.386793 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 15:15:39.387695 master-0 kubenswrapper[26425]: I0217 
15:15:39.387665 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 15:15:39.389284 master-0 kubenswrapper[26425]: I0217 15:15:39.387839 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 15:15:39.389428 master-0 kubenswrapper[26425]: I0217 15:15:39.389048 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 15:15:39.397187 master-0 kubenswrapper[26425]: I0217 15:15:39.397114 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h" Feb 17 15:15:39.398079 master-0 kubenswrapper[26425]: I0217 15:15:39.398019 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c734c89-515e-4ff0-82d1-831ddaf0b99e-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" Feb 17 15:15:39.398164 master-0 kubenswrapper[26425]: I0217 15:15:39.398096 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/6c734c89-515e-4ff0-82d1-831ddaf0b99e-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" Feb 17 15:15:39.398208 master-0 kubenswrapper[26425]: I0217 15:15:39.398163 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-service-ca-bundle\") pod 
\"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:15:39.398252 master-0 kubenswrapper[26425]: I0217 15:15:39.398206 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:15:39.398302 master-0 kubenswrapper[26425]: I0217 15:15:39.398251 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e259b5a1-837b-4cde-85f7-cd5781af08bd-serving-cert\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" Feb 17 15:15:39.398302 master-0 kubenswrapper[26425]: I0217 15:15:39.398291 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e259b5a1-837b-4cde-85f7-cd5781af08bd-config\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" Feb 17 15:15:39.398376 master-0 kubenswrapper[26425]: I0217 15:15:39.398328 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/187af679-a062-4f41-81f2-33545f76febf-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:15:39.398420 master-0 
kubenswrapper[26425]: I0217 15:15:39.398378 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af61bda0-c7b4-489d-a671-eaa5299942fe-config\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" Feb 17 15:15:39.398505 master-0 kubenswrapper[26425]: I0217 15:15:39.398432 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" Feb 17 15:15:39.398555 master-0 kubenswrapper[26425]: I0217 15:15:39.398497 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh874\" (UniqueName: \"kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-kube-api-access-bh874\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:15:39.398616 master-0 kubenswrapper[26425]: I0217 15:15:39.398562 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wbvx\" (UniqueName: \"kubernetes.io/projected/61d90bf3-02df-48c8-b2ec-09a1653b0800-kube-api-access-5wbvx\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:15:39.398682 master-0 kubenswrapper[26425]: I0217 15:15:39.398616 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/65d9f008-7777-48fe-85fe-9d54a7bbcea9-serving-cert\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:15:39.398682 master-0 kubenswrapper[26425]: I0217 15:15:39.398649 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65d9f008-7777-48fe-85fe-9d54a7bbcea9-config\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:15:39.398928 master-0 kubenswrapper[26425]: I0217 15:15:39.398698 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xbnc\" (UniqueName: \"kubernetes.io/projected/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-kube-api-access-8xbnc\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" Feb 17 15:15:39.398928 master-0 kubenswrapper[26425]: I0217 15:15:39.398742 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nzlr\" (UniqueName: \"kubernetes.io/projected/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-kube-api-access-7nzlr\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:15:39.413392 master-0 kubenswrapper[26425]: I0217 15:15:39.400951 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/6c734c89-515e-4ff0-82d1-831ddaf0b99e-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" Feb 17 15:15:39.413392 master-0 kubenswrapper[26425]: I0217 15:15:39.401294 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-service-ca-bundle\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:15:39.413392 master-0 kubenswrapper[26425]: I0217 15:15:39.401556 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65d9f008-7777-48fe-85fe-9d54a7bbcea9-config\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:15:39.413392 master-0 kubenswrapper[26425]: I0217 15:15:39.401767 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af61bda0-c7b4-489d-a671-eaa5299942fe-config\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" Feb 17 15:15:39.413392 master-0 kubenswrapper[26425]: I0217 15:15:39.413197 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65d9f008-7777-48fe-85fe-9d54a7bbcea9-serving-cert\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:15:39.413791 master-0 kubenswrapper[26425]: I0217 15:15:39.398786 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e259b5a1-837b-4cde-85f7-cd5781af08bd-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" Feb 17 15:15:39.413855 master-0 kubenswrapper[26425]: I0217 15:15:39.413786 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw6dc\" (UniqueName: \"kubernetes.io/projected/fc76384d-b288-4d30-bc77-f696b62a5f30-kube-api-access-lw6dc\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:15:39.413855 master-0 kubenswrapper[26425]: I0217 15:15:39.413823 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/61d90bf3-02df-48c8-b2ec-09a1653b0800-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:15:39.413936 master-0 kubenswrapper[26425]: I0217 15:15:39.413859 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxqt4\" (UniqueName: \"kubernetes.io/projected/801742a6-3735-4883-9676-e852dc4173d2-kube-api-access-qxqt4\") pod \"csi-snapshot-controller-operator-7b87b97578-9fpgj\" (UID: \"801742a6-3735-4883-9676-e852dc4173d2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj" Feb 17 15:15:39.413936 master-0 kubenswrapper[26425]: I0217 15:15:39.413894 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt7w4\" (UniqueName: \"kubernetes.io/projected/af61bda0-c7b4-489d-a671-eaa5299942fe-kube-api-access-jt7w4\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: 
\"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" Feb 17 15:15:39.413936 master-0 kubenswrapper[26425]: I0217 15:15:39.413924 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-config\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:15:39.414057 master-0 kubenswrapper[26425]: I0217 15:15:39.413934 26425 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="3716863b-22a6-4f57-9c98-e5f2c96e601c" Feb 17 15:15:39.414705 master-0 kubenswrapper[26425]: I0217 15:15:39.414609 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/61d90bf3-02df-48c8-b2ec-09a1653b0800-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:15:39.415128 master-0 kubenswrapper[26425]: I0217 15:15:39.415081 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-config\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:15:39.415860 master-0 kubenswrapper[26425]: I0217 15:15:39.413951 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-bound-sa-token\") pod 
\"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:15:39.415969 master-0 kubenswrapper[26425]: I0217 15:15:39.415876 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-trusted-ca-bundle\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:15:39.415969 master-0 kubenswrapper[26425]: I0217 15:15:39.415920 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rddwz\" (UniqueName: \"kubernetes.io/projected/6c734c89-515e-4ff0-82d1-831ddaf0b99e-kube-api-access-rddwz\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" Feb 17 15:15:39.415969 master-0 kubenswrapper[26425]: I0217 15:15:39.415960 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22a30079-d7fc-49cf-882e-1c5022cb5bf6-trusted-ca\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:15:39.416178 master-0 kubenswrapper[26425]: I0217 15:15:39.415999 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-serving-cert\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:15:39.416178 master-0 kubenswrapper[26425]: I0217 
15:15:39.416074 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af61bda0-c7b4-489d-a671-eaa5299942fe-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" Feb 17 15:15:39.416178 master-0 kubenswrapper[26425]: I0217 15:15:39.416113 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpgqg\" (UniqueName: \"kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-kube-api-access-jpgqg\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:15:39.416178 master-0 kubenswrapper[26425]: I0217 15:15:39.416149 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:15:39.416743 master-0 kubenswrapper[26425]: I0217 15:15:39.416187 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-config\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" Feb 17 15:15:39.416743 master-0 kubenswrapper[26425]: I0217 15:15:39.416222 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" Feb 17 15:15:39.416743 master-0 kubenswrapper[26425]: I0217 15:15:39.416256 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61d90bf3-02df-48c8-b2ec-09a1653b0800-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:15:39.416743 master-0 kubenswrapper[26425]: I0217 15:15:39.416288 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:15:39.416743 master-0 kubenswrapper[26425]: I0217 15:15:39.416326 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g7zh\" (UniqueName: \"kubernetes.io/projected/65d9f008-7777-48fe-85fe-9d54a7bbcea9-kube-api-access-9g7zh\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" Feb 17 15:15:39.416743 master-0 kubenswrapper[26425]: I0217 15:15:39.416665 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 15:15:39.421130 master-0 kubenswrapper[26425]: I0217 15:15:39.416829 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 15:15:39.421130 master-0 
kubenswrapper[26425]: I0217 15:15:39.416857 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/187af679-a062-4f41-81f2-33545f76febf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" Feb 17 15:15:39.421130 master-0 kubenswrapper[26425]: I0217 15:15:39.416934 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 15:15:39.421130 master-0 kubenswrapper[26425]: I0217 15:15:39.417090 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 17 15:15:39.421130 master-0 kubenswrapper[26425]: I0217 15:15:39.417278 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 15:15:39.421130 master-0 kubenswrapper[26425]: I0217 15:15:39.417319 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc76384d-b288-4d30-bc77-f696b62a5f30-metrics-tls\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr" Feb 17 15:15:39.421130 master-0 kubenswrapper[26425]: I0217 15:15:39.419092 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 17 15:15:39.421130 master-0 kubenswrapper[26425]: I0217 15:15:39.419721 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 17 15:15:39.422136 master-0 kubenswrapper[26425]: I0217 15:15:39.422095 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 17 15:15:39.422255 master-0 kubenswrapper[26425]: I0217 15:15:39.422098 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af61bda0-c7b4-489d-a671-eaa5299942fe-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9"
Feb 17 15:15:39.422801 master-0 kubenswrapper[26425]: I0217 15:15:39.422728 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 17 15:15:39.422924 master-0 kubenswrapper[26425]: I0217 15:15:39.422879 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 17 15:15:39.423536 master-0 kubenswrapper[26425]: I0217 15:15:39.423499 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c734c89-515e-4ff0-82d1-831ddaf0b99e-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"
Feb 17 15:15:39.423751 master-0 kubenswrapper[26425]: I0217 15:15:39.423712 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 17 15:15:39.424642 master-0 kubenswrapper[26425]: I0217 15:15:39.424587 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 17 15:15:39.424748 master-0 kubenswrapper[26425]: I0217 15:15:39.424734 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 17 15:15:39.425032 master-0 kubenswrapper[26425]: I0217 15:15:39.424990 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 17 15:15:39.425113 master-0 kubenswrapper[26425]: I0217 15:15:39.424992 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 17 15:15:39.425210 master-0 kubenswrapper[26425]: I0217 15:15:39.425159 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 17 15:15:39.425280 master-0 kubenswrapper[26425]: I0217 15:15:39.425246 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 17 15:15:39.425424 master-0 kubenswrapper[26425]: I0217 15:15:39.425376 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 17 15:15:39.425520 master-0 kubenswrapper[26425]: I0217 15:15:39.425490 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 17 15:15:39.425772 master-0 kubenswrapper[26425]: I0217 15:15:39.425730 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 17 15:15:39.425974 master-0 kubenswrapper[26425]: I0217 15:15:39.425931 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 17 15:15:39.426041 master-0 kubenswrapper[26425]: I0217 15:15:39.425994 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 17 15:15:39.426178 master-0 kubenswrapper[26425]: I0217 15:15:39.426141 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 17 15:15:39.426637 master-0 kubenswrapper[26425]: I0217 15:15:39.426600 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.426835 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427032 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427088 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.425934 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427189 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427213 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427253 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427275 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427406 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427522 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427521 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427565 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427668 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427730 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427784 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427791 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427811 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22a30079-d7fc-49cf-882e-1c5022cb5bf6-metrics-tls\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427737 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427891 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427988 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.428000 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427905 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.427908 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.428157 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e259b5a1-837b-4cde-85f7-cd5781af08bd-serving-cert\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.428240 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.428260 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.428827 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.429285 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e259b5a1-837b-4cde-85f7-cd5781af08bd-config\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv"
Feb 17 15:15:39.429587 master-0 kubenswrapper[26425]: I0217 15:15:39.429620 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 17 15:15:39.431621 master-0 kubenswrapper[26425]: I0217 15:15:39.429767 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 17 15:15:39.431621 master-0 kubenswrapper[26425]: I0217 15:15:39.429935 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:15:39.431621 master-0 kubenswrapper[26425]: I0217 15:15:39.430685 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-serving-cert\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:15:39.431621 master-0 kubenswrapper[26425]: I0217 15:15:39.431303 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 17 15:15:39.431878 master-0 kubenswrapper[26425]: I0217 15:15:39.431841 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 17 15:15:39.432519 master-0 kubenswrapper[26425]: I0217 15:15:39.432483 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"
Feb 17 15:15:39.435379 master-0 kubenswrapper[26425]: I0217 15:15:39.435336 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 17 15:15:39.435507 master-0 kubenswrapper[26425]: I0217 15:15:39.435478 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 17 15:15:39.436092 master-0 kubenswrapper[26425]: I0217 15:15:39.436062 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 17 15:15:39.436345 master-0 kubenswrapper[26425]: I0217 15:15:39.436104 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 17 15:15:39.442932 master-0 kubenswrapper[26425]: I0217 15:15:39.442865 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-config\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"
Feb 17 15:15:39.444727 master-0 kubenswrapper[26425]: I0217 15:15:39.444687 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 17 15:15:39.444945 master-0 kubenswrapper[26425]: I0217 15:15:39.444911 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 17 15:15:39.445142 master-0 kubenswrapper[26425]: I0217 15:15:39.445111 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 17 15:15:39.446749 master-0 kubenswrapper[26425]: I0217 15:15:39.446696 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 17 15:15:39.446889 master-0 kubenswrapper[26425]: I0217 15:15:39.446862 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 17 15:15:39.447035 master-0 kubenswrapper[26425]: I0217 15:15:39.447008 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 17 15:15:39.447391 master-0 kubenswrapper[26425]: I0217 15:15:39.447360 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 17 15:15:39.452246 master-0 kubenswrapper[26425]: I0217 15:15:39.452193 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61d90bf3-02df-48c8-b2ec-09a1653b0800-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:15:39.457164 master-0 kubenswrapper[26425]: I0217 15:15:39.457030 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 17 15:15:39.457164 master-0 kubenswrapper[26425]: I0217 15:15:39.457094 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 17 15:15:39.457378 master-0 kubenswrapper[26425]: I0217 15:15:39.457211 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 17 15:15:39.457580 master-0 kubenswrapper[26425]: I0217 15:15:39.457551 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 17 15:15:39.457860 master-0 kubenswrapper[26425]: I0217 15:15:39.457841 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 17 15:15:39.458041 master-0 kubenswrapper[26425]: I0217 15:15:39.458018 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 17 15:15:39.458274 master-0 kubenswrapper[26425]: I0217 15:15:39.457842 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 17 15:15:39.458584 master-0 kubenswrapper[26425]: I0217 15:15:39.458210 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 17 15:15:39.458761 master-0 kubenswrapper[26425]: I0217 15:15:39.458725 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 17 15:15:39.460991 master-0 kubenswrapper[26425]: I0217 15:15:39.459857 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 17 15:15:39.460991 master-0 kubenswrapper[26425]: I0217 15:15:39.460107 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh874\" (UniqueName: \"kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-kube-api-access-bh874\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:15:39.460991 master-0 kubenswrapper[26425]: I0217 15:15:39.460311 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e259b5a1-837b-4cde-85f7-cd5781af08bd-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-p5mdv\" (UID: \"e259b5a1-837b-4cde-85f7-cd5781af08bd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv"
Feb 17 15:15:39.460991 master-0 kubenswrapper[26425]: I0217 15:15:39.460516 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 17 15:15:39.460991 master-0 kubenswrapper[26425]: I0217 15:15:39.460735 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 17 15:15:39.460991 master-0 kubenswrapper[26425]: I0217 15:15:39.460740 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 17 15:15:39.469544 master-0 kubenswrapper[26425]: I0217 15:15:39.463892 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxqt4\" (UniqueName: \"kubernetes.io/projected/801742a6-3735-4883-9676-e852dc4173d2-kube-api-access-qxqt4\") pod \"csi-snapshot-controller-operator-7b87b97578-9fpgj\" (UID: \"801742a6-3735-4883-9676-e852dc4173d2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj"
Feb 17 15:15:39.469544 master-0 kubenswrapper[26425]: I0217 15:15:39.465588 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wbvx\" (UniqueName: \"kubernetes.io/projected/61d90bf3-02df-48c8-b2ec-09a1653b0800-kube-api-access-5wbvx\") pod \"openshift-config-operator-7c6bdb986f-fcnqs\" (UID: \"61d90bf3-02df-48c8-b2ec-09a1653b0800\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:15:39.470204 master-0 kubenswrapper[26425]: I0217 15:15:39.469852 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-trusted-ca-bundle\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:15:39.471825 master-0 kubenswrapper[26425]: I0217 15:15:39.471766 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 17 15:15:39.474543 master-0 kubenswrapper[26425]: I0217 15:15:39.472141 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 17 15:15:39.474543 master-0 kubenswrapper[26425]: I0217 15:15:39.473571 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 17 15:15:39.478996 master-0 kubenswrapper[26425]: I0217 15:15:39.478939 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpgqg\" (UniqueName: \"kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-kube-api-access-jpgqg\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:15:39.479598 master-0 kubenswrapper[26425]: I0217 15:15:39.479500 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 17 15:15:39.480983 master-0 kubenswrapper[26425]: I0217 15:15:39.480895 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7w4\" (UniqueName: \"kubernetes.io/projected/af61bda0-c7b4-489d-a671-eaa5299942fe-kube-api-access-jt7w4\") pod \"openshift-apiserver-operator-6d4655d9cf-5f5g9\" (UID: \"af61bda0-c7b4-489d-a671-eaa5299942fe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9"
Feb 17 15:15:39.481078 master-0 kubenswrapper[26425]: I0217 15:15:39.481029 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 17 15:15:39.482338 master-0 kubenswrapper[26425]: I0217 15:15:39.481258 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g7zh\" (UniqueName: \"kubernetes.io/projected/65d9f008-7777-48fe-85fe-9d54a7bbcea9-kube-api-access-9g7zh\") pod \"service-ca-operator-5dc4688546-sg75p\" (UID: \"65d9f008-7777-48fe-85fe-9d54a7bbcea9\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p"
Feb 17 15:15:39.482338 master-0 kubenswrapper[26425]: I0217 15:15:39.481258 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw6dc\" (UniqueName: \"kubernetes.io/projected/fc76384d-b288-4d30-bc77-f696b62a5f30-kube-api-access-lw6dc\") pod \"dns-operator-86b8869b79-lmqrr\" (UID: \"fc76384d-b288-4d30-bc77-f696b62a5f30\") " pod="openshift-dns-operator/dns-operator-86b8869b79-lmqrr"
Feb 17 15:15:39.482338 master-0 kubenswrapper[26425]: I0217 15:15:39.481834 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22a30079-d7fc-49cf-882e-1c5022cb5bf6-bound-sa-token\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:15:39.483672 master-0 kubenswrapper[26425]: I0217 15:15:39.483632 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 17 15:15:39.484050 master-0 kubenswrapper[26425]: I0217 15:15:39.484012 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 17 15:15:39.484599 master-0 kubenswrapper[26425]: I0217 15:15:39.484563 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Feb 17 15:15:39.484930 master-0 kubenswrapper[26425]: I0217 15:15:39.484891 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xbnc\" (UniqueName: \"kubernetes.io/projected/c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda-kube-api-access-8xbnc\") pod \"openshift-controller-manager-operator-5f5f84757d-dsfkk\" (UID: \"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk"
Feb 17 15:15:39.485298 master-0 kubenswrapper[26425]: I0217 15:15:39.485269 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 17 15:15:39.486622 master-0 kubenswrapper[26425]: I0217 15:15:39.486585 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:15:39.486851 master-0 kubenswrapper[26425]: I0217 15:15:39.486738 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 17 15:15:39.487013 master-0 kubenswrapper[26425]: I0217 15:15:39.486806 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq"
Feb 17 15:15:39.488151 master-0 kubenswrapper[26425]: I0217 15:15:39.488073 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22a30079-d7fc-49cf-882e-1c5022cb5bf6-trusted-ca\") pod \"ingress-operator-c588d8cb4-nclxg\" (UID: \"22a30079-d7fc-49cf-882e-1c5022cb5bf6\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg"
Feb 17 15:15:39.491216 master-0 kubenswrapper[26425]: I0217 15:15:39.491186 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/187af679-a062-4f41-81f2-33545f76febf-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:15:39.495543 master-0 kubenswrapper[26425]: I0217 15:15:39.495505 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 17 15:15:39.497357 master-0 kubenswrapper[26425]: I0217 15:15:39.497317 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rddwz\" (UniqueName: \"kubernetes.io/projected/6c734c89-515e-4ff0-82d1-831ddaf0b99e-kube-api-access-rddwz\") pod \"cluster-olm-operator-55b69c6c48-mzk89\" (UID: \"6c734c89-515e-4ff0-82d1-831ddaf0b99e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89"
Feb 17 15:15:39.516699 master-0 kubenswrapper[26425]: I0217 15:15:39.516652 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-run\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:15:39.516699 master-0 kubenswrapper[26425]: I0217 15:15:39.516701 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-client-ca\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:15:39.516699 master-0 kubenswrapper[26425]: I0217 15:15:39.516722 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgwfb\" (UniqueName: \"kubernetes.io/projected/4fd2c79d-1e10-4f09-8a33-c66598abc99a-kube-api-access-mgwfb\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:15:39.517512 master-0 kubenswrapper[26425]: I0217 15:15:39.516745 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-tuned\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:15:39.517512 master-0 kubenswrapper[26425]: I0217 15:15:39.516803 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-config\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn"
Feb 17 15:15:39.517512 master-0 kubenswrapper[26425]: I0217 15:15:39.516826 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ghlk\" (UniqueName: \"kubernetes.io/projected/833c8661-28ca-463a-ac61-6edb961056e3-kube-api-access-2ghlk\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:15:39.517512 master-0 kubenswrapper[26425]: I0217 15:15:39.516934 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bzqs\" (UniqueName: \"kubernetes.io/projected/fb153362-0abb-4aad-8975-532f6e72d032-kube-api-access-7bzqs\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:15:39.517512 master-0 kubenswrapper[26425]: I0217 15:15:39.516956 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-tuned\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:15:39.517512 master-0 kubenswrapper[26425]: I0217 15:15:39.517042 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/553d4535-9985-47e2-83ee-8fcfb6035e7b-config\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"
Feb 17 15:15:39.517512 master-0 kubenswrapper[26425]: I0217 15:15:39.517091 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-env-overrides\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:15:39.517512 master-0 kubenswrapper[26425]: I0217 15:15:39.517256 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-env-overrides\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:15:39.517512 master-0 kubenswrapper[26425]: I0217 15:15:39.517261 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-tmp\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:15:39.517512 master-0 kubenswrapper[26425]: I0217 15:15:39.517346 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lwz4\" (UniqueName: \"kubernetes.io/projected/68954d1e-2147-4465-9817-a3c04cbc19b0-kube-api-access-4lwz4\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:15:39.517512 master-0 kubenswrapper[26425]: I0217 15:15:39.517419 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-tmp\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:15:39.517512 master-0 kubenswrapper[26425]: I0217 15:15:39.517478 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx8s7\" (UniqueName: \"kubernetes.io/projected/aa267e55-eef2-447f-b2ff-57c1ec2930be-kube-api-access-nx8s7\") pod \"node-resolver-tzv2h\" (UID: \"aa267e55-eef2-447f-b2ff-57c1ec2930be\") " pod="openshift-dns/node-resolver-tzv2h"
Feb 17 15:15:39.517512 master-0 kubenswrapper[26425]: I0217 15:15:39.517502 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-hostroot\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.517512 master-0 kubenswrapper[26425]: I0217 15:15:39.517521 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517540 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27gfx\" (UniqueName: \"kubernetes.io/projected/b4422676-9a70-4973-8299-7b40a66e9c96-kube-api-access-27gfx\") pod \"control-plane-machine-set-operator-d8bf84b88-hmpc7\" (UID: \"b4422676-9a70-4973-8299-7b40a66e9c96\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517562 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-ca\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517583 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpmdw\" (UniqueName: \"kubernetes.io/projected/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-kube-api-access-cpmdw\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517605 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b167b7b-2280-4c82-ac78-71c57aebe503-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517621 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-os-release\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517640 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-etcd-serving-ca\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517659 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/68954d1e-2147-4465-9817-a3c04cbc19b0-cache\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517676 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-node-log\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517694 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysctl-conf\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517734 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmp42\" (UniqueName: \"kubernetes.io/projected/124ba199-b79a-4e5c-8512-cc0ae50f73c8-kube-api-access-dmp42\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517756 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517773 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d481a79-f565-4c7f-84cc-207fc3117c23-audit-dir\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517784 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-srv-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517788 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-kubelet\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517869 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxjqf\" (UniqueName: \"kubernetes.io/projected/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-kube-api-access-gxjqf\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"
Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517874 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/68954d1e-2147-4465-9817-a3c04cbc19b0-cache\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " 
pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.517892 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-signing-key\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:15:39.518149 master-0 kubenswrapper[26425]: I0217 15:15:39.518028 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-ca\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518185 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518293 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b167b7b-2280-4c82-ac78-71c57aebe503-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518297 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8d317dcb-ea6a-4066-b197-5ee960dec01a-config-volume\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " pod="openshift-dns/dns-default-wxhtx" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518336 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q8jf\" (UniqueName: \"kubernetes.io/projected/a2d6e329-7ad8-4fc2-accc-66827f11743d-kube-api-access-8q8jf\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518358 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518377 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-lib-modules\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518396 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/626c4f7a-59ee-45da-9198-05dd2c42ac42-service-ca\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518472 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-catalog-content\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518499 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-config\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518520 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klfm5\" (UniqueName: \"kubernetes.io/projected/52b28595-f0fc-49e2-9c95-43e5f1eb003f-kube-api-access-klfm5\") pod \"migrator-5bd989df77-hrl5d\" (UID: \"52b28595-f0fc-49e2-9c95-43e5f1eb003f\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518539 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-kubelet\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518556 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833c8661-28ca-463a-ac61-6edb961056e3-utilities\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " 
pod="openshift-marketplace/redhat-operators-wzsv7" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518574 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-562gp\" (UniqueName: \"kubernetes.io/projected/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-kube-api-access-562gp\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518592 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-encryption-config\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518609 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkb9r\" (UniqueName: \"kubernetes.io/projected/d973c9bc-8097-489c-9b8b-70b775177c41-kube-api-access-gkb9r\") pod \"network-check-source-7d8f4c8c66-fc8n7\" (UID: \"d973c9bc-8097-489c-9b8b-70b775177c41\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518638 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-modprobe-d\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518658 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-serving-cert\") pod 
\"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518661 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-catalog-content\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518675 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518699 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-slash\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518718 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/553d4535-9985-47e2-83ee-8fcfb6035e7b-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518733 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-audit\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518878 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833c8661-28ca-463a-ac61-6edb961056e3-utilities\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " pod="openshift-marketplace/redhat-operators-wzsv7" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518557 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518963 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-metrics-certs\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.518984 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-log-socket\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 
15:15:39.519003 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/50c51fe2-32aa-430f-8da0-7cf3b9519131-cache\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:15:39.518999 master-0 kubenswrapper[26425]: I0217 15:15:39.519024 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-stats-auth\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519050 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzrph\" (UniqueName: \"kubernetes.io/projected/c97d328c-95b6-4511-aa90-531ab42b9653-kube-api-access-qzrph\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519080 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-host\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519105 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod 
\"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519123 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-bin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519140 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2dv\" (UniqueName: \"kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519138 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6d23570-21d6-4b08-83fc-8b0827c25313-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.518963 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-serving-cert\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519243 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-netns\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519267 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpq86\" (UniqueName: \"kubernetes.io/projected/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-kube-api-access-cpq86\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519287 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a905fb6-17d4-413b-9107-859c804ce906-ovn-node-metrics-cert\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519305 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-systemd-units\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519326 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d075439c-721d-432b-b4f9-9f078132bf92-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-nm8rs\" (UID: \"d075439c-721d-432b-b4f9-9f078132bf92\") " 
pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519343 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/626c4f7a-59ee-45da-9198-05dd2c42ac42-kube-api-access\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519375 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-ovn\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519390 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/124ba199-b79a-4e5c-8512-cc0ae50f73c8-audit-dir\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519405 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/68954d1e-2147-4465-9817-a3c04cbc19b0-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519423 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-system-cni-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519441 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc216ba1-144a-4cc8-93db-85ab558a166a-catalog-content\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519485 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/632fa4c3-b717-432c-8c5f-8d809f69c48b-iptables-alerter-script\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519493 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519506 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-os-release\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.523423 
master-0 kubenswrapper[26425]: I0217 15:15:39.519524 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-profile-collector-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519541 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519560 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-ovnkube-identity-cm\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519579 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-serving-cert\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519628 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcb68\" (UniqueName: 
\"kubernetes.io/projected/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-kube-api-access-jcb68\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519646 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t2vg\" (UniqueName: \"kubernetes.io/projected/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-kube-api-access-6t2vg\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519647 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/50c51fe2-32aa-430f-8da0-7cf3b9519131-cache\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519663 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-conf-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519691 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-socket-dir-parent\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.523423 master-0 
kubenswrapper[26425]: I0217 15:15:39.519708 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-var-lib-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519736 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.519897 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/632fa4c3-b717-432c-8c5f-8d809f69c48b-iptables-alerter-script\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520013 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-system-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520053 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-etc-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520074 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg8h7\" (UniqueName: \"kubernetes.io/projected/257db04b-7203-4a1d-b3d4-bd4db258a3cc-kube-api-access-jg8h7\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520091 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-config\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520107 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-profile-collector-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520137 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520263 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520281 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh2m4\" (UniqueName: \"kubernetes.io/projected/31e31afc-79d5-46f4-9835-0fd11da9465f-kube-api-access-jh2m4\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520276 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/553d4535-9985-47e2-83ee-8fcfb6035e7b-config\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520319 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-ovnkube-identity-cm\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520563 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: 
\"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520826 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc216ba1-144a-4cc8-93db-85ab558a166a-catalog-content\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520888 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-signing-cabundle\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520919 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-client\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520941 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-etc-kubernetes\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.520980 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7brbd\" (UniqueName: 
\"kubernetes.io/projected/fce9579e-7383-421e-95dd-8f8b786817f9-kube-api-access-7brbd\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521004 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-trusted-ca-bundle\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521020 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c97d328c-95b6-4511-aa90-531ab42b9653-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521041 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-etcd-client\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521060 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-trusted-ca-bundle\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521075 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521093 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbmb9\" (UniqueName: \"kubernetes.io/projected/129dba1e-73df-4ea4-96c0-3eba78d568ba-kube-api-access-rbmb9\") pod \"csi-snapshot-controller-74b6595c6d-q4766\" (UID: \"129dba1e-73df-4ea4-96c0-3eba78d568ba\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521109 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-profile-collector-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521127 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4422676-9a70-4973-8299-7b40a66e9c96-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-hmpc7\" (UID: \"b4422676-9a70-4973-8299-7b40a66e9c96\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521145 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-proxy-ca-bundles\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521164 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-sys\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521167 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-etcd-client\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521180 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-image-import-ca\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521234 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 
15:15:39.521624 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-netd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521654 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-whereabouts-configmap\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521935 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.521970 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08e27254-e906-484a-b346-036f898be3ae-profile-collector-cert\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522024 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgs5v\" (UniqueName: \"kubernetes.io/projected/9a905fb6-17d4-413b-9107-859c804ce906-kube-api-access-mgs5v\") pod \"ovnkube-node-vdgrn\" (UID: 
\"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522114 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g48f\" (UniqueName: \"kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-kube-api-access-8g48f\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522137 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3db03cef-d297-4bf7-8e52-dd0b18882d07-serving-cert\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522161 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d317dcb-ea6a-4066-b197-5ee960dec01a-metrics-tls\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " pod="openshift-dns/dns-default-wxhtx" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522177 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522194 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cni-binary-copy\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522213 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-k8s-cni-cncf-io\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522217 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-whereabouts-configmap\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522234 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrh2k\" (UniqueName: \"kubernetes.io/projected/071566ae-a9ae-4aa9-9dc3-38602363be72-kube-api-access-hrh2k\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522254 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwpz\" (UniqueName: \"kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " 
pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522272 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8wxf\" (UniqueName: \"kubernetes.io/projected/08e27254-e906-484a-b346-036f898be3ae-kube-api-access-d8wxf\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522320 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spcf4\" (UniqueName: \"kubernetes.io/projected/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-kube-api-access-spcf4\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522340 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-audit-policies\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522358 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/50c51fe2-32aa-430f-8da0-7cf3b9519131-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522376 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysconfig\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522390 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-systemd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522405 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522423 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-client-ca\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522441 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysctl-d\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 
15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522441 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/071566ae-a9ae-4aa9-9dc3-38602363be72-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522480 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522517 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522535 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/68954d1e-2147-4465-9817-a3c04cbc19b0-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522550 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cni-binary-copy\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522570 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnhjw\" (UniqueName: \"kubernetes.io/projected/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-kube-api-access-pnhjw\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522603 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/626c4f7a-59ee-45da-9198-05dd2c42ac42-serving-cert\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522680 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-config\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522788 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpwhf\" (UniqueName: \"kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf\") pod \"network-check-target-f25s7\" (UID: \"727f20b6-19c7-45eb-a803-6898ecaeffd0\") " pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.522973 
26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-utilities\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.523048 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwptc\" (UniqueName: \"kubernetes.io/projected/8d317dcb-ea6a-4066-b197-5ee960dec01a-kube-api-access-nwptc\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " pod="openshift-dns/dns-default-wxhtx" Feb 17 15:15:39.523423 master-0 kubenswrapper[26425]: I0217 15:15:39.523542 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-var-lib-kubelet\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.523663 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-utilities\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz" Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.523718 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrg27\" (UniqueName: \"kubernetes.io/projected/3db03cef-d297-4bf7-8e52-dd0b18882d07-kube-api-access-xrg27\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" Feb 
17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.523747 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/68954d1e-2147-4465-9817-a3c04cbc19b0-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.523955 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.523986 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc216ba1-144a-4cc8-93db-85ab558a166a-utilities\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524008 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-serving-cert\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524031 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/50c51fe2-32aa-430f-8da0-7cf3b9519131-etc-containers\") 
pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524061 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-config\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524174 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524214 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-binary-copy\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524231 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc216ba1-144a-4cc8-93db-85ab558a166a-utilities\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 
15:15:39.524241 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-netns\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524312 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/632fa4c3-b717-432c-8c5f-8d809f69c48b-host-slash\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524342 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-catalog-content\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524370 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-default-certificate\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524392 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/aa267e55-eef2-447f-b2ff-57c1ec2930be-hosts-file\") pod \"node-resolver-tzv2h\" (UID: \"aa267e55-eef2-447f-b2ff-57c1ec2930be\") " pod="openshift-dns/node-resolver-tzv2h"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524414 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-kubernetes\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524429 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-config\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524440 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-etcd-serving-ca\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524494 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833c8661-28ca-463a-ac61-6edb961056e3-catalog-content\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524522 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524550 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d481a79-f565-4c7f-84cc-207fc3117c23-node-pullsecrets\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524583 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gswxb\" (UniqueName: \"kubernetes.io/projected/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-kube-api-access-gswxb\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524605 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-etcd-client\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524628 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-serving-cert\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524646 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb153362-0abb-4aad-8975-532f6e72d032-cni-binary-copy\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524660 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-encryption-config\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524685 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b167b7b-2280-4c82-ac78-71c57aebe503-config\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.524759 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-multus\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525048 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-catalog-content\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525078 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/071566ae-a9ae-4aa9-9dc3-38602363be72-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525112 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525142 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525168 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525233 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833c8661-28ca-463a-ac61-6edb961056e3-catalog-content\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525344 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-script-lib\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525345 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525437 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bpwm\" (UniqueName: \"kubernetes.io/projected/632fa4c3-b717-432c-8c5f-8d809f69c48b-kube-api-access-8bpwm\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525506 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4fd2c79d-1e10-4f09-8a33-c66598abc99a-host-etc-kube\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525581 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-daemon-config\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525604 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-multus-certs\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525656 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/68954d1e-2147-4465-9817-a3c04cbc19b0-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525781 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4fd2c79d-1e10-4f09-8a33-c66598abc99a-metrics-tls\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525937 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-daemon-config\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.525964 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526015 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4fd2c79d-1e10-4f09-8a33-c66598abc99a-metrics-tls\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526036 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31e31afc-79d5-46f4-9835-0fd11da9465f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526104 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526114 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cnibin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526147 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/071566ae-a9ae-4aa9-9dc3-38602363be72-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526188 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-config\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526405 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-webhook-cert\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526473 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b167b7b-2280-4c82-ac78-71c57aebe503-config\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526572 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-cnibin\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526600 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/626c4f7a-59ee-45da-9198-05dd2c42ac42-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526645 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-env-overrides\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526670 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b167b7b-2280-4c82-ac78-71c57aebe503-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526690 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr7lv\" (UniqueName: \"kubernetes.io/projected/6b7d1adb-b23b-4702-be7d-27e818e8fd63-kube-api-access-cr7lv\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526735 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czt92\" (UniqueName: \"kubernetes.io/projected/c6d23570-21d6-4b08-83fc-8b0827c25313-kube-api-access-czt92\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526753 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526770 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526811 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8df\" (UniqueName: \"kubernetes.io/projected/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-kube-api-access-wn8df\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526819 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-env-overrides\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526828 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/626c4f7a-59ee-45da-9198-05dd2c42ac42-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526856 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2d6e329-7ad8-4fc2-accc-66827f11743d-service-ca-bundle\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526862 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-webhook-cert\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526905 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-bin\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526926 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.526945 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-systemd\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.527006 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/553d4535-9985-47e2-83ee-8fcfb6035e7b-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.527172 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-utilities\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.527196 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2tcz\" (UniqueName: \"kubernetes.io/projected/1d481a79-f565-4c7f-84cc-207fc3117c23-kube-api-access-d2tcz\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.527262 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/553d4535-9985-47e2-83ee-8fcfb6035e7b-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.527304 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-utilities\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.527302 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/257db04b-7203-4a1d-b3d4-bd4db258a3cc-srv-cert\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.527320 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-config\") pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:15:39.531806 master-0 kubenswrapper[26425]: I0217 15:15:39.531509 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nzlr\" (UniqueName: \"kubernetes.io/projected/e9b3f722-fb34-4ff5-b28b-fc24f43d85ae-kube-api-access-7nzlr\") pod \"authentication-operator-755d954778-jrdqm\" (UID: \"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae\") " pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:15:39.545385 master-0 kubenswrapper[26425]: I0217 15:15:39.545281 26425 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 17 15:15:39.556580 master-0 kubenswrapper[26425]: I0217 15:15:39.556518 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/187af679-a062-4f41-81f2-33545f76febf-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-dtwmd\" (UID: \"187af679-a062-4f41-81f2-33545f76febf\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd"
Feb 17 15:15:39.557380 master-0 kubenswrapper[26425]: I0217 15:15:39.557341 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 17 15:15:39.567324 master-0 kubenswrapper[26425]: I0217 15:15:39.567263 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31e31afc-79d5-46f4-9835-0fd11da9465f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:15:39.575168 master-0 kubenswrapper[26425]: I0217 15:15:39.575141 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 17 15:15:39.580908 master-0 kubenswrapper[26425]: I0217 15:15:39.580841 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-config\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.585336 master-0 kubenswrapper[26425]: I0217 15:15:39.585301 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31e31afc-79d5-46f4-9835-0fd11da9465f-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245"
Feb 17 15:15:39.601215 master-0 kubenswrapper[26425]: I0217 15:15:39.601166 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 17 15:15:39.606132 master-0 kubenswrapper[26425]: I0217 15:15:39.606087 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a905fb6-17d4-413b-9107-859c804ce906-ovnkube-script-lib\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.615666 master-0 kubenswrapper[26425]: I0217 15:15:39.615638 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 17 15:15:39.627982 master-0 kubenswrapper[26425]: I0217 15:15:39.627942 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-conf-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.628139 master-0 kubenswrapper[26425]: I0217 15:15:39.627998 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnnxm\" (UniqueName: \"kubernetes.io/projected/8385a176-0e12-47ef-862e-8331e6734b9c-kube-api-access-lnnxm\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:15:39.628139 master-0 kubenswrapper[26425]: I0217 15:15:39.628027 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-root\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:15:39.628139 master-0 kubenswrapper[26425]: I0217 15:15:39.628055 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-var-lib-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.628139 master-0 kubenswrapper[26425]: I0217 15:15:39.628084 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba1306f7-029b-4d43-ba3c-5738da9148d6-proxy-tls\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f"
Feb 17 15:15:39.628139 master-0 kubenswrapper[26425]: I0217 15:15:39.628108 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/da06cfcb-7c78-4022-96b1-d858853f5adc-proxy-tls\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:15:39.628139 master-0 kubenswrapper[26425]: I0217 15:15:39.628130 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-socket-dir-parent\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.628393 master-0 kubenswrapper[26425]: I0217 15:15:39.628152 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.628393 master-0 kubenswrapper[26425]: I0217 15:15:39.628175 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:15:39.628393 master-0 kubenswrapper[26425]: I0217 15:15:39.628199 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9wh2\" (UniqueName: \"kubernetes.io/projected/c8646e5c-c2ce-48e6-b757-58044769f479-kube-api-access-t9wh2\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr"
Feb 17 15:15:39.628393 master-0 kubenswrapper[26425]: I0217 15:15:39.628226 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8385a176-0e12-47ef-862e-8331e6734b9c-snapshots\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:15:39.628393 master-0 kubenswrapper[26425]: I0217 15:15:39.628257 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-system-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.628393 master-0 kubenswrapper[26425]: I0217 15:15:39.628287 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-etc-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.628393 master-0 kubenswrapper[26425]: I0217 15:15:39.628316 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"
Feb 17 15:15:39.628393 master-0 kubenswrapper[26425]: I0217 15:15:39.628367 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-wtmp\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:15:39.628732 master-0 kubenswrapper[26425]: I0217 15:15:39.628447 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:15:39.628732 master-0 kubenswrapper[26425]: I0217 15:15:39.628487 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s"
Feb 17 15:15:39.628732 master-0 kubenswrapper[26425]: I0217 15:15:39.628509 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2102e834-2b36-49de-a99e-c2dbe64d722f-rootfs\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:15:39.628732 master-0 kubenswrapper[26425]: I0217 15:15:39.628570 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf69t\" (UniqueName: \"kubernetes.io/projected/655e4000-0ad4-4349-8c31-e0c952e4be30-kube-api-access-qf69t\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:15:39.628732 master-0 kubenswrapper[26425]: I0217 15:15:39.628598 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-etc-kubernetes\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.628732 master-0 kubenswrapper[26425]:
I0217 15:15:39.628622 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3b6a099-f52a-428a-af09-d1842ce66891-kube-api-access\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 17 15:15:39.628732 master-0 kubenswrapper[26425]: I0217 15:15:39.628670 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8ckv\" (UniqueName: \"kubernetes.io/projected/6d56f334-6c7b-4c92-9665-56300d44f9a3-kube-api-access-k8ckv\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:15:39.628732 master-0 kubenswrapper[26425]: I0217 15:15:39.628707 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t5jv\" (UniqueName: \"kubernetes.io/projected/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-kube-api-access-9t5jv\") pod \"cluster-storage-operator-75b869db96-qbmw5\" (UID: \"ad81b5bd-2f97-4e7e-a12b-746998fa59f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5" Feb 17 15:15:39.628732 master-0 kubenswrapper[26425]: I0217 15:15:39.628730 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-config\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" Feb 17 15:15:39.629081 master-0 kubenswrapper[26425]: I0217 15:15:39.628758 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-netd\") pod \"ovnkube-node-vdgrn\" (UID: 
\"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.629081 master-0 kubenswrapper[26425]: I0217 15:15:39.628804 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhm88\" (UniqueName: \"kubernetes.io/projected/76d3da23-3347-4a5c-b328-d92671897ecc-kube-api-access-jhm88\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:15:39.629081 master-0 kubenswrapper[26425]: I0217 15:15:39.628830 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:39.629081 master-0 kubenswrapper[26425]: I0217 15:15:39.628851 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-sys\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.629081 master-0 kubenswrapper[26425]: I0217 15:15:39.628874 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-var-lock\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 17 15:15:39.629081 master-0 kubenswrapper[26425]: I0217 15:15:39.628906 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-qbmw5\" (UID: \"ad81b5bd-2f97-4e7e-a12b-746998fa59f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5" Feb 17 15:15:39.629081 master-0 kubenswrapper[26425]: I0217 15:15:39.628929 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-config\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:15:39.629081 master-0 kubenswrapper[26425]: I0217 15:15:39.628973 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-tls\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:15:39.629081 master-0 kubenswrapper[26425]: I0217 15:15:39.629009 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-config\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:15:39.629081 master-0 kubenswrapper[26425]: I0217 15:15:39.629031 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-metrics-client-ca\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:39.629081 master-0 
kubenswrapper[26425]: I0217 15:15:39.629056 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-k8s-cni-cncf-io\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.629081 master-0 kubenswrapper[26425]: I0217 15:15:39.629076 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/50c51fe2-32aa-430f-8da0-7cf3b9519131-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629099 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/784b804f-6bcf-4cbd-a19e-9b1fa244354e-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629122 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2102e834-2b36-49de-a99e-c2dbe64d722f-proxy-tls\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629170 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8646e5c-c2ce-48e6-b757-58044769f479-auth-proxy-config\") pod 
\"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629194 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysconfig\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629214 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-systemd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629239 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629262 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629284 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629312 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629339 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629361 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b58e9d93-7683-440d-a603-9543e5455490-tmpfs\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629390 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysctl-d\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.629573 master-0 
kubenswrapper[26425]: I0217 15:15:39.629412 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629449 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629490 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8385a176-0e12-47ef-862e-8331e6734b9c-serving-cert\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629542 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-apiservice-cert\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:15:39.629573 master-0 kubenswrapper[26425]: I0217 15:15:39.629575 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:15:39.630220 master-0 kubenswrapper[26425]: I0217 15:15:39.629600 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-var-lib-kubelet\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.630220 master-0 kubenswrapper[26425]: I0217 15:15:39.629635 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpsd7\" (UniqueName: \"kubernetes.io/projected/da06cfcb-7c78-4022-96b1-d858853f5adc-kube-api-access-xpsd7\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95" Feb 17 15:15:39.630220 master-0 kubenswrapper[26425]: I0217 15:15:39.629667 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/50c51fe2-32aa-430f-8da0-7cf3b9519131-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:15:39.630220 master-0 kubenswrapper[26425]: I0217 15:15:39.629688 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cdbde712-c8dd-4011-adcb-af895abce94c-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " 
pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:15:39.630220 master-0 kubenswrapper[26425]: I0217 15:15:39.629712 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rcj2\" (UniqueName: \"kubernetes.io/projected/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-api-access-4rcj2\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:15:39.630220 master-0 kubenswrapper[26425]: I0217 15:15:39.629903 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8385a176-0e12-47ef-862e-8331e6734b9c-snapshots\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq" Feb 17 15:15:39.630220 master-0 kubenswrapper[26425]: I0217 15:15:39.630070 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-etc-kubernetes\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.630504 master-0 kubenswrapper[26425]: I0217 15:15:39.630430 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-var-lib-kubelet\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.630687 master-0 kubenswrapper[26425]: I0217 15:15:39.630655 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-socket-dir-parent\") pod \"multus-9r5rl\" (UID: 
\"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.630864 master-0 kubenswrapper[26425]: I0217 15:15:39.630839 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b58e9d93-7683-440d-a603-9543e5455490-tmpfs\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:15:39.630864 master-0 kubenswrapper[26425]: I0217 15:15:39.630844 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-sys\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.631025 master-0 kubenswrapper[26425]: I0217 15:15:39.630994 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-netd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.631071 master-0 kubenswrapper[26425]: I0217 15:15:39.631018 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-system-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.631226 master-0 kubenswrapper[26425]: I0217 15:15:39.631104 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-netns\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 
15:15:39.631226 master-0 kubenswrapper[26425]: I0217 15:15:39.631132 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/632fa4c3-b717-432c-8c5f-8d809f69c48b-host-slash\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q" Feb 17 15:15:39.631226 master-0 kubenswrapper[26425]: I0217 15:15:39.631133 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-cni-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.631226 master-0 kubenswrapper[26425]: I0217 15:15:39.631193 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/632fa4c3-b717-432c-8c5f-8d809f69c48b-host-slash\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q" Feb 17 15:15:39.631393 master-0 kubenswrapper[26425]: I0217 15:15:39.631236 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysconfig\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.631393 master-0 kubenswrapper[26425]: I0217 15:15:39.631269 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-systemd\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.631393 master-0 kubenswrapper[26425]: I0217 
15:15:39.631382 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pn82\" (UniqueName: \"kubernetes.io/projected/ba1306f7-029b-4d43-ba3c-5738da9148d6-kube-api-access-7pn82\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" Feb 17 15:15:39.631533 master-0 kubenswrapper[26425]: I0217 15:15:39.631391 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-etc-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.631533 master-0 kubenswrapper[26425]: I0217 15:15:39.631409 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" Feb 17 15:15:39.631533 master-0 kubenswrapper[26425]: I0217 15:15:39.631031 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-multus-conf-dir\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.631703 master-0 kubenswrapper[26425]: I0217 15:15:39.631652 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: 
\"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.631703 master-0 kubenswrapper[26425]: I0217 15:15:39.631687 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-netns\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.632154 master-0 kubenswrapper[26425]: I0217 15:15:39.631746 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/50c51fe2-32aa-430f-8da0-7cf3b9519131-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:15:39.632154 master-0 kubenswrapper[26425]: I0217 15:15:39.631878 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-var-lib-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.632154 master-0 kubenswrapper[26425]: I0217 15:15:39.631916 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysctl-d\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.632154 master-0 kubenswrapper[26425]: I0217 15:15:39.631937 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.632154 master-0 kubenswrapper[26425]: I0217 15:15:39.631994 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-k8s-cni-cncf-io\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.632154 master-0 kubenswrapper[26425]: I0217 15:15:39.632042 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/aa267e55-eef2-447f-b2ff-57c1ec2930be-hosts-file\") pod \"node-resolver-tzv2h\" (UID: \"aa267e55-eef2-447f-b2ff-57c1ec2930be\") " pod="openshift-dns/node-resolver-tzv2h" Feb 17 15:15:39.632154 master-0 kubenswrapper[26425]: I0217 15:15:39.632092 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/aa267e55-eef2-447f-b2ff-57c1ec2930be-hosts-file\") pod \"node-resolver-tzv2h\" (UID: \"aa267e55-eef2-447f-b2ff-57c1ec2930be\") " pod="openshift-dns/node-resolver-tzv2h" Feb 17 15:15:39.632473 master-0 kubenswrapper[26425]: I0217 15:15:39.632181 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/50c51fe2-32aa-430f-8da0-7cf3b9519131-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:15:39.632473 master-0 kubenswrapper[26425]: I0217 15:15:39.632224 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-kubernetes\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.632473 master-0 kubenswrapper[26425]: I0217 15:15:39.632305 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-kubernetes\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.632473 master-0 kubenswrapper[26425]: I0217 15:15:39.632347 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:39.632473 master-0 kubenswrapper[26425]: I0217 15:15:39.632428 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d481a79-f565-4c7f-84cc-207fc3117c23-node-pullsecrets\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:15:39.632662 master-0 kubenswrapper[26425]: I0217 15:15:39.632490 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7c393109-8c98-4a73-be1a-608038e5d094-audit-log\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:39.632662 master-0 kubenswrapper[26425]: I0217 15:15:39.632550 26425 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d481a79-f565-4c7f-84cc-207fc3117c23-node-pullsecrets\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:15:39.632662 master-0 kubenswrapper[26425]: I0217 15:15:39.632568 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" Feb 17 15:15:39.632662 master-0 kubenswrapper[26425]: I0217 15:15:39.632607 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:15:39.632819 master-0 kubenswrapper[26425]: I0217 15:15:39.632716 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7c393109-8c98-4a73-be1a-608038e5d094-audit-log\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:39.632908 master-0 kubenswrapper[26425]: I0217 15:15:39.632853 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles\") pod 
\"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:39.632953 master-0 kubenswrapper[26425]: I0217 15:15:39.632910 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj92w\" (UniqueName: \"kubernetes.io/projected/8379aee6-f810-4e5f-b209-8f6cb5f87df0-kube-api-access-sj92w\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:39.632953 master-0 kubenswrapper[26425]: I0217 15:15:39.632930 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:15:39.633033 master-0 kubenswrapper[26425]: I0217 15:15:39.632959 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.633033 master-0 kubenswrapper[26425]: I0217 15:15:39.632981 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-multus\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.633033 master-0 kubenswrapper[26425]: I0217 15:15:39.633009 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:15:39.633033 master-0 kubenswrapper[26425]: I0217 15:15:39.633033 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14723cb7-2d96-42b7-b559-70386c4c841c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" Feb 17 15:15:39.633188 master-0 kubenswrapper[26425]: I0217 15:15:39.633035 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-openvswitch\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.633188 master-0 kubenswrapper[26425]: I0217 15:15:39.633067 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:15:39.633188 master-0 kubenswrapper[26425]: I0217 15:15:39.633105 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-multus\") pod \"multus-9r5rl\" (UID: 
\"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.633188 master-0 kubenswrapper[26425]: I0217 15:15:39.633111 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:39.633338 master-0 kubenswrapper[26425]: I0217 15:15:39.633235 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4fd2c79d-1e10-4f09-8a33-c66598abc99a-host-etc-kube\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" Feb 17 15:15:39.633338 master-0 kubenswrapper[26425]: I0217 15:15:39.633300 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-auth-proxy-config\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:15:39.633418 master-0 kubenswrapper[26425]: I0217 15:15:39.633308 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4fd2c79d-1e10-4f09-8a33-c66598abc99a-host-etc-kube\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" Feb 17 15:15:39.633418 master-0 kubenswrapper[26425]: I0217 15:15:39.633365 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" 
(UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-multus-certs\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.633520 master-0 kubenswrapper[26425]: I0217 15:15:39.633425 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/68954d1e-2147-4465-9817-a3c04cbc19b0-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:15:39.633520 master-0 kubenswrapper[26425]: I0217 15:15:39.633480 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-run-multus-certs\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.633605 master-0 kubenswrapper[26425]: I0217 15:15:39.633531 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cnibin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.633605 master-0 kubenswrapper[26425]: I0217 15:15:39.633564 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-cnibin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.633605 master-0 kubenswrapper[26425]: I0217 15:15:39.633561 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/68954d1e-2147-4465-9817-a3c04cbc19b0-etc-docker\") 
pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:15:39.633722 master-0 kubenswrapper[26425]: I0217 15:15:39.633609 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0" Feb 17 15:15:39.633722 master-0 kubenswrapper[26425]: I0217 15:15:39.633671 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:15:39.633804 master-0 kubenswrapper[26425]: I0217 15:15:39.633727 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:39.633804 master-0 kubenswrapper[26425]: I0217 15:15:39.633783 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-cnibin\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:15:39.633889 master-0 kubenswrapper[26425]: I0217 15:15:39.633859 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/626c4f7a-59ee-45da-9198-05dd2c42ac42-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7" Feb 17 15:15:39.633933 master-0 kubenswrapper[26425]: I0217 15:15:39.633897 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-cnibin\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:15:39.633975 master-0 kubenswrapper[26425]: I0217 15:15:39.633926 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:15:39.633975 master-0 kubenswrapper[26425]: I0217 15:15:39.633953 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/626c4f7a-59ee-45da-9198-05dd2c42ac42-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7" Feb 17 15:15:39.634061 master-0 kubenswrapper[26425]: I0217 15:15:39.633999 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-var-lock\") pod \"installer-2-master-0\" (UID: 
\"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0" Feb 17 15:15:39.634108 master-0 kubenswrapper[26425]: I0217 15:15:39.634058 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-images\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:15:39.634149 master-0 kubenswrapper[26425]: I0217 15:15:39.634116 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-certs\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:15:39.634281 master-0 kubenswrapper[26425]: I0217 15:15:39.634244 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba1306f7-029b-4d43-ba3c-5738da9148d6-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" Feb 17 15:15:39.634402 master-0 kubenswrapper[26425]: I0217 15:15:39.634370 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/626c4f7a-59ee-45da-9198-05dd2c42ac42-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7" Feb 17 15:15:39.634519 master-0 kubenswrapper[26425]: I0217 15:15:39.634436 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tk6jm\" (UniqueName: \"kubernetes.io/projected/9768ef3d-4f12-4303-98cb-56f8ebe05039-kube-api-access-tk6jm\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:15:39.634605 master-0 kubenswrapper[26425]: I0217 15:15:39.634569 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-webhook-cert\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:15:39.634605 master-0 kubenswrapper[26425]: I0217 15:15:39.634449 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/626c4f7a-59ee-45da-9198-05dd2c42ac42-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7" Feb 17 15:15:39.634819 master-0 kubenswrapper[26425]: I0217 15:15:39.634783 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-bin\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.634895 master-0 kubenswrapper[26425]: I0217 15:15:39.634857 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-images\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95" Feb 17 15:15:39.634945 
master-0 kubenswrapper[26425]: I0217 15:15:39.634874 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-cni-bin\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.634990 master-0 kubenswrapper[26425]: I0217 15:15:39.634934 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:15:39.635030 master-0 kubenswrapper[26425]: I0217 15:15:39.635008 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-systemd\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.635076 master-0 kubenswrapper[26425]: I0217 15:15:39.635064 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-systemd\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.635117 master-0 kubenswrapper[26425]: I0217 15:15:39.635091 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:39.635190 master-0 
kubenswrapper[26425]: I0217 15:15:39.635152 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f54vt\" (UniqueName: \"kubernetes.io/projected/7c393109-8c98-4a73-be1a-608038e5d094-kube-api-access-f54vt\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:39.635272 master-0 kubenswrapper[26425]: I0217 15:15:39.635238 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:15:39.635320 master-0 kubenswrapper[26425]: I0217 15:15:39.635291 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:39.635362 master-0 kubenswrapper[26425]: I0217 15:15:39.635328 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-run\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.635408 master-0 kubenswrapper[26425]: I0217 15:15:39.635387 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: 
\"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" Feb 17 15:15:39.635527 master-0 kubenswrapper[26425]: I0217 15:15:39.635411 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-run\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.635527 master-0 kubenswrapper[26425]: I0217 15:15:39.635426 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lw7x\" (UniqueName: \"kubernetes.io/projected/14723cb7-2d96-42b7-b559-70386c4c841c-kube-api-access-7lw7x\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" Feb 17 15:15:39.635615 master-0 kubenswrapper[26425]: I0217 15:15:39.635526 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-sys\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:15:39.635615 master-0 kubenswrapper[26425]: I0217 15:15:39.635550 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5w6f\" (UniqueName: \"kubernetes.io/projected/c435347a-ac01-46af-8192-9ef2d632bdfb-kube-api-access-j5w6f\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:15:39.635615 master-0 kubenswrapper[26425]: I0217 15:15:39.635590 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-node-bootstrap-token\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:15:39.635615 master-0 kubenswrapper[26425]: I0217 15:15:39.635612 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-hostroot\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.635767 master-0 kubenswrapper[26425]: I0217 15:15:39.635723 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" Feb 17 15:15:39.635959 master-0 kubenswrapper[26425]: I0217 15:15:39.635923 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-hostroot\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:39.636008 master-0 kubenswrapper[26425]: I0217 15:15:39.635975 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-textfile\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:15:39.636171 master-0 kubenswrapper[26425]: I0217 15:15:39.636042 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-os-release\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:15:39.636171 master-0 kubenswrapper[26425]: I0217 15:15:39.636068 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-node-log\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.636171 master-0 kubenswrapper[26425]: I0217 15:15:39.636108 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cx29\" (UniqueName: \"kubernetes.io/projected/784b804f-6bcf-4cbd-a19e-9b1fa244354e-kube-api-access-8cx29\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:15:39.636171 master-0 kubenswrapper[26425]: I0217 15:15:39.636122 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-textfile\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:15:39.636354 master-0 kubenswrapper[26425]: I0217 15:15:39.636196 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysctl-conf\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.636354 master-0 kubenswrapper[26425]: I0217 15:15:39.636255 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wjb95\" (UniqueName: \"kubernetes.io/projected/75486ba2-6fde-456f-8846-2af67e58d585-kube-api-access-wjb95\") pod \"multus-admission-controller-6d678b8d67-rzbff\" (UID: \"75486ba2-6fde-456f-8846-2af67e58d585\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" Feb 17 15:15:39.636354 master-0 kubenswrapper[26425]: I0217 15:15:39.636260 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-os-release\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:15:39.636354 master-0 kubenswrapper[26425]: I0217 15:15:39.636292 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c435347a-ac01-46af-8192-9ef2d632bdfb-metrics-client-ca\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:15:39.636354 master-0 kubenswrapper[26425]: I0217 15:15:39.636309 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-node-log\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.636354 master-0 kubenswrapper[26425]: I0217 15:15:39.636322 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:15:39.636354 
master-0 kubenswrapper[26425]: I0217 15:15:39.636353 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d481a79-f565-4c7f-84cc-207fc3117c23-audit-dir\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:15:39.636660 master-0 kubenswrapper[26425]: I0217 15:15:39.636365 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-sysctl-conf\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:39.636660 master-0 kubenswrapper[26425]: I0217 15:15:39.636377 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-kubelet\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:39.636660 master-0 kubenswrapper[26425]: I0217 15:15:39.636410 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:39.636660 master-0 kubenswrapper[26425]: I0217 15:15:39.636429 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d481a79-f565-4c7f-84cc-207fc3117c23-audit-dir\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 
15:15:39.636660 master-0 kubenswrapper[26425]: I0217 15:15:39.636490 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-kubelet\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.636660 master-0 kubenswrapper[26425]: I0217 15:15:39.636525 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:15:39.636660 master-0 kubenswrapper[26425]: I0217 15:15:39.636558 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-images\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:15:39.636660 master-0 kubenswrapper[26425]: I0217 15:15:39.636584 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq2mb\" (UniqueName: \"kubernetes.io/projected/2102e834-2b36-49de-a99e-c2dbe64d722f-kube-api-access-hq2mb\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:15:39.636660 master-0 kubenswrapper[26425]: I0217 15:15:39.636636 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2d4n\" (UniqueName: \"kubernetes.io/projected/b58e9d93-7683-440d-a603-9543e5455490-kube-api-access-l2d4n\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25"
Feb 17 15:15:39.636999 master-0 kubenswrapper[26425]: I0217 15:15:39.636777 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-lib-modules\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:15:39.636999 master-0 kubenswrapper[26425]: I0217 15:15:39.636831 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/14723cb7-2d96-42b7-b559-70386c4c841c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:15:39.636999 master-0 kubenswrapper[26425]: I0217 15:15:39.636870 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-kubelet\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.636999 master-0 kubenswrapper[26425]: I0217 15:15:39.636894 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-modprobe-d\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:15:39.636999 master-0 kubenswrapper[26425]: I0217 15:15:39.636941 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-var-lock\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:15:39.636999 master-0 kubenswrapper[26425]: I0217 15:15:39.636946 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-kubelet\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.637229 master-0 kubenswrapper[26425]: I0217 15:15:39.636979 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-slash\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.637229 master-0 kubenswrapper[26425]: I0217 15:15:39.637037 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"
Feb 17 15:15:39.637229 master-0 kubenswrapper[26425]: I0217 15:15:39.637048 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-lib-modules\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:15:39.637229 master-0 kubenswrapper[26425]: I0217 15:15:39.637082 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-slash\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.637229 master-0 kubenswrapper[26425]: I0217 15:15:39.637092 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-etc-modprobe-d\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:15:39.637229 master-0 kubenswrapper[26425]: I0217 15:15:39.637126 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzrmf\" (UniqueName: \"kubernetes.io/projected/7307f70e-ee5b-4f81-8155-718a02c9efe7-kube-api-access-dzrmf\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:15:39.637229 master-0 kubenswrapper[26425]: I0217 15:15:39.637155 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-rzbff\" (UID: \"75486ba2-6fde-456f-8846-2af67e58d585\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff"
Feb 17 15:15:39.637229 master-0 kubenswrapper[26425]: I0217 15:15:39.637186 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70e43034-56d0-4fb2-8886-deb00b625686-kube-api-access\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:39.637229 master-0 kubenswrapper[26425]: I0217 15:15:39.637213 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2102e834-2b36-49de-a99e-c2dbe64d722f-mcd-auth-proxy-config\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:15:39.637571 master-0 kubenswrapper[26425]: I0217 15:15:39.637247 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-log-socket\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.637571 master-0 kubenswrapper[26425]: I0217 15:15:39.637296 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-netns\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.637571 master-0 kubenswrapper[26425]: I0217 15:15:39.637324 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fj8w\" (UniqueName: \"kubernetes.io/projected/cdbde712-c8dd-4011-adcb-af895abce94c-kube-api-access-9fj8w\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8"
Feb 17 15:15:39.637571 master-0 kubenswrapper[26425]: I0217 15:15:39.637336 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-log-socket\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.637571 master-0 kubenswrapper[26425]: I0217 15:15:39.637354 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-host\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:15:39.637571 master-0 kubenswrapper[26425]: I0217 15:15:39.637381 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-bin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.637571 master-0 kubenswrapper[26425]: I0217 15:15:39.637406 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-host-run-netns\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.637571 master-0 kubenswrapper[26425]: I0217 15:15:39.637438 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-host\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt"
Feb 17 15:15:39.637571 master-0 kubenswrapper[26425]: I0217 15:15:39.637448 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:15:39.637571 master-0 kubenswrapper[26425]: I0217 15:15:39.637522 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-host-var-lib-cni-bin\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.637571 master-0 kubenswrapper[26425]: I0217 15:15:39.637530 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-systemd-units\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.637571 master-0 kubenswrapper[26425]: I0217 15:15:39.637556 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-systemd-units\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.638146 master-0 kubenswrapper[26425]: I0217 15:15:39.637583 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"
Feb 17 15:15:39.638146 master-0 kubenswrapper[26425]: I0217 15:15:39.637613 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-system-cni-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:15:39.638146 master-0 kubenswrapper[26425]: I0217 15:15:39.637636 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-ovn\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.638146 master-0 kubenswrapper[26425]: I0217 15:15:39.637642 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"
Feb 17 15:15:39.638146 master-0 kubenswrapper[26425]: I0217 15:15:39.637692 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a905fb6-17d4-413b-9107-859c804ce906-run-ovn\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.638146 master-0 kubenswrapper[26425]: I0217 15:15:39.637692 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb153362-0abb-4aad-8975-532f6e72d032-system-cni-dir\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95"
Feb 17 15:15:39.638146 master-0 kubenswrapper[26425]: I0217 15:15:39.637791 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/124ba199-b79a-4e5c-8512-cc0ae50f73c8-audit-dir\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:15:39.638146 master-0 kubenswrapper[26425]: I0217 15:15:39.637664 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/124ba199-b79a-4e5c-8512-cc0ae50f73c8-audit-dir\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:15:39.638146 master-0 kubenswrapper[26425]: I0217 15:15:39.637953 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/68954d1e-2147-4465-9817-a3c04cbc19b0-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:15:39.638146 master-0 kubenswrapper[26425]: I0217 15:15:39.637979 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-auth-proxy-config\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:15:39.638146 master-0 kubenswrapper[26425]: I0217 15:15:39.638000 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-os-release\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.638146 master-0 kubenswrapper[26425]: I0217 15:15:39.638025 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8"
Feb 17 15:15:39.638643 master-0 kubenswrapper[26425]: I0217 15:15:39.638158 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/68954d1e-2147-4465-9817-a3c04cbc19b0-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:15:39.638643 master-0 kubenswrapper[26425]: I0217 15:15:39.638258 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-os-release\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl"
Feb 17 15:15:39.641614 master-0 kubenswrapper[26425]: I0217 15:15:39.641577 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Feb 17 15:15:39.649725 master-0 kubenswrapper[26425]: I0217 15:15:39.649686 26425 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 15:15:39.661266 master-0 kubenswrapper[26425]: I0217 15:15:39.656236 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 17 15:15:39.661266 master-0 kubenswrapper[26425]: I0217 15:15:39.660123 26425 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 17 15:15:39.661266 master-0 kubenswrapper[26425]: I0217 15:15:39.660176 26425 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 17 15:15:39.661266 master-0 kubenswrapper[26425]: I0217 15:15:39.660188 26425 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 17 15:15:39.661266 master-0 kubenswrapper[26425]: I0217 15:15:39.660446 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:15:39.662253 master-0 kubenswrapper[26425]: I0217 15:15:39.661766 26425 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 17 15:15:39.665090 master-0 kubenswrapper[26425]: I0217 15:15:39.665014 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:15:39.677082 master-0 kubenswrapper[26425]: I0217 15:15:39.677045 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 17 15:15:39.678380 master-0 kubenswrapper[26425]: I0217 15:15:39.678332 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:15:39.679554 master-0 kubenswrapper[26425]: I0217 15:15:39.679511 26425 scope.go:117] "RemoveContainer" containerID="e6e0c56b68d88e13c98f68fd19514701fbb95e0c18c904b865481a0f5ad00f23"
Feb 17 15:15:39.680234 master-0 kubenswrapper[26425]: I0217 15:15:39.680198 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a905fb6-17d4-413b-9107-859c804ce906-ovn-node-metrics-cert\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:39.697371 master-0 kubenswrapper[26425]: I0217 15:15:39.697308 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Feb 17 15:15:39.705593 master-0 kubenswrapper[26425]: I0217 15:15:39.705540 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/68954d1e-2147-4465-9817-a3c04cbc19b0-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:15:39.715870 master-0 kubenswrapper[26425]: I0217 15:15:39.715816 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 17 15:15:39.737367 master-0 kubenswrapper[26425]: I0217 15:15:39.737287 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 17 15:15:39.739728 master-0 kubenswrapper[26425]: I0217 15:15:39.739615 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:15:39.740000 master-0 kubenswrapper[26425]: I0217 15:15:39.739956 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:39.740137 master-0 kubenswrapper[26425]: I0217 15:15:39.740074 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:39.740295 master-0 kubenswrapper[26425]: I0217 15:15:39.740247 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:15:39.741432 master-0 kubenswrapper[26425]: I0217 15:15:39.741396 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:15:39.741612 master-0 kubenswrapper[26425]: I0217 15:15:39.741583 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14723cb7-2d96-42b7-b559-70386c4c841c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:15:39.741766 master-0 kubenswrapper[26425]: I0217 15:15:39.741626 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:39.741766 master-0 kubenswrapper[26425]: I0217 15:15:39.741697 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-var-lock\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:39.742417 master-0 kubenswrapper[26425]: I0217 15:15:39.742014 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-sys\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:15:39.742417 master-0 kubenswrapper[26425]: I0217 15:15:39.742212 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-var-lock\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:39.742417 master-0 kubenswrapper[26425]: I0217 15:15:39.742225 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0"
Feb 17 15:15:39.742417 master-0 kubenswrapper[26425]: I0217 15:15:39.742278 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-sys\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:15:39.742417 master-0 kubenswrapper[26425]: I0217 15:15:39.742345 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14723cb7-2d96-42b7-b559-70386c4c841c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c"
Feb 17 15:15:39.742763 master-0 kubenswrapper[26425]: I0217 15:15:39.742428 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-var-lock\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:15:39.742763 master-0 kubenswrapper[26425]: I0217 15:15:39.742393 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-var-lock\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:15:39.742893 master-0 kubenswrapper[26425]: I0217 15:15:39.742808 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-root\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:15:39.743984 master-0 kubenswrapper[26425]: I0217 15:15:39.742957 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-wtmp\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:15:39.743984 master-0 kubenswrapper[26425]: I0217 15:15:39.743021 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2102e834-2b36-49de-a99e-c2dbe64d722f-rootfs\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:15:39.743984 master-0 kubenswrapper[26425]: I0217 15:15:39.743235 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-root\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:15:39.743984 master-0 kubenswrapper[26425]: I0217 15:15:39.743287 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-wtmp\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:15:39.743984 master-0 kubenswrapper[26425]: I0217 15:15:39.743343 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2102e834-2b36-49de-a99e-c2dbe64d722f-rootfs\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:15:39.743984 master-0 kubenswrapper[26425]: I0217 15:15:39.743604 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-var-lock\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:39.743984 master-0 kubenswrapper[26425]: I0217 15:15:39.743695 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-var-lock\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:15:39.780552 master-0 kubenswrapper[26425]: I0217 15:15:39.780507 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 17 15:15:39.783569 master-0 kubenswrapper[26425]: I0217 15:15:39.783154 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 17 15:15:39.787607 master-0 kubenswrapper[26425]: I0217 15:15:39.787567 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fce9579e-7383-421e-95dd-8f8b786817f9-metrics-certs\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz"
Feb 17 15:15:39.793074 master-0 kubenswrapper[26425]: I0217 15:15:39.793034 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/68954d1e-2147-4465-9817-a3c04cbc19b0-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:15:39.795510 master-0 kubenswrapper[26425]: I0217 15:15:39.795423 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 17 15:15:39.798981 master-0 kubenswrapper[26425]: I0217 15:15:39.798946 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-encryption-config\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn"
Feb 17 15:15:39.815837 master-0 kubenswrapper[26425]: I0217 15:15:39.815800 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 17 15:15:39.824980 master-0 kubenswrapper[26425]: I0217 15:15:39.824917 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-etcd-client\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn"
Feb 17 15:15:39.836427 master-0 kubenswrapper[26425]: I0217 15:15:39.836377 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 17 15:15:39.841279 master-0 kubenswrapper[26425]: I0217 15:15:39.841237 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-serving-cert\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:15:39.845498 master-0 kubenswrapper[26425]: I0217 15:15:39.845392 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kubelet-dir\") pod \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") "
Feb 17 15:15:39.845498 master-0 kubenswrapper[26425]: I0217 15:15:39.845483 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d3daf534-9a77-49c6-964f-d402c5d5a2ac" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:15:39.845711 master-0 kubenswrapper[26425]: I0217 15:15:39.845670 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-var-lock\") pod \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") "
Feb 17 15:15:39.847406 master-0 kubenswrapper[26425]: I0217 15:15:39.847314 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-var-lock" (OuterVolumeSpecName: "var-lock") pod "d3daf534-9a77-49c6-964f-d402c5d5a2ac" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:15:39.848034 master-0 kubenswrapper[26425]: I0217 15:15:39.847992 26425 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 17 15:15:39.848103 master-0 kubenswrapper[26425]: I0217 15:15:39.848036 26425 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:15:39.855643 master-0 kubenswrapper[26425]: I0217 15:15:39.855570 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 17 15:15:39.865818 master-0 kubenswrapper[26425]: I0217 15:15:39.865773 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-etcd-client\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:15:39.877810 master-0 kubenswrapper[26425]: I0217 15:15:39.877730 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 17 15:15:39.886113 master-0 kubenswrapper[26425]: I0217 15:15:39.885917 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d481a79-f565-4c7f-84cc-207fc3117c23-serving-cert\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn"
Feb 17 15:15:39.896202 master-0 kubenswrapper[26425]: I0217 15:15:39.896180 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 17 15:15:39.899630 master-0 kubenswrapper[26425]: I0217 15:15:39.899547 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-etcd-serving-ca\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:15:39.920190 master-0 kubenswrapper[26425]: I0217 15:15:39.920115 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 17 15:15:39.926857 master-0 kubenswrapper[26425]: I0217 15:15:39.926818 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/124ba199-b79a-4e5c-8512-cc0ae50f73c8-encryption-config\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:15:39.936572 master-0 kubenswrapper[26425]: I0217 15:15:39.936522 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 17 15:15:39.943268 master-0 kubenswrapper[26425]: I0217 15:15:39.943219 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-audit-policies\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:15:39.956474 master-0 kubenswrapper[26425]: I0217 15:15:39.956424 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 17 15:15:39.975523 master-0 kubenswrapper[26425]: I0217 15:15:39.975420 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 17 15:15:39.982171 master-0
kubenswrapper[26425]: I0217 15:15:39.982138 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124ba199-b79a-4e5c-8512-cc0ae50f73c8-trusted-ca-bundle\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:15:39.996552 master-0 kubenswrapper[26425]: I0217 15:15:39.996516 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 15:15:40.016424 master-0 kubenswrapper[26425]: I0217 15:15:40.016385 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 15:15:40.022979 master-0 kubenswrapper[26425]: I0217 15:15:40.022927 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-image-import-ca\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:15:40.035676 master-0 kubenswrapper[26425]: I0217 15:15:40.035603 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 15:15:40.037234 master-0 kubenswrapper[26425]: I0217 15:15:40.037193 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-config\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:15:40.062381 master-0 kubenswrapper[26425]: I0217 15:15:40.062254 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 15:15:40.062800 master-0 kubenswrapper[26425]: I0217 15:15:40.062745 
26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-trusted-ca-bundle\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:15:40.072032 master-0 kubenswrapper[26425]: I0217 15:15:40.071951 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 17 15:15:40.076206 master-0 kubenswrapper[26425]: I0217 15:15:40.076158 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 15:15:40.096507 master-0 kubenswrapper[26425]: I0217 15:15:40.096448 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 15:15:40.116827 master-0 kubenswrapper[26425]: I0217 15:15:40.116786 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 15:15:40.120081 master-0 kubenswrapper[26425]: I0217 15:15:40.120034 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-audit\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:15:40.136897 master-0 kubenswrapper[26425]: I0217 15:15:40.136850 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 15:15:40.146035 master-0 kubenswrapper[26425]: I0217 15:15:40.145983 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d481a79-f565-4c7f-84cc-207fc3117c23-etcd-serving-ca\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " 
pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:15:40.159469 master-0 kubenswrapper[26425]: I0217 15:15:40.159400 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 15:15:40.174987 master-0 kubenswrapper[26425]: I0217 15:15:40.174901 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:15:40.177233 master-0 kubenswrapper[26425]: I0217 15:15:40.177200 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 15:15:40.178476 master-0 kubenswrapper[26425]: I0217 15:15:40.178429 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:15:40.181555 master-0 kubenswrapper[26425]: I0217 15:15:40.181494 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-signing-cabundle\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:15:40.197154 master-0 kubenswrapper[26425]: I0217 15:15:40.197111 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 15:15:40.220701 master-0 kubenswrapper[26425]: I0217 15:15:40.220637 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 15:15:40.240868 master-0 kubenswrapper[26425]: I0217 15:15:40.240803 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 15:15:40.249696 master-0 kubenswrapper[26425]: I0217 15:15:40.249614 26425 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-signing-key\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:15:40.255555 master-0 kubenswrapper[26425]: I0217 15:15:40.255492 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 15:15:40.259876 master-0 kubenswrapper[26425]: I0217 15:15:40.259247 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d317dcb-ea6a-4066-b197-5ee960dec01a-config-volume\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " pod="openshift-dns/dns-default-wxhtx" Feb 17 15:15:40.276818 master-0 kubenswrapper[26425]: I0217 15:15:40.276764 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 15:15:40.301479 master-0 kubenswrapper[26425]: I0217 15:15:40.301010 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 15:15:40.316564 master-0 kubenswrapper[26425]: I0217 15:15:40.316343 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 15:15:40.336871 master-0 kubenswrapper[26425]: I0217 15:15:40.336815 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 15:15:40.355916 master-0 kubenswrapper[26425]: I0217 15:15:40.355877 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 15:15:40.364223 master-0 kubenswrapper[26425]: I0217 15:15:40.364161 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/8d317dcb-ea6a-4066-b197-5ee960dec01a-metrics-tls\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " pod="openshift-dns/dns-default-wxhtx" Feb 17 15:15:40.380149 master-0 kubenswrapper[26425]: I0217 15:15:40.380069 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 15:15:40.388037 master-0 kubenswrapper[26425]: I0217 15:15:40.387995 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2d6e329-7ad8-4fc2-accc-66827f11743d-service-ca-bundle\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:15:40.404901 master-0 kubenswrapper[26425]: I0217 15:15:40.404742 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 15:15:40.405722 master-0 kubenswrapper[26425]: I0217 15:15:40.405678 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-default-certificate\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:15:40.418241 master-0 kubenswrapper[26425]: I0217 15:15:40.418198 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 15:15:40.420308 master-0 kubenswrapper[26425]: I0217 15:15:40.420171 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 17 15:15:40.425378 master-0 kubenswrapper[26425]: I0217 15:15:40.425213 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-serving-cert\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:15:40.436674 master-0 kubenswrapper[26425]: I0217 15:15:40.436623 26425 request.go:700] Waited for 1.012514082s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 17 15:15:40.437932 master-0 kubenswrapper[26425]: I0217 15:15:40.437895 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 15:15:40.470892 master-0 kubenswrapper[26425]: I0217 15:15:40.470847 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 15:15:40.473381 master-0 kubenswrapper[26425]: I0217 15:15:40.473364 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-config\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:15:40.482506 master-0 kubenswrapper[26425]: I0217 15:15:40.482098 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 15:15:40.493134 master-0 kubenswrapper[26425]: I0217 15:15:40.492333 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-config\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" 
Feb 17 15:15:40.496554 master-0 kubenswrapper[26425]: I0217 15:15:40.496514 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 17 15:15:40.500237 master-0 kubenswrapper[26425]: I0217 15:15:40.500195 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-stats-auth\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f"
Feb 17 15:15:40.515584 master-0 kubenswrapper[26425]: I0217 15:15:40.515510 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 15:15:40.517069 master-0 kubenswrapper[26425]: E0217 15:15:40.516956 26425 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.517157 master-0 kubenswrapper[26425]: E0217 15:15:40.517092 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-client-ca podName:3db03cef-d297-4bf7-8e52-dd0b18882d07 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.017069933 +0000 UTC m=+2.908793821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-client-ca") pod "route-controller-manager-6978b88779-vp5tv" (UID: "3db03cef-d297-4bf7-8e52-dd0b18882d07") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.522497 master-0 kubenswrapper[26425]: E0217 15:15:40.518650 26425 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.522497 master-0 kubenswrapper[26425]: E0217 15:15:40.518769 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/626c4f7a-59ee-45da-9198-05dd2c42ac42-service-ca podName:626c4f7a-59ee-45da-9198-05dd2c42ac42 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.018745665 +0000 UTC m=+2.910469493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/626c4f7a-59ee-45da-9198-05dd2c42ac42-service-ca") pod "cluster-version-operator-649c4f5445-7kdb7" (UID: "626c4f7a-59ee-45da-9198-05dd2c42ac42") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.522497 master-0 kubenswrapper[26425]: E0217 15:15:40.521168 26425 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.522497 master-0 kubenswrapper[26425]: E0217 15:15:40.521216 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.522497 master-0 kubenswrapper[26425]: E0217 15:15:40.521275 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-metrics-certs podName:a2d6e329-7ad8-4fc2-accc-66827f11743d nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.021254218 +0000 UTC m=+2.912978036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-metrics-certs") pod "router-default-864ddd5f56-g8w2f" (UID: "a2d6e329-7ad8-4fc2-accc-66827f11743d") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.522497 master-0 kubenswrapper[26425]: E0217 15:15:40.521292 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d075439c-721d-432b-b4f9-9f078132bf92-tls-certificates podName:d075439c-721d-432b-b4f9-9f078132bf92 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.021285228 +0000 UTC m=+2.913009046 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/d075439c-721d-432b-b4f9-9f078132bf92-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-nm8rs" (UID: "d075439c-721d-432b-b4f9-9f078132bf92") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.522497 master-0 kubenswrapper[26425]: E0217 15:15:40.521379 26425 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.522497 master-0 kubenswrapper[26425]: E0217 15:15:40.521520 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls podName:6b7d1adb-b23b-4702-be7d-27e818e8fd63 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.021496343 +0000 UTC m=+2.913220231 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-hr9g4" (UID: "6b7d1adb-b23b-4702-be7d-27e818e8fd63") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.522939 master-0 kubenswrapper[26425]: E0217 15:15:40.522611 26425 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.522939 master-0 kubenswrapper[26425]: E0217 15:15:40.522667 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-proxy-ca-bundles podName:e6d0ea7a-6784-4c13-ad65-6c947dbcf136 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.022653683 +0000 UTC m=+2.914377501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-proxy-ca-bundles") pod "controller-manager-b9c8fdfbc-rh9v2" (UID: "e6d0ea7a-6784-4c13-ad65-6c947dbcf136") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.522939 master-0 kubenswrapper[26425]: E0217 15:15:40.522704 26425 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.522939 master-0 kubenswrapper[26425]: E0217 15:15:40.522736 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4422676-9a70-4973-8299-7b40a66e9c96-control-plane-machine-set-operator-tls podName:b4422676-9a70-4973-8299-7b40a66e9c96 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.022725054 +0000 UTC m=+2.914448872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/b4422676-9a70-4973-8299-7b40a66e9c96-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-hmpc7" (UID: "b4422676-9a70-4973-8299-7b40a66e9c96") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.522939 master-0 kubenswrapper[26425]: E0217 15:15:40.522759 26425 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.522939 master-0 kubenswrapper[26425]: I0217 15:15:40.522764 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3db03cef-d297-4bf7-8e52-dd0b18882d07-serving-cert\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:15:40.522939 master-0 kubenswrapper[26425]: E0217 15:15:40.522779 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c97d328c-95b6-4511-aa90-531ab42b9653-cco-trusted-ca podName:c97d328c-95b6-4511-aa90-531ab42b9653 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.022773315 +0000 UTC m=+2.914497133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/c97d328c-95b6-4511-aa90-531ab42b9653-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-p8hbc" (UID: "c97d328c-95b6-4511-aa90-531ab42b9653") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.522939 master-0 kubenswrapper[26425]: E0217 15:15:40.522793 26425 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.522939 master-0 kubenswrapper[26425]: E0217 15:15:40.522820 26425 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.522939 master-0 kubenswrapper[26425]: E0217 15:15:40.522830 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert podName:c97d328c-95b6-4511-aa90-531ab42b9653 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.022821687 +0000 UTC m=+2.914545505 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-p8hbc" (UID: "c97d328c-95b6-4511-aa90-531ab42b9653") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.522939 master-0 kubenswrapper[26425]: E0217 15:15:40.522870 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/626c4f7a-59ee-45da-9198-05dd2c42ac42-serving-cert podName:626c4f7a-59ee-45da-9198-05dd2c42ac42 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.022864208 +0000 UTC m=+2.914588016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/626c4f7a-59ee-45da-9198-05dd2c42ac42-serving-cert") pod "cluster-version-operator-649c4f5445-7kdb7" (UID: "626c4f7a-59ee-45da-9198-05dd2c42ac42") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.522939 master-0 kubenswrapper[26425]: E0217 15:15:40.522935 26425 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.523301 master-0 kubenswrapper[26425]: E0217 15:15:40.522989 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-client-ca podName:e6d0ea7a-6784-4c13-ad65-6c947dbcf136 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.022982792 +0000 UTC m=+2.914706610 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-client-ca") pod "controller-manager-b9c8fdfbc-rh9v2" (UID: "e6d0ea7a-6784-4c13-ad65-6c947dbcf136") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.537755 master-0 kubenswrapper[26425]: I0217 15:15:40.536898 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 17 15:15:40.555704 master-0 kubenswrapper[26425]: I0217 15:15:40.555668 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 15:15:40.577991 master-0 kubenswrapper[26425]: I0217 15:15:40.577291 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 17 15:15:40.597479 master-0 kubenswrapper[26425]: I0217 15:15:40.596079 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 15:15:40.626516 master-0 kubenswrapper[26425]: I0217 15:15:40.624909 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:15:40.630667 master-0 kubenswrapper[26425]: E0217 15:15:40.630608 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.630927 master-0 kubenswrapper[26425]: E0217 15:15:40.630911 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-metrics-client-ca podName:8379aee6-f810-4e5f-b209-8f6cb5f87df0 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.130860381 +0000 UTC m=+3.022584189 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-metrics-client-ca") pod "telemeter-client-7fbdcd9689-spqtt" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.631030 master-0 kubenswrapper[26425]: E0217 15:15:40.631018 26425 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.631133 master-0 kubenswrapper[26425]: E0217 15:15:40.631124 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls podName:76d3da23-3347-4a5c-b328-d92671897ecc nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.131116497 +0000 UTC m=+3.022840315 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls") pod "machine-approver-8569dd85ff-f9g8s" (UID: "76d3da23-3347-4a5c-b328-d92671897ecc") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.631358 master-0 kubenswrapper[26425]: E0217 15:15:40.631347 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-aaauri1gstf68: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.631436 master-0 kubenswrapper[26425]: E0217 15:15:40.631427 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.131418914 +0000 UTC m=+3.023142732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.631545 master-0 kubenswrapper[26425]: E0217 15:15:40.631532 26425 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.631638 master-0 kubenswrapper[26425]: E0217 15:15:40.631627 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-service-ca-bundle podName:8385a176-0e12-47ef-862e-8331e6734b9c nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.131617199 +0000 UTC m=+3.023341017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-service-ca-bundle") pod "insights-operator-cb4f7b4cf-cmbjq" (UID: "8385a176-0e12-47ef-862e-8331e6734b9c") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.631714 master-0 kubenswrapper[26425]: E0217 15:15:40.631704 26425 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.631798 master-0 kubenswrapper[26425]: E0217 15:15:40.631788 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da06cfcb-7c78-4022-96b1-d858853f5adc-proxy-tls podName:da06cfcb-7c78-4022-96b1-d858853f5adc nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.131780683 +0000 UTC m=+3.023504501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/da06cfcb-7c78-4022-96b1-d858853f5adc-proxy-tls") pod "machine-config-operator-84976bb859-kmc95" (UID: "da06cfcb-7c78-4022-96b1-d858853f5adc") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.631928 master-0 kubenswrapper[26425]: E0217 15:15:40.631877 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.631992 master-0 kubenswrapper[26425]: E0217 15:15:40.631920 26425 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.632025 master-0 kubenswrapper[26425]: E0217 15:15:40.631998 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle podName:8379aee6-f810-4e5f-b209-8f6cb5f87df0 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.131970318 +0000 UTC m=+3.023694306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle") pod "telemeter-client-7fbdcd9689-spqtt" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.632064 master-0 kubenswrapper[26425]: E0217 15:15:40.632026 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.632064 master-0 kubenswrapper[26425]: E0217 15:15:40.631898 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.632064 master-0 kubenswrapper[26425]: E0217 15:15:40.632055 26425 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.632153 master-0 kubenswrapper[26425]: E0217 15:15:40.632029 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-apiservice-cert podName:b58e9d93-7683-440d-a603-9543e5455490 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.132015159 +0000 UTC m=+3.023739207 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-apiservice-cert") pod "packageserver-67d4dbd88b-szr25" (UID: "b58e9d93-7683-440d-a603-9543e5455490") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.632153 master-0 kubenswrapper[26425]: E0217 15:15:40.632096 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cdbde712-c8dd-4011-adcb-af895abce94c-metrics-client-ca podName:cdbde712-c8dd-4011-adcb-af895abce94c nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.132086571 +0000 UTC m=+3.023810609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/cdbde712-c8dd-4011-adcb-af895abce94c-metrics-client-ca") pod "openshift-state-metrics-546cc7d765-b4xl8" (UID: "cdbde712-c8dd-4011-adcb-af895abce94c") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.632153 master-0 kubenswrapper[26425]: E0217 15:15:40.632113 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-custom-resource-state-configmap podName:9d97ff4f-48eb-4d9f-9d60-3e09f0bde040 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.132104881 +0000 UTC m=+3.023828919 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-z7lzs" (UID: "9d97ff4f-48eb-4d9f-9d60-3e09f0bde040") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.632153 master-0 kubenswrapper[26425]: E0217 15:15:40.632133 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-kube-rbac-proxy-config podName:9d97ff4f-48eb-4d9f-9d60-3e09f0bde040 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.132124301 +0000 UTC m=+3.023848319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-z7lzs" (UID: "9d97ff4f-48eb-4d9f-9d60-3e09f0bde040") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.632391 master-0 kubenswrapper[26425]: E0217 15:15:40.632196 26425 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.632391 master-0 kubenswrapper[26425]: E0217 15:15:40.632233 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config podName:8379aee6-f810-4e5f-b209-8f6cb5f87df0 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.132216005 +0000 UTC m=+3.023939823 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-7fbdcd9689-spqtt" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.632391 master-0 kubenswrapper[26425]: E0217 15:15:40.632247 26425 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.632391 master-0 kubenswrapper[26425]: E0217 15:15:40.632270 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cert podName:7307f70e-ee5b-4f81-8155-718a02c9efe7 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.132263086 +0000 UTC m=+3.023986904 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cert") pod "cluster-baremetal-operator-7bc947fc7d-8qkdw" (UID: "7307f70e-ee5b-4f81-8155-718a02c9efe7") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.632391 master-0 kubenswrapper[26425]: E0217 15:15:40.632284 26425 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.632391 master-0 kubenswrapper[26425]: E0217 15:15:40.632307 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-cluster-storage-operator-serving-cert podName:ad81b5bd-2f97-4e7e-a12b-746998fa59f2 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.132301817 +0000 UTC m=+3.024025635 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-qbmw5" (UID: "ad81b5bd-2f97-4e7e-a12b-746998fa59f2") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.632391 master-0 kubenswrapper[26425]: E0217 15:15:40.632329 26425 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.632391 master-0 kubenswrapper[26425]: E0217 15:15:40.632349 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-config podName:655e4000-0ad4-4349-8c31-e0c952e4be30 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.132343888 +0000 UTC m=+3.024067706 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-config") pod "machine-api-operator-bd7dd5c46-g6fgz" (UID: "655e4000-0ad4-4349-8c31-e0c952e4be30") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.632391 master-0 kubenswrapper[26425]: E0217 15:15:40.632361 26425 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.632391 master-0 kubenswrapper[26425]: E0217 15:15:40.632380 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-tls podName:c435347a-ac01-46af-8192-9ef2d632bdfb nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.132375109 +0000 UTC m=+3.024098927 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-tls") pod "node-exporter-rttp2" (UID: "c435347a-ac01-46af-8192-9ef2d632bdfb") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.632715 master-0 kubenswrapper[26425]: E0217 15:15:40.632448 26425 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.632715 master-0 kubenswrapper[26425]: E0217 15:15:40.632487 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba1306f7-029b-4d43-ba3c-5738da9148d6-proxy-tls podName:ba1306f7-029b-4d43-ba3c-5738da9148d6 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.132481731 +0000 UTC m=+3.024205549 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/ba1306f7-029b-4d43-ba3c-5738da9148d6-proxy-tls") pod "machine-config-controller-686c884b4d-5q97f" (UID: "ba1306f7-029b-4d43-ba3c-5738da9148d6") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.632715 master-0 kubenswrapper[26425]: E0217 15:15:40.632508 26425 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.632715 master-0 kubenswrapper[26425]: E0217 15:15:40.632529 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c8646e5c-c2ce-48e6-b757-58044769f479-auth-proxy-config podName:c8646e5c-c2ce-48e6-b757-58044769f479 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.132523872 +0000 UTC m=+3.024247690 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c8646e5c-c2ce-48e6-b757-58044769f479-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-6dzpr" (UID: "c8646e5c-c2ce-48e6-b757-58044769f479") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.632715 master-0 kubenswrapper[26425]: E0217 15:15:40.632547 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.632715 master-0 kubenswrapper[26425]: E0217 15:15:40.632566 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/784b804f-6bcf-4cbd-a19e-9b1fa244354e-metrics-client-ca podName:784b804f-6bcf-4cbd-a19e-9b1fa244354e nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.132561563 +0000 UTC m=+3.024285381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/784b804f-6bcf-4cbd-a19e-9b1fa244354e-metrics-client-ca") pod "prometheus-operator-7485d645b8-nzz2j" (UID: "784b804f-6bcf-4cbd-a19e-9b1fa244354e") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.632942 master-0 kubenswrapper[26425]: E0217 15:15:40.632924 26425 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.633040 master-0 kubenswrapper[26425]: E0217 15:15:40.633023 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-config podName:7307f70e-ee5b-4f81-8155-718a02c9efe7 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.133010264 +0000 UTC m=+3.024734252 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-config") pod "cluster-baremetal-operator-7bc947fc7d-8qkdw" (UID: "7307f70e-ee5b-4f81-8155-718a02c9efe7") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.633151 master-0 kubenswrapper[26425]: E0217 15:15:40.633137 26425 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.633435 master-0 kubenswrapper[26425]: E0217 15:15:40.633243 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cluster-baremetal-operator-tls podName:7307f70e-ee5b-4f81-8155-718a02c9efe7 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.133234029 +0000 UTC m=+3.024957847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-8qkdw" (UID: "7307f70e-ee5b-4f81-8155-718a02c9efe7") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.633697 master-0 kubenswrapper[26425]: E0217 15:15:40.633649 26425 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.633784 master-0 kubenswrapper[26425]: E0217 15:15:40.633763 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-tls podName:9d97ff4f-48eb-4d9f-9d60-3e09f0bde040 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.133740233 +0000 UTC m=+3.025464251 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-z7lzs" (UID: "9d97ff4f-48eb-4d9f-9d60-3e09f0bde040") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.633841 master-0 kubenswrapper[26425]: E0217 15:15:40.633796 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.633841 master-0 kubenswrapper[26425]: E0217 15:15:40.633842 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle podName:8379aee6-f810-4e5f-b209-8f6cb5f87df0 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.133831615 +0000 UTC m=+3.025555653 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle") pod "telemeter-client-7fbdcd9689-spqtt" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.633955 master-0 kubenswrapper[26425]: E0217 15:15:40.633894 26425 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.633955 master-0 kubenswrapper[26425]: E0217 15:15:40.633931 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls podName:8379aee6-f810-4e5f-b209-8f6cb5f87df0 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.133922357 +0000 UTC m=+3.025646405 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls") pod "telemeter-client-7fbdcd9689-spqtt" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.634036 master-0 kubenswrapper[26425]: E0217 15:15:40.633966 26425 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.634036 master-0 kubenswrapper[26425]: E0217 15:15:40.634023 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-auth-proxy-config podName:76d3da23-3347-4a5c-b328-d92671897ecc nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.134011309 +0000 UTC m=+3.025735307 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-auth-proxy-config") pod "machine-approver-8569dd85ff-f9g8s" (UID: "76d3da23-3347-4a5c-b328-d92671897ecc") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.634036 master-0 kubenswrapper[26425]: E0217 15:15:40.634029 26425 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.634120 master-0 kubenswrapper[26425]: E0217 15:15:40.634045 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.634120 master-0 kubenswrapper[26425]: E0217 15:15:40.634055 26425 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.634120 master-0 
kubenswrapper[26425]: E0217 15:15:40.634069 26425 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.634120 master-0 kubenswrapper[26425]: E0217 15:15:40.634063 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client podName:8379aee6-f810-4e5f-b209-8f6cb5f87df0 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.13405444 +0000 UTC m=+3.025778488 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client") pod "telemeter-client-7fbdcd9689-spqtt" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.634120 master-0 kubenswrapper[26425]: E0217 15:15:40.634021 26425 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.634120 master-0 kubenswrapper[26425]: E0217 15:15:40.634097 26425 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.634120 master-0 kubenswrapper[26425]: E0217 15:15:40.634103 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2102e834-2b36-49de-a99e-c2dbe64d722f-proxy-tls podName:2102e834-2b36-49de-a99e-c2dbe64d722f nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.134093581 +0000 UTC m=+3.025817629 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/2102e834-2b36-49de-a99e-c2dbe64d722f-proxy-tls") pod "machine-config-daemon-r6sfp" (UID: "2102e834-2b36-49de-a99e-c2dbe64d722f") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.634578 master-0 kubenswrapper[26425]: E0217 15:15:40.634136 26425 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.634578 master-0 kubenswrapper[26425]: E0217 15:15:40.634144 26425 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.634578 master-0 kubenswrapper[26425]: E0217 15:15:40.634147 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-kube-rbac-proxy-config podName:784b804f-6bcf-4cbd-a19e-9b1fa244354e nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.134138142 +0000 UTC m=+3.025862180 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-nzz2j" (UID: "784b804f-6bcf-4cbd-a19e-9b1fa244354e") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.634578 master-0 kubenswrapper[26425]: E0217 15:15:40.634086 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.634578 master-0 kubenswrapper[26425]: E0217 15:15:40.634193 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-config podName:76d3da23-3347-4a5c-b328-d92671897ecc nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.134174903 +0000 UTC m=+3.025898951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-config") pod "machine-approver-8569dd85ff-f9g8s" (UID: "76d3da23-3347-4a5c-b328-d92671897ecc") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.634578 master-0 kubenswrapper[26425]: E0217 15:15:40.634215 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8385a176-0e12-47ef-862e-8331e6734b9c-serving-cert podName:8385a176-0e12-47ef-862e-8331e6734b9c nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.134207724 +0000 UTC m=+3.025931792 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8385a176-0e12-47ef-862e-8331e6734b9c-serving-cert") pod "insights-operator-cb4f7b4cf-cmbjq" (UID: "8385a176-0e12-47ef-862e-8331e6734b9c") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.634578 master-0 kubenswrapper[26425]: E0217 15:15:40.634240 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-kube-rbac-proxy-config podName:cdbde712-c8dd-4011-adcb-af895abce94c nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.134222754 +0000 UTC m=+3.025946812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-b4xl8" (UID: "cdbde712-c8dd-4011-adcb-af895abce94c") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.634578 master-0 kubenswrapper[26425]: E0217 15:15:40.634445 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls podName:655e4000-0ad4-4349-8c31-e0c952e4be30 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.134437049 +0000 UTC m=+3.026161097 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-g6fgz" (UID: "655e4000-0ad4-4349-8c31-e0c952e4be30") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.634578 master-0 kubenswrapper[26425]: E0217 15:15:40.634502 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-images podName:655e4000-0ad4-4349-8c31-e0c952e4be30 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.13445317 +0000 UTC m=+3.026218099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-images") pod "machine-api-operator-bd7dd5c46-g6fgz" (UID: "655e4000-0ad4-4349-8c31-e0c952e4be30") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.634578 master-0 kubenswrapper[26425]: E0217 15:15:40.634518 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.134511211 +0000 UTC m=+3.026235259 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.634578 master-0 kubenswrapper[26425]: E0217 15:15:40.634529 26425 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.634578 master-0 kubenswrapper[26425]: E0217 15:15:40.634530 26425 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:40.634578 master-0 kubenswrapper[26425]: E0217 15:15:40.634575 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba1306f7-029b-4d43-ba3c-5738da9148d6-mcc-auth-proxy-config podName:ba1306f7-029b-4d43-ba3c-5738da9148d6 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.134554412 +0000 UTC m=+3.026278230 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ba1306f7-029b-4d43-ba3c-5738da9148d6-mcc-auth-proxy-config") pod "machine-config-controller-686c884b4d-5q97f" (UID: "ba1306f7-029b-4d43-ba3c-5738da9148d6") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:40.634578 master-0 kubenswrapper[26425]: E0217 15:15:40.634599 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-certs podName:9768ef3d-4f12-4303-98cb-56f8ebe05039 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.134584913 +0000 UTC m=+3.026308931 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-certs") pod "machine-config-server-l576h" (UID: "9768ef3d-4f12-4303-98cb-56f8ebe05039") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.635112 master-0 kubenswrapper[26425]: E0217 15:15:40.635094 26425 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.635199 master-0 kubenswrapper[26425]: E0217 15:15:40.635189 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-images podName:da06cfcb-7c78-4022-96b1-d858853f5adc nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.135179028 +0000 UTC m=+3.026902846 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-images") pod "machine-config-operator-84976bb859-kmc95" (UID: "da06cfcb-7c78-4022-96b1-d858853f5adc") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.635255 master-0 kubenswrapper[26425]: E0217 15:15:40.635214 26425 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.635336 master-0 kubenswrapper[26425]: E0217 15:15:40.635326 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls podName:8379aee6-f810-4e5f-b209-8f6cb5f87df0 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.135317532 +0000 UTC m=+3.027041350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls") pod "telemeter-client-7fbdcd9689-spqtt" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.635397 master-0 kubenswrapper[26425]: E0217 15:15:40.635283 26425 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.635492 master-0 kubenswrapper[26425]: E0217 15:15:40.635480 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert podName:6d56f334-6c7b-4c92-9665-56300d44f9a3 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.135472795 +0000 UTC m=+3.027196613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert") pod "ingress-canary-6bhf8" (UID: "6d56f334-6c7b-4c92-9665-56300d44f9a3") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.635568 master-0 kubenswrapper[26425]: E0217 15:15:40.635531 26425 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.635656 master-0 kubenswrapper[26425]: E0217 15:15:40.635642 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-auth-proxy-config podName:14723cb7-2d96-42b7-b559-70386c4c841c nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.135633419 +0000 UTC m=+3.027357427 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" (UID: "14723cb7-2d96-42b7-b559-70386c4c841c") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.635730 master-0 kubenswrapper[26425]: E0217 15:15:40.635393 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.635806 master-0 kubenswrapper[26425]: E0217 15:15:40.635795 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.135788263 +0000 UTC m=+3.027512081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.635893 master-0 kubenswrapper[26425]: E0217 15:15:40.635864 26425 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.635933 master-0 kubenswrapper[26425]: E0217 15:15:40.635912 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert podName:c8646e5c-c2ce-48e6-b757-58044769f479 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.135904366 +0000 UTC m=+3.027628184 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert") pod "cluster-autoscaler-operator-67fd9768b5-6dzpr" (UID: "c8646e5c-c2ce-48e6-b757-58044769f479") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.635933 master-0 kubenswrapper[26425]: E0217 15:15:40.635405 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.635996 master-0 kubenswrapper[26425]: E0217 15:15:40.635951 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls podName:784b804f-6bcf-4cbd-a19e-9b1fa244354e nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.135944047 +0000 UTC m=+3.027668065 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-nzz2j" (UID: "784b804f-6bcf-4cbd-a19e-9b1fa244354e") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.635996 master-0 kubenswrapper[26425]: E0217 15:15:40.635291 26425 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.635996 master-0 kubenswrapper[26425]: E0217 15:15:40.635988 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-webhook-cert podName:b58e9d93-7683-440d-a603-9543e5455490 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.135982368 +0000 UTC m=+3.027706186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-webhook-cert") pod "packageserver-67d4dbd88b-szr25" (UID: "b58e9d93-7683-440d-a603-9543e5455490") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.637043 master-0 kubenswrapper[26425]: E0217 15:15:40.636184 26425 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.637043 master-0 kubenswrapper[26425]: E0217 15:15:40.636238 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-images podName:14723cb7-2d96-42b7-b559-70386c4c841c nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.136225384 +0000 UTC m=+3.027949202 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-images") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" (UID: "14723cb7-2d96-42b7-b559-70386c4c841c") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.637043 master-0 kubenswrapper[26425]: I0217 15:15:40.636373 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 17 15:15:40.637232 master-0 kubenswrapper[26425]: E0217 15:15:40.637218 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.637321 master-0 kubenswrapper[26425]: E0217 15:15:40.637311 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-metrics-client-ca podName:9d97ff4f-48eb-4d9f-9d60-3e09f0bde040 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.137298921 +0000 UTC m=+3.029022739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-metrics-client-ca") pod "kube-state-metrics-7cc9598d54-z7lzs" (UID: "9d97ff4f-48eb-4d9f-9d60-3e09f0bde040") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639492 26425 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639527 26425 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639573 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14723cb7-2d96-42b7-b559-70386c4c841c-cloud-controller-manager-operator-tls podName:14723cb7-2d96-42b7-b559-70386c4c841c nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.139557077 +0000 UTC m=+3.031281015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/14723cb7-2d96-42b7-b559-70386c4c841c-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" (UID: "14723cb7-2d96-42b7-b559-70386c4c841c") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639601 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-node-bootstrap-token podName:9768ef3d-4f12-4303-98cb-56f8ebe05039 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.139584478 +0000 UTC m=+3.031308486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-node-bootstrap-token") pod "machine-config-server-l576h" (UID: "9768ef3d-4f12-4303-98cb-56f8ebe05039") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639622 26425 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639631 26425 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639654 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-images podName:7307f70e-ee5b-4f81-8155-718a02c9efe7 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.13964636 +0000 UTC m=+3.031370398 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-images") pod "cluster-baremetal-operator-7bc947fc7d-8qkdw" (UID: "7307f70e-ee5b-4f81-8155-718a02c9efe7") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639667 26425 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639687 26425 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639699 26425 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639713 26425 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639729 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639767 26425 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639776 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639788 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639670 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-auth-proxy-config podName:da06cfcb-7c78-4022-96b1-d858853f5adc nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.13966178 +0000 UTC m=+3.031385838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-auth-proxy-config") pod "machine-config-operator-84976bb859-kmc95" (UID: "da06cfcb-7c78-4022-96b1-d858853f5adc") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639877 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls podName:cdbde712-c8dd-4011-adcb-af895abce94c nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.139865685 +0000 UTC m=+3.031589703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-b4xl8" (UID: "cdbde712-c8dd-4011-adcb-af895abce94c") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639897 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs podName:75486ba2-6fde-456f-8846-2af67e58d585 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.139890135 +0000 UTC m=+3.031614173 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs") pod "multus-admission-controller-6d678b8d67-rzbff" (UID: "75486ba2-6fde-456f-8846-2af67e58d585") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639922 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-kube-rbac-proxy-config podName:c435347a-ac01-46af-8192-9ef2d632bdfb nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.139914406 +0000 UTC m=+3.031638464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-kube-rbac-proxy-config") pod "node-exporter-rttp2" (UID: "c435347a-ac01-46af-8192-9ef2d632bdfb") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639939 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.139931026 +0000 UTC m=+3.031655084 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639958 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-trusted-ca-bundle podName:8385a176-0e12-47ef-862e-8331e6734b9c nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.139947897 +0000 UTC m=+3.031671945 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-cmbjq" (UID: "8385a176-0e12-47ef-862e-8331e6734b9c") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639972 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.139964577 +0000 UTC m=+3.031688615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.639987 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c435347a-ac01-46af-8192-9ef2d632bdfb-metrics-client-ca podName:c435347a-ac01-46af-8192-9ef2d632bdfb nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.139980278 +0000 UTC m=+3.031704316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/c435347a-ac01-46af-8192-9ef2d632bdfb-metrics-client-ca") pod "node-exporter-rttp2" (UID: "c435347a-ac01-46af-8192-9ef2d632bdfb") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.642768 master-0 kubenswrapper[26425]: E0217 15:15:40.640003 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2102e834-2b36-49de-a99e-c2dbe64d722f-mcd-auth-proxy-config podName:2102e834-2b36-49de-a99e-c2dbe64d722f nodeName:}" failed. No retries permitted until 2026-02-17 15:15:41.139995548 +0000 UTC m=+3.031719606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/2102e834-2b36-49de-a99e-c2dbe64d722f-mcd-auth-proxy-config") pod "machine-config-daemon-r6sfp" (UID: "2102e834-2b36-49de-a99e-c2dbe64d722f") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:40.662548 master-0 kubenswrapper[26425]: I0217 15:15:40.655862 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 15:15:40.675410 master-0 kubenswrapper[26425]: I0217 15:15:40.675361 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:15:40.675955 master-0 kubenswrapper[26425]: I0217 15:15:40.675930 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nclxg_22a30079-d7fc-49cf-882e-1c5022cb5bf6/ingress-operator/3.log"
Feb 17 15:15:40.678947 master-0 kubenswrapper[26425]: I0217 15:15:40.676386 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:15:40.695442 master-0 kubenswrapper[26425]: I0217 15:15:40.695409 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 15:15:40.723787 master-0 kubenswrapper[26425]: I0217 15:15:40.723739 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 15:15:40.738061 master-0 kubenswrapper[26425]: I0217 15:15:40.738006 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-c8lzf"
Feb 17 15:15:40.759517 master-0 kubenswrapper[26425]: I0217 15:15:40.758937 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-dz667"
Feb 17 15:15:40.775505 master-0 kubenswrapper[26425]: I0217 15:15:40.775435 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-t5n74"
Feb 17 15:15:40.795669 master-0 kubenswrapper[26425]: I0217 15:15:40.795617 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-fg558"
Feb 17 15:15:40.816201 master-0 kubenswrapper[26425]: I0217 15:15:40.816133 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 17 15:15:40.835584 master-0 kubenswrapper[26425]: I0217 15:15:40.835438 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 17 15:15:40.858565 master-0 kubenswrapper[26425]: I0217 15:15:40.858388 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 17 15:15:40.876660 master-0 kubenswrapper[26425]: I0217 15:15:40.876613 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 17 15:15:40.903057 master-0 kubenswrapper[26425]: I0217 15:15:40.902985 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 17 15:15:40.923522 master-0 kubenswrapper[26425]: I0217 15:15:40.920014 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-bw92c"
Feb 17 15:15:40.935735 master-0 kubenswrapper[26425]: I0217 15:15:40.935685 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 17 15:15:40.961512 master-0 kubenswrapper[26425]: I0217 15:15:40.961237 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 17 15:15:40.976611 master-0 kubenswrapper[26425]: I0217 15:15:40.975584 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 17 15:15:40.999021 master-0 kubenswrapper[26425]: I0217 15:15:40.998956 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7f2w9"
Feb 17 15:15:41.019833 master-0 kubenswrapper[26425]: I0217 15:15:41.019776 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 17 15:15:41.041981 master-0 kubenswrapper[26425]: I0217 15:15:41.041945 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-kcv7p"
Feb 17 15:15:41.059768 master-0 kubenswrapper[26425]: I0217 15:15:41.059726 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 17 15:15:41.077128 master-0 kubenswrapper[26425]: I0217 15:15:41.077085 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 17 15:15:41.091358 master-0 kubenswrapper[26425]: I0217 15:15:41.091257 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-client-ca\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:15:41.091614 master-0 kubenswrapper[26425]: I0217 15:15:41.091580 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/626c4f7a-59ee-45da-9198-05dd2c42ac42-service-ca\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:15:41.091740 master-0 kubenswrapper[26425]: I0217 15:15:41.091712 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-metrics-certs\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f"
Feb 17 15:15:41.091819 master-0 kubenswrapper[26425]: I0217 15:15:41.091795 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d075439c-721d-432b-b4f9-9f078132bf92-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-nm8rs\" (UID: \"d075439c-721d-432b-b4f9-9f078132bf92\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs"
Feb 17 15:15:41.091998 master-0 kubenswrapper[26425]: I0217 15:15:41.091968 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4"
Feb 17 15:15:41.092040 master-0 kubenswrapper[26425]: I0217 15:15:41.092021 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c97d328c-95b6-4511-aa90-531ab42b9653-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc"
Feb 17 15:15:41.092122 master-0 kubenswrapper[26425]: I0217 15:15:41.092101 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-proxy-ca-bundles\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"
Feb 17 15:15:41.092161 master-0 kubenswrapper[26425]: I0217 15:15:41.092135 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4422676-9a70-4973-8299-7b40a66e9c96-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-hmpc7\" (UID: \"b4422676-9a70-4973-8299-7b40a66e9c96\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7"
Feb 17 15:15:41.092340 master-0 kubenswrapper[26425]: I0217 15:15:41.092322 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-client-ca\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"
Feb 17 15:15:41.092374 master-0 kubenswrapper[26425]: I0217 15:15:41.092348 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc"
Feb 17 15:15:41.092420 master-0 kubenswrapper[26425]: I0217 15:15:41.092403 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/626c4f7a-59ee-45da-9198-05dd2c42ac42-serving-cert\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:15:41.092947 master-0 kubenswrapper[26425]: I0217 15:15:41.092920 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/626c4f7a-59ee-45da-9198-05dd2c42ac42-serving-cert\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:15:41.093120 master-0 kubenswrapper[26425]: I0217 15:15:41.093095 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-client-ca\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:15:41.093251 master-0 kubenswrapper[26425]: I0217 15:15:41.093226 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/626c4f7a-59ee-45da-9198-05dd2c42ac42-service-ca\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7"
Feb 17 15:15:41.093411 master-0 kubenswrapper[26425]: I0217 15:15:41.093387 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2d6e329-7ad8-4fc2-accc-66827f11743d-metrics-certs\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f"
Feb 17 15:15:41.093587 master-0 kubenswrapper[26425]: I0217 15:15:41.093561 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d075439c-721d-432b-b4f9-9f078132bf92-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-nm8rs\" (UID: \"d075439c-721d-432b-b4f9-9f078132bf92\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs"
Feb 17 15:15:41.093847 master-0 kubenswrapper[26425]: I0217 15:15:41.093821 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c97d328c-95b6-4511-aa90-531ab42b9653-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc"
Feb 17 15:15:41.094114 master-0 kubenswrapper[26425]: I0217 15:15:41.094082 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-proxy-ca-bundles\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"
Feb 17 15:15:41.094329 master-0 kubenswrapper[26425]: I0217 15:15:41.094300 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4422676-9a70-4973-8299-7b40a66e9c96-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-hmpc7\" (UID: \"b4422676-9a70-4973-8299-7b40a66e9c96\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7"
Feb 17 15:15:41.094549 master-0 kubenswrapper[26425]: I0217 15:15:41.094520 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-client-ca\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"
Feb 17 15:15:41.094768 master-0 kubenswrapper[26425]: I0217 15:15:41.094738 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c97d328c-95b6-4511-aa90-531ab42b9653-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc"
Feb 17 15:15:41.097258 master-0 kubenswrapper[26425]: I0217 15:15:41.097222 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 17 15:15:41.103847 master-0 kubenswrapper[26425]: I0217 15:15:41.103808 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7d1adb-b23b-4702-be7d-27e818e8fd63-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4"
Feb 17 15:15:41.115561 master-0 kubenswrapper[26425]: I0217 15:15:41.115520 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-dtqvr"
Feb 17 15:15:41.136263 master-0 kubenswrapper[26425]: I0217 15:15:41.136222 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 17 15:15:41.155711 master-0 kubenswrapper[26425]: I0217 15:15:41.155666 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 17 15:15:41.175849 master-0 kubenswrapper[26425]: I0217 15:15:41.175809 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-dxkwv"
Feb 17 15:15:41.193331 master-0 kubenswrapper[26425]: I0217 15:15:41.193266 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s"
Feb 17 15:15:41.193534 master-0 kubenswrapper[26425]: I0217 15:15:41.193375 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-config\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw"
Feb 17 15:15:41.193534 master-0 kubenswrapper[26425]: I0217 15:15:41.193404 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:15:41.193534 master-0 kubenswrapper[26425]: I0217 15:15:41.193445 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-qbmw5\" (UID: \"ad81b5bd-2f97-4e7e-a12b-746998fa59f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5"
Feb 17 15:15:41.193534 master-0 kubenswrapper[26425]: I0217 15:15:41.193476 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-config\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:15:41.193534 master-0 kubenswrapper[26425]: I0217 15:15:41.193496 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-tls\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:15:41.193534 master-0 kubenswrapper[26425]: I0217 15:15:41.193514 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName:
\"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-metrics-client-ca\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:41.193723 master-0 kubenswrapper[26425]: I0217 15:15:41.193549 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-config\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:15:41.193723 master-0 kubenswrapper[26425]: I0217 15:15:41.193580 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/784b804f-6bcf-4cbd-a19e-9b1fa244354e-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:15:41.193723 master-0 kubenswrapper[26425]: I0217 15:15:41.193599 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2102e834-2b36-49de-a99e-c2dbe64d722f-proxy-tls\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp" Feb 17 15:15:41.193723 master-0 kubenswrapper[26425]: I0217 15:15:41.193626 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8646e5c-c2ce-48e6-b757-58044769f479-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" Feb 17 15:15:41.193723 master-0 
kubenswrapper[26425]: I0217 15:15:41.193644 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" Feb 17 15:15:41.193723 master-0 kubenswrapper[26425]: I0217 15:15:41.193665 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:41.193723 master-0 kubenswrapper[26425]: I0217 15:15:41.193686 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8385a176-0e12-47ef-862e-8331e6734b9c-serving-cert\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq" Feb 17 15:15:41.193723 master-0 kubenswrapper[26425]: I0217 15:15:41.193725 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-apiservice-cert\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:15:41.193953 master-0 kubenswrapper[26425]: I0217 15:15:41.193754 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:15:41.193953 master-0 kubenswrapper[26425]: I0217 15:15:41.193785 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cdbde712-c8dd-4011-adcb-af895abce94c-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:15:41.193953 master-0 kubenswrapper[26425]: I0217 15:15:41.193808 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" Feb 17 15:15:41.193953 master-0 kubenswrapper[26425]: I0217 15:15:41.193835 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:41.193953 master-0 kubenswrapper[26425]: I0217 15:15:41.193860 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: 
\"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" Feb 17 15:15:41.193953 master-0 kubenswrapper[26425]: I0217 15:15:41.193878 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:15:41.193953 master-0 kubenswrapper[26425]: I0217 15:15:41.193898 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:41.193953 master-0 kubenswrapper[26425]: I0217 15:15:41.193947 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:15:41.194177 master-0 kubenswrapper[26425]: I0217 15:15:41.193966 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 
15:15:41.194177 master-0 kubenswrapper[26425]: I0217 15:15:41.193985 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-auth-proxy-config\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:15:41.194177 master-0 kubenswrapper[26425]: I0217 15:15:41.194007 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:15:41.194177 master-0 kubenswrapper[26425]: I0217 15:15:41.194026 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:41.194177 master-0 kubenswrapper[26425]: I0217 15:15:41.194050 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:15:41.194177 master-0 kubenswrapper[26425]: I0217 15:15:41.194067 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-certs\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:15:41.194177 master-0 kubenswrapper[26425]: I0217 15:15:41.194098 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-images\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:15:41.194177 master-0 kubenswrapper[26425]: I0217 15:15:41.194116 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba1306f7-029b-4d43-ba3c-5738da9148d6-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" Feb 17 15:15:41.194177 master-0 kubenswrapper[26425]: I0217 15:15:41.194144 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-images\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95" Feb 17 15:15:41.194177 master-0 kubenswrapper[26425]: I0217 15:15:41.194161 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:15:41.194177 master-0 kubenswrapper[26425]: I0217 15:15:41.194179 
26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-webhook-cert\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194205 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194233 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194252 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194285 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: 
\"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194317 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-node-bootstrap-token\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194345 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194407 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c435347a-ac01-46af-8192-9ef2d632bdfb-metrics-client-ca\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194425 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194443 26425 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194480 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194503 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-images\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194539 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/14723cb7-2d96-42b7-b559-70386c4c841c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194580 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs\") pod 
\"multus-admission-controller-6d678b8d67-rzbff\" (UID: \"75486ba2-6fde-456f-8846-2af67e58d585\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194597 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:15:41.194622 master-0 kubenswrapper[26425]: I0217 15:15:41.194623 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2102e834-2b36-49de-a99e-c2dbe64d722f-mcd-auth-proxy-config\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp" Feb 17 15:15:41.195020 master-0 kubenswrapper[26425]: I0217 15:15:41.194666 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq" Feb 17 15:15:41.195020 master-0 kubenswrapper[26425]: I0217 15:15:41.194693 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:15:41.195020 master-0 kubenswrapper[26425]: I0217 
15:15:41.194711 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-auth-proxy-config\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95" Feb 17 15:15:41.195020 master-0 kubenswrapper[26425]: I0217 15:15:41.194747 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba1306f7-029b-4d43-ba3c-5738da9148d6-proxy-tls\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" Feb 17 15:15:41.195020 master-0 kubenswrapper[26425]: I0217 15:15:41.194764 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/da06cfcb-7c78-4022-96b1-d858853f5adc-proxy-tls\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95" Feb 17 15:15:41.195020 master-0 kubenswrapper[26425]: I0217 15:15:41.194781 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq" Feb 17 15:15:41.195020 master-0 kubenswrapper[26425]: I0217 15:15:41.194820 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: 
\"kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:15:41.195020 master-0 kubenswrapper[26425]: I0217 15:15:41.194844 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:41.196092 master-0 kubenswrapper[26425]: I0217 15:15:41.195396 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" Feb 17 15:15:41.197601 master-0 kubenswrapper[26425]: I0217 15:15:41.197576 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 17 15:15:41.206707 master-0 kubenswrapper[26425]: I0217 15:15:41.206686 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7307f70e-ee5b-4f81-8155-718a02c9efe7-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" Feb 17 15:15:41.215325 master-0 kubenswrapper[26425]: I0217 15:15:41.215306 26425 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 17 15:15:41.226281 master-0 kubenswrapper[26425]: I0217 15:15:41.226231 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-config\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" Feb 17 15:15:41.237396 master-0 kubenswrapper[26425]: I0217 15:15:41.237345 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 17 15:15:41.248887 master-0 kubenswrapper[26425]: I0217 15:15:41.248854 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7307f70e-ee5b-4f81-8155-718a02c9efe7-images\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" Feb 17 15:15:41.255717 master-0 kubenswrapper[26425]: I0217 15:15:41.255682 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 17 15:15:41.265436 master-0 kubenswrapper[26425]: I0217 15:15:41.265397 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8646e5c-c2ce-48e6-b757-58044769f479-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" Feb 17 15:15:41.276326 master-0 kubenswrapper[26425]: I0217 15:15:41.276279 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-4h7qp" Feb 17 
15:15:41.296832 master-0 kubenswrapper[26425]: I0217 15:15:41.296785 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 17 15:15:41.306539 master-0 kubenswrapper[26425]: I0217 15:15:41.306503 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8646e5c-c2ce-48e6-b757-58044769f479-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" Feb 17 15:15:41.341087 master-0 kubenswrapper[26425]: I0217 15:15:41.341030 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 17 15:15:41.346707 master-0 kubenswrapper[26425]: I0217 15:15:41.346609 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq" Feb 17 15:15:41.355698 master-0 kubenswrapper[26425]: I0217 15:15:41.355667 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 15:15:41.377083 master-0 kubenswrapper[26425]: I0217 15:15:41.376985 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 17 15:15:41.396095 master-0 kubenswrapper[26425]: I0217 15:15:41.396038 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 17 15:15:41.416935 master-0 kubenswrapper[26425]: I0217 15:15:41.416886 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" 
Feb 17 15:15:41.426562 master-0 kubenswrapper[26425]: I0217 15:15:41.426520 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8385a176-0e12-47ef-862e-8331e6734b9c-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:15:41.435742 master-0 kubenswrapper[26425]: I0217 15:15:41.435702 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-7hvks"
Feb 17 15:15:41.454847 master-0 kubenswrapper[26425]: I0217 15:15:41.454785 26425 request.go:700] Waited for 1.974722869s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0
Feb 17 15:15:41.457855 master-0 kubenswrapper[26425]: I0217 15:15:41.457810 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 17 15:15:41.466750 master-0 kubenswrapper[26425]: I0217 15:15:41.466703 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/da06cfcb-7c78-4022-96b1-d858853f5adc-proxy-tls\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:15:41.476715 master-0 kubenswrapper[26425]: I0217 15:15:41.476660 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 17 15:15:41.486521 master-0 kubenswrapper[26425]: I0217 15:15:41.486445 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8385a176-0e12-47ef-862e-8331e6734b9c-serving-cert\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq"
Feb 17 15:15:41.497613 master-0 kubenswrapper[26425]: I0217 15:15:41.497562 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-lgxgp"
Feb 17 15:15:41.516213 master-0 kubenswrapper[26425]: I0217 15:15:41.516141 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 17 15:15:41.516701 master-0 kubenswrapper[26425]: I0217 15:15:41.516666 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-images\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:15:41.545796 master-0 kubenswrapper[26425]: I0217 15:15:41.545726 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 17 15:15:41.565092 master-0 kubenswrapper[26425]: I0217 15:15:41.565028 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 17 15:15:41.566836 master-0 kubenswrapper[26425]: I0217 15:15:41.566805 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2102e834-2b36-49de-a99e-c2dbe64d722f-mcd-auth-proxy-config\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:15:41.566919 master-0 kubenswrapper[26425]: I0217 15:15:41.566874 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/da06cfcb-7c78-4022-96b1-d858853f5adc-auth-proxy-config\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95"
Feb 17 15:15:41.567035 master-0 kubenswrapper[26425]: I0217 15:15:41.566983 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba1306f7-029b-4d43-ba3c-5738da9148d6-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f"
Feb 17 15:15:41.575554 master-0 kubenswrapper[26425]: I0217 15:15:41.575494 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-tphvr"
Feb 17 15:15:41.596259 master-0 kubenswrapper[26425]: I0217 15:15:41.596199 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 17 15:15:41.596658 master-0 kubenswrapper[26425]: I0217 15:15:41.596629 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-apiservice-cert\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25"
Feb 17 15:15:41.606169 master-0 kubenswrapper[26425]: I0217 15:15:41.606041 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b58e9d93-7683-440d-a603-9543e5455490-webhook-cert\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25"
Feb 17 15:15:41.615692 master-0 kubenswrapper[26425]: I0217 15:15:41.615637 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-6c645"
Feb 17 15:15:41.635657 master-0 kubenswrapper[26425]: I0217 15:15:41.635609 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-mv24c"
Feb 17 15:15:41.655899 master-0 kubenswrapper[26425]: I0217 15:15:41.655828 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-4cctd"
Feb 17 15:15:41.676529 master-0 kubenswrapper[26425]: I0217 15:15:41.676448 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 17 15:15:41.687281 master-0 kubenswrapper[26425]: I0217 15:15:41.687232 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba1306f7-029b-4d43-ba3c-5738da9148d6-proxy-tls\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f"
Feb 17 15:15:41.696330 master-0 kubenswrapper[26425]: I0217 15:15:41.696282 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 17 15:15:41.705767 master-0 kubenswrapper[26425]: I0217 15:15:41.705726 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-qbmw5\" (UID: \"ad81b5bd-2f97-4e7e-a12b-746998fa59f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5"
Feb 17 15:15:41.716913 master-0 kubenswrapper[26425]: I0217 15:15:41.716869 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 17 15:15:41.725665 master-0 kubenswrapper[26425]: I0217 15:15:41.725622 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2102e834-2b36-49de-a99e-c2dbe64d722f-proxy-tls\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp"
Feb 17 15:15:41.736520 master-0 kubenswrapper[26425]: I0217 15:15:41.736452 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 17 15:15:41.746651 master-0 kubenswrapper[26425]: I0217 15:15:41.746596 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j"
Feb 17 15:15:41.755953 master-0 kubenswrapper[26425]: I0217 15:15:41.755911 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-8gftr"
Feb 17 15:15:41.776388 master-0 kubenswrapper[26425]: I0217 15:15:41.776342 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-r65rc"
Feb 17 15:15:41.795938 master-0 kubenswrapper[26425]: I0217 15:15:41.795887 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 17 15:15:41.796369 master-0 kubenswrapper[26425]: I0217 15:15:41.796343 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-certs\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h"
Feb 17 15:15:41.816244 master-0 kubenswrapper[26425]: I0217 15:15:41.816190 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 17 15:15:41.826742 master-0 kubenswrapper[26425]: I0217 15:15:41.826707 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9768ef3d-4f12-4303-98cb-56f8ebe05039-node-bootstrap-token\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h"
Feb 17 15:15:41.835043 master-0 kubenswrapper[26425]: I0217 15:15:41.834994 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 17 15:15:41.836568 master-0 kubenswrapper[26425]: I0217 15:15:41.836510 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/784b804f-6bcf-4cbd-a19e-9b1fa244354e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j"
Feb 17 15:15:41.855932 master-0 kubenswrapper[26425]: I0217 15:15:41.855880 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 17 15:15:41.856352 master-0 kubenswrapper[26425]: I0217 15:15:41.856275 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c435347a-ac01-46af-8192-9ef2d632bdfb-metrics-client-ca\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2"
Feb 17 15:15:41.865972 master-0 kubenswrapper[26425]: I0217 15:15:41.865926 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/784b804f-6bcf-4cbd-a19e-9b1fa244354e-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j"
Feb 17 15:15:41.866098 master-0 kubenswrapper[26425]: I0217 15:15:41.865960 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cdbde712-c8dd-4011-adcb-af895abce94c-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8"
Feb 17 15:15:41.866098 master-0 kubenswrapper[26425]: I0217 15:15:41.866047 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-metrics-client-ca\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"
Feb 17 15:15:41.866179 master-0 kubenswrapper[26425]: I0217 15:15:41.866167 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"
Feb 17 15:15:41.875983 master-0 kubenswrapper[26425]: I0217 15:15:41.875943 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 17 15:15:41.876149 master-0 kubenswrapper[26425]: I0217 15:15:41.876123 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-images\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:15:41.895715 master-0 kubenswrapper[26425]: I0217 15:15:41.895656 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-t9g75"
Feb 17 15:15:41.916274 master-0 kubenswrapper[26425]: I0217 15:15:41.916207 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 17 15:15:41.916738 master-0 kubenswrapper[26425]: I0217 15:15:41.916708 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/655e4000-0ad4-4349-8c31-e0c952e4be30-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:15:41.936952 master-0 kubenswrapper[26425]: I0217 15:15:41.936888 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 17 15:15:41.946527 master-0 kubenswrapper[26425]: I0217 15:15:41.946453 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655e4000-0ad4-4349-8c31-e0c952e4be30-config\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz"
Feb 17 15:15:41.955909 master-0 kubenswrapper[26425]: I0217 15:15:41.955867 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-kjdkm"
Feb 17 15:15:41.976104 master-0 kubenswrapper[26425]: I0217 15:15:41.976034 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 17 15:15:41.987006 master-0 kubenswrapper[26425]: I0217 15:15:41.986936 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-auth-proxy-config\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s"
Feb 17 15:15:41.996170 master-0 kubenswrapper[26425]: I0217 15:15:41.996117 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 17 15:15:42.005908 master-0 kubenswrapper[26425]: I0217 15:15:42.005854 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76d3da23-3347-4a5c-b328-d92671897ecc-machine-approver-tls\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s"
Feb 17 15:15:42.015392 master-0 kubenswrapper[26425]: I0217 15:15:42.015345 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 17 15:15:42.026011 master-0 kubenswrapper[26425]: I0217 15:15:42.025968 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d3da23-3347-4a5c-b328-d92671897ecc-config\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s"
Feb 17 15:15:42.035178 master-0 kubenswrapper[26425]: I0217 15:15:42.035140 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 17 15:15:42.036350 master-0 kubenswrapper[26425]: I0217 15:15:42.036314 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8"
Feb 17 15:15:42.055044 master-0 kubenswrapper[26425]: I0217 15:15:42.055007 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 17 15:15:42.076188 master-0 kubenswrapper[26425]: I0217 15:15:42.076122 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 17 15:15:42.096103 master-0 kubenswrapper[26425]: I0217 15:15:42.096029 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-dzmf4"
Feb 17 15:15:42.116619 master-0 kubenswrapper[26425]: I0217 15:15:42.116557 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-jd7jr"
Feb 17 15:15:42.117972 master-0 kubenswrapper[26425]: E0217 15:15:42.117927 26425 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 17 15:15:42.135624 master-0 kubenswrapper[26425]: I0217 15:15:42.135572 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 17 15:15:42.146507 master-0 kubenswrapper[26425]: I0217 15:15:42.146420 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"
Feb 17 15:15:42.155960 master-0 kubenswrapper[26425]: I0217 15:15:42.155920 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 17 15:15:42.166711 master-0 kubenswrapper[26425]: I0217 15:15:42.166672 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs"
Feb 17 15:15:42.176176 master-0 kubenswrapper[26425]: I0217 15:15:42.176126 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 17 15:15:42.186376 master-0 kubenswrapper[26425]: I0217 15:15:42.186322 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d56f334-6c7b-4c92-9665-56300d44f9a3-cert\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8"
Feb 17 15:15:42.195079 master-0 kubenswrapper[26425]: E0217 15:15:42.195027 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:42.195182 master-0 kubenswrapper[26425]: E0217 15:15:42.195133 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-aaauri1gstf68: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195243 master-0 kubenswrapper[26425]: E0217 15:15:42.195211 26425 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195281 master-0 kubenswrapper[26425]: E0217 15:15:42.195161 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle podName:8379aee6-f810-4e5f-b209-8f6cb5f87df0 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.195125338 +0000 UTC m=+5.086849196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle") pod "telemeter-client-7fbdcd9689-spqtt" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:42.195320 master-0 kubenswrapper[26425]: E0217 15:15:42.195300 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-tls podName:c435347a-ac01-46af-8192-9ef2d632bdfb nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.195280752 +0000 UTC m=+5.087004570 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-tls") pod "node-exporter-rttp2" (UID: "c435347a-ac01-46af-8192-9ef2d632bdfb") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195416 master-0 kubenswrapper[26425]: E0217 15:15:42.195380 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.195326443 +0000 UTC m=+5.087050281 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195530 master-0 kubenswrapper[26425]: E0217 15:15:42.195499 26425 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195642 master-0 kubenswrapper[26425]: E0217 15:15:42.195611 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195642 master-0 kubenswrapper[26425]: E0217 15:15:42.195624 26425 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195709 master-0 kubenswrapper[26425]: E0217 15:15:42.195666 26425 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:42.195709 master-0 kubenswrapper[26425]: E0217 15:15:42.195693 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config podName:8379aee6-f810-4e5f-b209-8f6cb5f87df0 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.195578269 +0000 UTC m=+5.087302207 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-7fbdcd9689-spqtt" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195709 master-0 kubenswrapper[26425]: E0217 15:15:42.195706 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195799 master-0 kubenswrapper[26425]: E0217 15:15:42.195693 26425 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195799 master-0 kubenswrapper[26425]: E0217 15:15:42.195750 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.195714722 +0000 UTC m=+5.087438640 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195799 master-0 kubenswrapper[26425]: E0217 15:15:42.195772 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls podName:8379aee6-f810-4e5f-b209-8f6cb5f87df0 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.195761943 +0000 UTC m=+5.087485891 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls") pod "telemeter-client-7fbdcd9689-spqtt" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195799 master-0 kubenswrapper[26425]: E0217 15:15:42.195787 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:42.195917 master-0 kubenswrapper[26425]: E0217 15:15:42.195817 26425 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195917 master-0 kubenswrapper[26425]: E0217 15:15:42.195818 26425 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195917 master-0 kubenswrapper[26425]: E0217 15:15:42.195823 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-auth-proxy-config podName:14723cb7-2d96-42b7-b559-70386c4c841c nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.195783264 +0000 UTC m=+5.087507222 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" (UID: "14723cb7-2d96-42b7-b559-70386c4c841c") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:42.195917 master-0 kubenswrapper[26425]: E0217 15:15:42.195790 26425 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195917 master-0 kubenswrapper[26425]: E0217 15:15:42.195884 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.195851565 +0000 UTC m=+5.087575523 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.195917 master-0 kubenswrapper[26425]: E0217 15:15:42.195904 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client podName:8379aee6-f810-4e5f-b209-8f6cb5f87df0 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.195894906 +0000 UTC m=+5.087618864 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client") pod "telemeter-client-7fbdcd9689-spqtt" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.196076 master-0 kubenswrapper[26425]: E0217 15:15:42.195928 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle podName:8379aee6-f810-4e5f-b209-8f6cb5f87df0 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.195917797 +0000 UTC m=+5.087641765 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle") pod "telemeter-client-7fbdcd9689-spqtt" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:42.196076 master-0 kubenswrapper[26425]: E0217 15:15:42.195989 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs podName:75486ba2-6fde-456f-8846-2af67e58d585 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.195961068 +0000 UTC m=+5.087684926 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs") pod "multus-admission-controller-6d678b8d67-rzbff" (UID: "75486ba2-6fde-456f-8846-2af67e58d585") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.196076 master-0 kubenswrapper[26425]: E0217 15:15:42.196003 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:42.196076 master-0 kubenswrapper[26425]: E0217 15:15:42.196018 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls podName:8379aee6-f810-4e5f-b209-8f6cb5f87df0 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.196005949 +0000 UTC m=+5.087729797 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls") pod "telemeter-client-7fbdcd9689-spqtt" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.196076 master-0 kubenswrapper[26425]: E0217 15:15:42.196039 26425 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.196076 master-0 kubenswrapper[26425]: E0217 15:15:42.196048 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14723cb7-2d96-42b7-b559-70386c4c841c-cloud-controller-manager-operator-tls podName:14723cb7-2d96-42b7-b559-70386c4c841c nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.19603191 +0000 UTC m=+5.087755758 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/14723cb7-2d96-42b7-b559-70386c4c841c-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" (UID: "14723cb7-2d96-42b7-b559-70386c4c841c") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.196257 master-0 kubenswrapper[26425]: E0217 15:15:42.196080 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:42.196257 master-0 kubenswrapper[26425]: E0217 15:15:42.196085 26425 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:42.196257 master-0 kubenswrapper[26425]: E0217 15:15:42.196092 26425 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.196257 master-0 kubenswrapper[26425]: E0217 15:15:42.196104 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-kube-rbac-proxy-config podName:c435347a-ac01-46af-8192-9ef2d632bdfb nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.196083131 +0000 UTC m=+5.087806989 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-kube-rbac-proxy-config") pod "node-exporter-rttp2" (UID: "c435347a-ac01-46af-8192-9ef2d632bdfb") : failed to sync secret cache: timed out waiting for the condition
Feb 17 15:15:42.196257 master-0 kubenswrapper[26425]: E0217 15:15:42.196133 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-custom-resource-state-configmap podName:9d97ff4f-48eb-4d9f-9d60-3e09f0bde040 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.196119352 +0000 UTC m=+5.087843310 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-z7lzs" (UID: "9d97ff4f-48eb-4d9f-9d60-3e09f0bde040") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 15:15:42.196257 master-0 kubenswrapper[26425]: E0217 15:15:42.196155 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.196145544 +0000 UTC m=+5.087869512 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:42.196257 master-0 kubenswrapper[26425]: E0217 15:15:42.196178 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls podName:cdbde712-c8dd-4011-adcb-af895abce94c nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.196167144 +0000 UTC m=+5.087891092 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-b4xl8" (UID: "cdbde712-c8dd-4011-adcb-af895abce94c") : failed to sync secret cache: timed out waiting for the condition Feb 17 15:15:42.196257 master-0 kubenswrapper[26425]: E0217 15:15:42.196193 26425 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:42.196257 master-0 kubenswrapper[26425]: E0217 15:15:42.196234 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.196220645 +0000 UTC m=+5.087944623 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:42.196257 master-0 kubenswrapper[26425]: E0217 15:15:42.196259 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-images podName:14723cb7-2d96-42b7-b559-70386c4c841c nodeName:}" failed. No retries permitted until 2026-02-17 15:15:43.196247596 +0000 UTC m=+5.087971834 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-images") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" (UID: "14723cb7-2d96-42b7-b559-70386c4c841c") : failed to sync configmap cache: timed out waiting for the condition Feb 17 15:15:42.198627 master-0 kubenswrapper[26425]: I0217 15:15:42.198596 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 17 15:15:42.216044 master-0 kubenswrapper[26425]: I0217 15:15:42.215985 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-4zhjq" Feb 17 15:15:42.236057 master-0 kubenswrapper[26425]: I0217 15:15:42.236014 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 15:15:42.256229 master-0 kubenswrapper[26425]: I0217 15:15:42.256190 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-4gx6p" Feb 17 15:15:42.276498 master-0 kubenswrapper[26425]: I0217 15:15:42.276373 26425 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 17 15:15:42.297124 master-0 kubenswrapper[26425]: I0217 15:15:42.296708 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dkdg8" Feb 17 15:15:42.315818 master-0 kubenswrapper[26425]: I0217 15:15:42.315760 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-kt686" Feb 17 15:15:42.336292 master-0 kubenswrapper[26425]: I0217 15:15:42.336232 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 17 15:15:42.356234 master-0 kubenswrapper[26425]: I0217 15:15:42.356175 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 17 15:15:42.378127 master-0 kubenswrapper[26425]: I0217 15:15:42.378012 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 17 15:15:42.395236 master-0 kubenswrapper[26425]: I0217 15:15:42.395198 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 15:15:42.416232 master-0 kubenswrapper[26425]: I0217 15:15:42.416174 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:15:42.437165 master-0 kubenswrapper[26425]: I0217 15:15:42.436527 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 17 15:15:42.454999 master-0 kubenswrapper[26425]: I0217 15:15:42.454950 26425 request.go:700] Waited for 2.968835672s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dkube-state-metrics-custom-resource-state-configmap&limit=500&resourceVersion=0 Feb 17 15:15:42.456747 master-0 kubenswrapper[26425]: I0217 15:15:42.456696 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 17 15:15:42.484600 master-0 kubenswrapper[26425]: I0217 15:15:42.484522 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 17 15:15:42.496066 master-0 kubenswrapper[26425]: I0217 15:15:42.496011 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-wc6mx" Feb 17 15:15:42.516070 master-0 kubenswrapper[26425]: I0217 15:15:42.516024 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 17 15:15:42.536570 master-0 kubenswrapper[26425]: I0217 15:15:42.536525 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 17 15:15:42.556311 master-0 kubenswrapper[26425]: I0217 15:15:42.556246 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 17 15:15:42.575827 master-0 kubenswrapper[26425]: I0217 15:15:42.575763 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 17 15:15:42.596216 master-0 kubenswrapper[26425]: I0217 15:15:42.596152 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 17 15:15:42.616140 master-0 kubenswrapper[26425]: I0217 15:15:42.616096 26425 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 15:15:42.635819 master-0 kubenswrapper[26425]: I0217 15:15:42.635732 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 15:15:42.656196 master-0 kubenswrapper[26425]: I0217 15:15:42.656145 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 17 15:15:42.676117 master-0 kubenswrapper[26425]: I0217 15:15:42.676049 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-gbdz4" Feb 17 15:15:42.697261 master-0 kubenswrapper[26425]: I0217 15:15:42.697214 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 17 15:15:42.716541 master-0 kubenswrapper[26425]: I0217 15:15:42.716481 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-aaauri1gstf68" Feb 17 15:15:42.736346 master-0 kubenswrapper[26425]: I0217 15:15:42.736271 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 17 15:15:42.755767 master-0 kubenswrapper[26425]: I0217 15:15:42.755719 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 17 15:15:42.777114 master-0 kubenswrapper[26425]: I0217 15:15:42.777055 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-crrn4" Feb 17 15:15:42.796191 master-0 kubenswrapper[26425]: I0217 15:15:42.796155 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 15:15:42.815657 master-0 kubenswrapper[26425]: I0217 15:15:42.815625 26425 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd"/"installer-sa-dockercfg-tmw8w" Feb 17 15:15:42.836885 master-0 kubenswrapper[26425]: I0217 15:15:42.836822 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 17 15:15:42.880239 master-0 kubenswrapper[26425]: I0217 15:15:42.880121 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgwfb\" (UniqueName: \"kubernetes.io/projected/4fd2c79d-1e10-4f09-8a33-c66598abc99a-kube-api-access-mgwfb\") pod \"network-operator-6fcf4c966-l24cg\" (UID: \"4fd2c79d-1e10-4f09-8a33-c66598abc99a\") " pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" Feb 17 15:15:42.893940 master-0 kubenswrapper[26425]: I0217 15:15:42.893795 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ghlk\" (UniqueName: \"kubernetes.io/projected/833c8661-28ca-463a-ac61-6edb961056e3-kube-api-access-2ghlk\") pod \"redhat-operators-wzsv7\" (UID: \"833c8661-28ca-463a-ac61-6edb961056e3\") " pod="openshift-marketplace/redhat-operators-wzsv7" Feb 17 15:15:42.920574 master-0 kubenswrapper[26425]: I0217 15:15:42.920501 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lwz4\" (UniqueName: \"kubernetes.io/projected/68954d1e-2147-4465-9817-a3c04cbc19b0-kube-api-access-4lwz4\") pod \"catalogd-controller-manager-67bc7c997f-jdfsm\" (UID: \"68954d1e-2147-4465-9817-a3c04cbc19b0\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:15:42.940921 master-0 kubenswrapper[26425]: I0217 15:15:42.940827 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx8s7\" (UniqueName: \"kubernetes.io/projected/aa267e55-eef2-447f-b2ff-57c1ec2930be-kube-api-access-nx8s7\") pod \"node-resolver-tzv2h\" (UID: \"aa267e55-eef2-447f-b2ff-57c1ec2930be\") " pod="openshift-dns/node-resolver-tzv2h" Feb 17 15:15:42.964053 master-0 kubenswrapper[26425]: I0217 
15:15:42.963973 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27gfx\" (UniqueName: \"kubernetes.io/projected/b4422676-9a70-4973-8299-7b40a66e9c96-kube-api-access-27gfx\") pod \"control-plane-machine-set-operator-d8bf84b88-hmpc7\" (UID: \"b4422676-9a70-4973-8299-7b40a66e9c96\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7" Feb 17 15:15:42.977879 master-0 kubenswrapper[26425]: I0217 15:15:42.977819 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmp42\" (UniqueName: \"kubernetes.io/projected/124ba199-b79a-4e5c-8512-cc0ae50f73c8-kube-api-access-dmp42\") pod \"apiserver-865765995-c58rq\" (UID: \"124ba199-b79a-4e5c-8512-cc0ae50f73c8\") " pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:15:42.999026 master-0 kubenswrapper[26425]: I0217 15:15:42.998960 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpmdw\" (UniqueName: \"kubernetes.io/projected/94f5fac8-582e-44a3-8dd5-c4e6e80829ef-kube-api-access-cpmdw\") pod \"redhat-marketplace-7dzgz\" (UID: \"94f5fac8-582e-44a3-8dd5-c4e6e80829ef\") " pod="openshift-marketplace/redhat-marketplace-7dzgz" Feb 17 15:15:43.019026 master-0 kubenswrapper[26425]: I0217 15:15:43.018952 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxjqf\" (UniqueName: \"kubernetes.io/projected/0c58265d-32fb-4cf0-97d8-6c9a5d37fad9-kube-api-access-gxjqf\") pod \"kube-storage-version-migrator-operator-cd5474998-tckph\" (UID: \"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" Feb 17 15:15:43.048083 master-0 kubenswrapper[26425]: I0217 15:15:43.048018 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q8jf\" (UniqueName: 
\"kubernetes.io/projected/a2d6e329-7ad8-4fc2-accc-66827f11743d-kube-api-access-8q8jf\") pod \"router-default-864ddd5f56-g8w2f\" (UID: \"a2d6e329-7ad8-4fc2-accc-66827f11743d\") " pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:15:43.051032 master-0 kubenswrapper[26425]: I0217 15:15:43.050981 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klfm5\" (UniqueName: \"kubernetes.io/projected/52b28595-f0fc-49e2-9c95-43e5f1eb003f-kube-api-access-klfm5\") pod \"migrator-5bd989df77-hrl5d\" (UID: \"52b28595-f0fc-49e2-9c95-43e5f1eb003f\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d" Feb 17 15:15:43.073403 master-0 kubenswrapper[26425]: I0217 15:15:43.073333 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkb9r\" (UniqueName: \"kubernetes.io/projected/d973c9bc-8097-489c-9b8b-70b775177c41-kube-api-access-gkb9r\") pod \"network-check-source-7d8f4c8c66-fc8n7\" (UID: \"d973c9bc-8097-489c-9b8b-70b775177c41\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7" Feb 17 15:15:43.087036 master-0 kubenswrapper[26425]: I0217 15:15:43.086979 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-562gp\" (UniqueName: \"kubernetes.io/projected/fb94b2b6-21a9-41bb-b822-9406a3ebb1e9-kube-api-access-562gp\") pod \"multus-9r5rl\" (UID: \"fb94b2b6-21a9-41bb-b822-9406a3ebb1e9\") " pod="openshift-multus/multus-9r5rl" Feb 17 15:15:43.118190 master-0 kubenswrapper[26425]: I0217 15:15:43.118130 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/553d4535-9985-47e2-83ee-8fcfb6035e7b-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-xvzq9\" (UID: \"553d4535-9985-47e2-83ee-8fcfb6035e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" Feb 17 15:15:43.133693 
master-0 kubenswrapper[26425]: I0217 15:15:43.133642 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzrph\" (UniqueName: \"kubernetes.io/projected/c97d328c-95b6-4511-aa90-531ab42b9653-kube-api-access-qzrph\") pod \"cloud-credential-operator-595c8f9ff-p8hbc\" (UID: \"c97d328c-95b6-4511-aa90-531ab42b9653\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" Feb 17 15:15:43.152998 master-0 kubenswrapper[26425]: I0217 15:15:43.152875 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr2dv\" (UniqueName: \"kubernetes.io/projected/c33efa80-fbeb-438a-86e3-d22d7c12d3e9-kube-api-access-zr2dv\") pod \"community-operators-t8vtc\" (UID: \"c33efa80-fbeb-438a-86e3-d22d7c12d3e9\") " pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:15:43.174010 master-0 kubenswrapper[26425]: I0217 15:15:43.173920 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpq86\" (UniqueName: \"kubernetes.io/projected/7c6b911d-8db2-48e8-bce9-d4bcde1f55a0-kube-api-access-cpq86\") pod \"network-node-identity-xwftw\" (UID: \"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0\") " pod="openshift-network-node-identity/network-node-identity-xwftw" Feb 17 15:15:43.192009 master-0 kubenswrapper[26425]: I0217 15:15:43.191946 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/626c4f7a-59ee-45da-9198-05dd2c42ac42-kube-api-access\") pod \"cluster-version-operator-649c4f5445-7kdb7\" (UID: \"626c4f7a-59ee-45da-9198-05dd2c42ac42\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7" Feb 17 15:15:43.211047 master-0 kubenswrapper[26425]: I0217 15:15:43.210993 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcb68\" (UniqueName: \"kubernetes.io/projected/f2546ffc-8d0a-4010-a3bd-9e69b6dbea40-kube-api-access-jcb68\") 
pod \"etcd-operator-67bf55ccdd-pjm6n\" (UID: \"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" Feb 17 15:15:43.230056 master-0 kubenswrapper[26425]: I0217 15:15:43.229993 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t2vg\" (UniqueName: \"kubernetes.io/projected/bf74b8c3-a5a6-4fb9-9d12-3a47c759f699-kube-api-access-6t2vg\") pod \"cluster-monitoring-operator-756d64c8c4-ddgs9\" (UID: \"bf74b8c3-a5a6-4fb9-9d12-3a47c759f699\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9" Feb 17 15:15:43.237248 master-0 kubenswrapper[26425]: I0217 15:15:43.237190 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:15:43.237248 master-0 kubenswrapper[26425]: I0217 15:15:43.237234 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:43.237545 master-0 kubenswrapper[26425]: I0217 15:15:43.237303 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:43.237672 
master-0 kubenswrapper[26425]: I0217 15:15:43.237618 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:15:43.237672 master-0 kubenswrapper[26425]: I0217 15:15:43.237624 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-tls\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:15:43.237867 master-0 kubenswrapper[26425]: I0217 15:15:43.237825 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:43.237867 master-0 kubenswrapper[26425]: I0217 15:15:43.237819 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:43.238209 master-0 kubenswrapper[26425]: I0217 15:15:43.237946 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-tls\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:15:43.238209 master-0 kubenswrapper[26425]: I0217 15:15:43.238090 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:43.238209 master-0 kubenswrapper[26425]: I0217 15:15:43.238127 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:43.238209 master-0 kubenswrapper[26425]: I0217 15:15:43.238171 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" Feb 17 15:15:43.238494 master-0 kubenswrapper[26425]: I0217 15:15:43.238240 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: 
\"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:43.238494 master-0 kubenswrapper[26425]: I0217 15:15:43.238336 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:43.238494 master-0 kubenswrapper[26425]: I0217 15:15:43.238407 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:43.238736 master-0 kubenswrapper[26425]: I0217 15:15:43.238523 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:43.238736 master-0 kubenswrapper[26425]: I0217 15:15:43.238545 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:43.238736 master-0 kubenswrapper[26425]: I0217 15:15:43.238681 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:43.238736 master-0 kubenswrapper[26425]: I0217 15:15:43.238725 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:43.239011 master-0 kubenswrapper[26425]: I0217 15:15:43.238773 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:43.239011 master-0 kubenswrapper[26425]: I0217 15:15:43.238807 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" Feb 17 15:15:43.239011 master-0 kubenswrapper[26425]: I0217 15:15:43.238867 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs\") pod \"metrics-server-f94977f65-sgf5z\" (UID: 
\"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:43.239011 master-0 kubenswrapper[26425]: I0217 15:15:43.238957 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" Feb 17 15:15:43.239329 master-0 kubenswrapper[26425]: I0217 15:15:43.239033 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:43.239329 master-0 kubenswrapper[26425]: I0217 15:15:43.239049 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:43.239329 master-0 kubenswrapper[26425]: I0217 15:15:43.239100 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:43.239329 master-0 kubenswrapper[26425]: I0217 15:15:43.239138 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:15:43.239329 master-0 kubenswrapper[26425]: I0217 15:15:43.239206 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:43.239329 master-0 kubenswrapper[26425]: I0217 15:15:43.239282 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:43.239329 master-0 kubenswrapper[26425]: I0217 15:15:43.239321 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14723cb7-2d96-42b7-b559-70386c4c841c-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" Feb 17 15:15:43.239329 master-0 kubenswrapper[26425]: I0217 15:15:43.239336 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/c435347a-ac01-46af-8192-9ef2d632bdfb-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:15:43.239880 master-0 kubenswrapper[26425]: I0217 15:15:43.239380 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/14723cb7-2d96-42b7-b559-70386c4c841c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" Feb 17 15:15:43.239880 master-0 kubenswrapper[26425]: I0217 15:15:43.239516 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:43.239880 master-0 kubenswrapper[26425]: I0217 15:15:43.239541 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-rzbff\" (UID: \"75486ba2-6fde-456f-8846-2af67e58d585\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" Feb 17 15:15:43.239880 master-0 kubenswrapper[26425]: I0217 15:15:43.239600 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls\") pod \"metrics-server-f94977f65-sgf5z\" (UID: 
\"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:43.239880 master-0 kubenswrapper[26425]: I0217 15:15:43.239618 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:15:43.239880 master-0 kubenswrapper[26425]: I0217 15:15:43.239736 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/14723cb7-2d96-42b7-b559-70386c4c841c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" Feb 17 15:15:43.239880 master-0 kubenswrapper[26425]: I0217 15:15:43.239849 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/cdbde712-c8dd-4011-adcb-af895abce94c-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:15:43.240348 master-0 kubenswrapper[26425]: I0217 15:15:43.239916 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-rzbff\" (UID: \"75486ba2-6fde-456f-8846-2af67e58d585\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" Feb 17 15:15:43.252234 
master-0 kubenswrapper[26425]: I0217 15:15:43.252172 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh2m4\" (UniqueName: \"kubernetes.io/projected/31e31afc-79d5-46f4-9835-0fd11da9465f-kube-api-access-jh2m4\") pod \"ovnkube-control-plane-bb7ffbb8d-rj245\" (UID: \"31e31afc-79d5-46f4-9835-0fd11da9465f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" Feb 17 15:15:43.268156 master-0 kubenswrapper[26425]: I0217 15:15:43.268057 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg8h7\" (UniqueName: \"kubernetes.io/projected/257db04b-7203-4a1d-b3d4-bd4db258a3cc-kube-api-access-jg8h7\") pod \"olm-operator-6b56bd877c-tk8xm\" (UID: \"257db04b-7203-4a1d-b3d4-bd4db258a3cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm" Feb 17 15:15:43.292311 master-0 kubenswrapper[26425]: I0217 15:15:43.292239 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bzqs\" (UniqueName: \"kubernetes.io/projected/fb153362-0abb-4aad-8975-532f6e72d032-kube-api-access-7bzqs\") pod \"multus-additional-cni-plugins-9nv95\" (UID: \"fb153362-0abb-4aad-8975-532f6e72d032\") " pod="openshift-multus/multus-additional-cni-plugins-9nv95" Feb 17 15:15:43.311290 master-0 kubenswrapper[26425]: I0217 15:15:43.311228 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbmb9\" (UniqueName: \"kubernetes.io/projected/129dba1e-73df-4ea4-96c0-3eba78d568ba-kube-api-access-rbmb9\") pod \"csi-snapshot-controller-74b6595c6d-q4766\" (UID: \"129dba1e-73df-4ea4-96c0-3eba78d568ba\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" Feb 17 15:15:43.327720 master-0 kubenswrapper[26425]: I0217 15:15:43.327665 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7brbd\" (UniqueName: 
\"kubernetes.io/projected/fce9579e-7383-421e-95dd-8f8b786817f9-kube-api-access-7brbd\") pod \"network-metrics-daemon-bnllz\" (UID: \"fce9579e-7383-421e-95dd-8f8b786817f9\") " pod="openshift-multus/network-metrics-daemon-bnllz" Feb 17 15:15:43.359796 master-0 kubenswrapper[26425]: I0217 15:15:43.359720 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgs5v\" (UniqueName: \"kubernetes.io/projected/9a905fb6-17d4-413b-9107-859c804ce906-kube-api-access-mgs5v\") pod \"ovnkube-node-vdgrn\" (UID: \"9a905fb6-17d4-413b-9107-859c804ce906\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn" Feb 17 15:15:43.371222 master-0 kubenswrapper[26425]: I0217 15:15:43.371148 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g48f\" (UniqueName: \"kubernetes.io/projected/50c51fe2-32aa-430f-8da0-7cf3b9519131-kube-api-access-8g48f\") pod \"operator-controller-controller-manager-85c9b89969-4n2ls\" (UID: \"50c51fe2-32aa-430f-8da0-7cf3b9519131\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:15:43.391532 master-0 kubenswrapper[26425]: I0217 15:15:43.391413 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrh2k\" (UniqueName: \"kubernetes.io/projected/071566ae-a9ae-4aa9-9dc3-38602363be72-kube-api-access-hrh2k\") pod \"cluster-node-tuning-operator-ff6c9b66-k8xp8\" (UID: \"071566ae-a9ae-4aa9-9dc3-38602363be72\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" Feb 17 15:15:43.424618 master-0 kubenswrapper[26425]: I0217 15:15:43.424448 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8wxf\" (UniqueName: \"kubernetes.io/projected/08e27254-e906-484a-b346-036f898be3ae-kube-api-access-d8wxf\") pod \"catalog-operator-588944557d-kjh2v\" (UID: \"08e27254-e906-484a-b346-036f898be3ae\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" Feb 17 15:15:43.440232 master-0 kubenswrapper[26425]: I0217 15:15:43.440166 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gwpz\" (UniqueName: \"kubernetes.io/projected/fc216ba1-144a-4cc8-93db-85ab558a166a-kube-api-access-7gwpz\") pod \"certified-operators-2lg56\" (UID: \"fc216ba1-144a-4cc8-93db-85ab558a166a\") " pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:15:43.451736 master-0 kubenswrapper[26425]: I0217 15:15:43.451686 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnhjw\" (UniqueName: \"kubernetes.io/projected/4b2b7830-6ee0-4d87-a57b-dc668de4b39a-kube-api-access-pnhjw\") pod \"tuned-2ffzt\" (UID: \"4b2b7830-6ee0-4d87-a57b-dc668de4b39a\") " pod="openshift-cluster-node-tuning-operator/tuned-2ffzt" Feb 17 15:15:43.471766 master-0 kubenswrapper[26425]: I0217 15:15:43.471705 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spcf4\" (UniqueName: \"kubernetes.io/projected/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-kube-api-access-spcf4\") pod \"controller-manager-b9c8fdfbc-rh9v2\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:15:43.474384 master-0 kubenswrapper[26425]: I0217 15:15:43.474352 26425 request.go:700] Waited for 3.951156933s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/dns/token Feb 17 15:15:43.491618 master-0 kubenswrapper[26425]: I0217 15:15:43.491567 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwptc\" (UniqueName: \"kubernetes.io/projected/8d317dcb-ea6a-4066-b197-5ee960dec01a-kube-api-access-nwptc\") pod \"dns-default-wxhtx\" (UID: \"8d317dcb-ea6a-4066-b197-5ee960dec01a\") " 
pod="openshift-dns/dns-default-wxhtx" Feb 17 15:15:43.522704 master-0 kubenswrapper[26425]: I0217 15:15:43.522622 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpwhf\" (UniqueName: \"kubernetes.io/projected/727f20b6-19c7-45eb-a803-6898ecaeffd0-kube-api-access-bpwhf\") pod \"network-check-target-f25s7\" (UID: \"727f20b6-19c7-45eb-a803-6898ecaeffd0\") " pod="openshift-network-diagnostics/network-check-target-f25s7" Feb 17 15:15:43.539251 master-0 kubenswrapper[26425]: I0217 15:15:43.539170 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrg27\" (UniqueName: \"kubernetes.io/projected/3db03cef-d297-4bf7-8e52-dd0b18882d07-kube-api-access-xrg27\") pod \"route-controller-manager-6978b88779-vp5tv\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") " pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" Feb 17 15:15:43.559488 master-0 kubenswrapper[26425]: I0217 15:15:43.559412 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gswxb\" (UniqueName: \"kubernetes.io/projected/b0f95c87-6a4a-44f2-b6d4-18f167ea430f-kube-api-access-gswxb\") pod \"service-ca-676cd8b9b5-bfm5s\" (UID: \"b0f95c87-6a4a-44f2-b6d4-18f167ea430f\") " pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" Feb 17 15:15:43.580229 master-0 kubenswrapper[26425]: I0217 15:15:43.580155 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bpwm\" (UniqueName: \"kubernetes.io/projected/632fa4c3-b717-432c-8c5f-8d809f69c48b-kube-api-access-8bpwm\") pod \"iptables-alerter-v2h9q\" (UID: \"632fa4c3-b717-432c-8c5f-8d809f69c48b\") " pod="openshift-network-operator/iptables-alerter-v2h9q" Feb 17 15:15:43.597664 master-0 kubenswrapper[26425]: I0217 15:15:43.597577 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czt92\" (UniqueName: 
\"kubernetes.io/projected/c6d23570-21d6-4b08-83fc-8b0827c25313-kube-api-access-czt92\") pod \"marketplace-operator-6cc5b65c6b-wqxmh\" (UID: \"c6d23570-21d6-4b08-83fc-8b0827c25313\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:15:43.620642 master-0 kubenswrapper[26425]: I0217 15:15:43.620584 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr7lv\" (UniqueName: \"kubernetes.io/projected/6b7d1adb-b23b-4702-be7d-27e818e8fd63-kube-api-access-cr7lv\") pod \"cluster-samples-operator-f8cbff74c-hr9g4\" (UID: \"6b7d1adb-b23b-4702-be7d-27e818e8fd63\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4" Feb 17 15:15:43.638423 master-0 kubenswrapper[26425]: I0217 15:15:43.638365 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b167b7b-2280-4c82-ac78-71c57aebe503-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-wcpf8\" (UID: \"2b167b7b-2280-4c82-ac78-71c57aebe503\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" Feb 17 15:15:43.662839 master-0 kubenswrapper[26425]: I0217 15:15:43.662742 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn8df\" (UniqueName: \"kubernetes.io/projected/33e819b0-5a3f-4c2d-9dc7-8b0231804cdb-kube-api-access-wn8df\") pod \"package-server-manager-5c696dbdcd-t7n5b\" (UID: \"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:15:43.680255 master-0 kubenswrapper[26425]: I0217 15:15:43.680132 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2tcz\" (UniqueName: \"kubernetes.io/projected/1d481a79-f565-4c7f-84cc-207fc3117c23-kube-api-access-d2tcz\") pod \"apiserver-6bd884947c-tdlbn\" (UID: \"1d481a79-f565-4c7f-84cc-207fc3117c23\") " 
pod="openshift-apiserver/apiserver-6bd884947c-tdlbn" Feb 17 15:15:43.699442 master-0 kubenswrapper[26425]: I0217 15:15:43.699345 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhm88\" (UniqueName: \"kubernetes.io/projected/76d3da23-3347-4a5c-b328-d92671897ecc-kube-api-access-jhm88\") pod \"machine-approver-8569dd85ff-f9g8s\" (UID: \"76d3da23-3347-4a5c-b328-d92671897ecc\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" Feb 17 15:15:43.720427 master-0 kubenswrapper[26425]: I0217 15:15:43.720322 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9wh2\" (UniqueName: \"kubernetes.io/projected/c8646e5c-c2ce-48e6-b757-58044769f479-kube-api-access-t9wh2\") pod \"cluster-autoscaler-operator-67fd9768b5-6dzpr\" (UID: \"c8646e5c-c2ce-48e6-b757-58044769f479\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" Feb 17 15:15:43.738098 master-0 kubenswrapper[26425]: I0217 15:15:43.738006 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3b6a099-f52a-428a-af09-d1842ce66891-kube-api-access\") pod \"installer-4-master-0\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 17 15:15:43.760386 master-0 kubenswrapper[26425]: I0217 15:15:43.760324 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rcj2\" (UniqueName: \"kubernetes.io/projected/9d97ff4f-48eb-4d9f-9d60-3e09f0bde040-kube-api-access-4rcj2\") pod \"kube-state-metrics-7cc9598d54-z7lzs\" (UID: \"9d97ff4f-48eb-4d9f-9d60-3e09f0bde040\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs" Feb 17 15:15:43.778492 master-0 kubenswrapper[26425]: I0217 15:15:43.778371 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnnxm\" (UniqueName: 
\"kubernetes.io/projected/8385a176-0e12-47ef-862e-8331e6734b9c-kube-api-access-lnnxm\") pod \"insights-operator-cb4f7b4cf-cmbjq\" (UID: \"8385a176-0e12-47ef-862e-8331e6734b9c\") " pod="openshift-insights/insights-operator-cb4f7b4cf-cmbjq" Feb 17 15:15:43.800441 master-0 kubenswrapper[26425]: I0217 15:15:43.800376 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t5jv\" (UniqueName: \"kubernetes.io/projected/ad81b5bd-2f97-4e7e-a12b-746998fa59f2-kube-api-access-9t5jv\") pod \"cluster-storage-operator-75b869db96-qbmw5\" (UID: \"ad81b5bd-2f97-4e7e-a12b-746998fa59f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5" Feb 17 15:15:43.818866 master-0 kubenswrapper[26425]: I0217 15:15:43.818777 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf69t\" (UniqueName: \"kubernetes.io/projected/655e4000-0ad4-4349-8c31-e0c952e4be30-kube-api-access-qf69t\") pod \"machine-api-operator-bd7dd5c46-g6fgz\" (UID: \"655e4000-0ad4-4349-8c31-e0c952e4be30\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" Feb 17 15:15:43.828277 master-0 kubenswrapper[26425]: I0217 15:15:43.828184 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8ckv\" (UniqueName: \"kubernetes.io/projected/6d56f334-6c7b-4c92-9665-56300d44f9a3-kube-api-access-k8ckv\") pod \"ingress-canary-6bhf8\" (UID: \"6d56f334-6c7b-4c92-9665-56300d44f9a3\") " pod="openshift-ingress-canary/ingress-canary-6bhf8" Feb 17 15:15:43.858921 master-0 kubenswrapper[26425]: I0217 15:15:43.858826 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpsd7\" (UniqueName: \"kubernetes.io/projected/da06cfcb-7c78-4022-96b1-d858853f5adc-kube-api-access-xpsd7\") pod \"machine-config-operator-84976bb859-kmc95\" (UID: \"da06cfcb-7c78-4022-96b1-d858853f5adc\") " 
pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95" Feb 17 15:15:43.880089 master-0 kubenswrapper[26425]: I0217 15:15:43.879998 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pn82\" (UniqueName: \"kubernetes.io/projected/ba1306f7-029b-4d43-ba3c-5738da9148d6-kube-api-access-7pn82\") pod \"machine-config-controller-686c884b4d-5q97f\" (UID: \"ba1306f7-029b-4d43-ba3c-5738da9148d6\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" Feb 17 15:15:43.899718 master-0 kubenswrapper[26425]: I0217 15:15:43.899621 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj92w\" (UniqueName: \"kubernetes.io/projected/8379aee6-f810-4e5f-b209-8f6cb5f87df0-kube-api-access-sj92w\") pod \"telemeter-client-7fbdcd9689-spqtt\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:15:43.908654 master-0 kubenswrapper[26425]: I0217 15:15:43.908600 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk6jm\" (UniqueName: \"kubernetes.io/projected/9768ef3d-4f12-4303-98cb-56f8ebe05039-kube-api-access-tk6jm\") pod \"machine-config-server-l576h\" (UID: \"9768ef3d-4f12-4303-98cb-56f8ebe05039\") " pod="openshift-machine-config-operator/machine-config-server-l576h" Feb 17 15:15:43.941349 master-0 kubenswrapper[26425]: I0217 15:15:43.941137 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f54vt\" (UniqueName: \"kubernetes.io/projected/7c393109-8c98-4a73-be1a-608038e5d094-kube-api-access-f54vt\") pod \"metrics-server-f94977f65-sgf5z\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:15:43.957736 master-0 kubenswrapper[26425]: I0217 15:15:43.957644 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7lw7x\" (UniqueName: \"kubernetes.io/projected/14723cb7-2d96-42b7-b559-70386c4c841c-kube-api-access-7lw7x\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c\" (UID: \"14723cb7-2d96-42b7-b559-70386c4c841c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" Feb 17 15:15:43.980560 master-0 kubenswrapper[26425]: I0217 15:15:43.980450 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5w6f\" (UniqueName: \"kubernetes.io/projected/c435347a-ac01-46af-8192-9ef2d632bdfb-kube-api-access-j5w6f\") pod \"node-exporter-rttp2\" (UID: \"c435347a-ac01-46af-8192-9ef2d632bdfb\") " pod="openshift-monitoring/node-exporter-rttp2" Feb 17 15:15:43.986256 master-0 kubenswrapper[26425]: I0217 15:15:43.986189 26425 scope.go:117] "RemoveContainer" containerID="d42cd385a169cd36ec041c3a6e5a8a617ea41d6c13c8210a911ad86286cc0ade" Feb 17 15:15:43.998184 master-0 kubenswrapper[26425]: I0217 15:15:43.998126 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cx29\" (UniqueName: \"kubernetes.io/projected/784b804f-6bcf-4cbd-a19e-9b1fa244354e-kube-api-access-8cx29\") pod \"prometheus-operator-7485d645b8-nzz2j\" (UID: \"784b804f-6bcf-4cbd-a19e-9b1fa244354e\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-nzz2j" Feb 17 15:15:44.018167 master-0 kubenswrapper[26425]: I0217 15:15:44.018086 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjb95\" (UniqueName: \"kubernetes.io/projected/75486ba2-6fde-456f-8846-2af67e58d585-kube-api-access-wjb95\") pod \"multus-admission-controller-6d678b8d67-rzbff\" (UID: \"75486ba2-6fde-456f-8846-2af67e58d585\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" Feb 17 15:15:44.040012 master-0 kubenswrapper[26425]: I0217 15:15:44.039896 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l2d4n\" (UniqueName: \"kubernetes.io/projected/b58e9d93-7683-440d-a603-9543e5455490-kube-api-access-l2d4n\") pod \"packageserver-67d4dbd88b-szr25\" (UID: \"b58e9d93-7683-440d-a603-9543e5455490\") " pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25" Feb 17 15:15:44.066351 master-0 kubenswrapper[26425]: I0217 15:15:44.066276 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq2mb\" (UniqueName: \"kubernetes.io/projected/2102e834-2b36-49de-a99e-c2dbe64d722f-kube-api-access-hq2mb\") pod \"machine-config-daemon-r6sfp\" (UID: \"2102e834-2b36-49de-a99e-c2dbe64d722f\") " pod="openshift-machine-config-operator/machine-config-daemon-r6sfp" Feb 17 15:15:44.078791 master-0 kubenswrapper[26425]: I0217 15:15:44.078684 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzrmf\" (UniqueName: \"kubernetes.io/projected/7307f70e-ee5b-4f81-8155-718a02c9efe7-kube-api-access-dzrmf\") pod \"cluster-baremetal-operator-7bc947fc7d-8qkdw\" (UID: \"7307f70e-ee5b-4f81-8155-718a02c9efe7\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" Feb 17 15:15:44.097793 master-0 kubenswrapper[26425]: I0217 15:15:44.097722 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70e43034-56d0-4fb2-8886-deb00b625686-kube-api-access\") pod \"installer-2-master-0\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") " pod="openshift-etcd/installer-2-master-0" Feb 17 15:15:44.113214 master-0 kubenswrapper[26425]: I0217 15:15:44.113135 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fj8w\" (UniqueName: \"kubernetes.io/projected/cdbde712-c8dd-4011-adcb-af895abce94c-kube-api-access-9fj8w\") pod \"openshift-state-metrics-546cc7d765-b4xl8\" (UID: \"cdbde712-c8dd-4011-adcb-af895abce94c\") " 
pod="openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8" Feb 17 15:15:44.145879 master-0 kubenswrapper[26425]: E0217 15:15:44.145819 26425 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:15:44.165003 master-0 kubenswrapper[26425]: E0217 15:15:44.164925 26425 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:44.197309 master-0 kubenswrapper[26425]: E0217 15:15:44.197205 26425 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 17 15:15:44.204961 master-0 kubenswrapper[26425]: E0217 15:15:44.204913 26425 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:44.226238 master-0 kubenswrapper[26425]: E0217 15:15:44.226174 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:15:44.226238 master-0 kubenswrapper[26425]: E0217 15:15:44.226219 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:15:44.227013 master-0 kubenswrapper[26425]: E0217 15:15:44.226408 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:15:44.726377397 +0000 UTC m=+6.618101255 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:15:44.248807 master-0 kubenswrapper[26425]: E0217 15:15:44.248756 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.846s" Feb 17 15:15:44.248909 master-0 kubenswrapper[26425]: I0217 15:15:44.248810 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg" event={"ID":"22a30079-d7fc-49cf-882e-1c5022cb5bf6","Type":"ContainerStarted","Data":"f9de1615e5ecfc58dcdfa3129c6217b6efa7268a95d664228821e520bb4b22d1"} Feb 17 15:15:44.262125 master-0 kubenswrapper[26425]: I0217 15:15:44.262057 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 17 15:15:44.278829 master-0 kubenswrapper[26425]: I0217 15:15:44.278776 26425 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 17 15:15:44.278957 master-0 kubenswrapper[26425]: I0217 15:15:44.278904 26425 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 17 15:15:44.298746 master-0 kubenswrapper[26425]: I0217 15:15:44.298179 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wxhtx" Feb 17 15:15:44.298746 master-0 kubenswrapper[26425]: I0217 15:15:44.298231 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:44.298746 master-0 kubenswrapper[26425]: I0217 15:15:44.298306 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:15:44.298746 master-0 
kubenswrapper[26425]: I0217 15:15:44.298323 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 17 15:15:44.298746 master-0 kubenswrapper[26425]: I0217 15:15:44.298335 26425 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="3716863b-22a6-4f57-9c98-e5f2c96e601c"
Feb 17 15:15:44.298746 master-0 kubenswrapper[26425]: I0217 15:15:44.298387 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-g8w2f"
Feb 17 15:15:44.298746 master-0 kubenswrapper[26425]: I0217 15:15:44.298412 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 17 15:15:44.298746 master-0 kubenswrapper[26425]: I0217 15:15:44.298423 26425 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="3716863b-22a6-4f57-9c98-e5f2c96e601c"
Feb 17 15:15:44.298746 master-0 kubenswrapper[26425]: I0217 15:15:44.298489 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wxhtx"
Feb 17 15:15:44.298746 master-0 kubenswrapper[26425]: I0217 15:15:44.298522 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn"
Feb 17 15:15:44.298746 master-0 kubenswrapper[26425]: I0217 15:15:44.298634 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn"
Feb 17 15:15:44.298746 master-0 kubenswrapper[26425]: I0217 15:15:44.298653 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:15:44.298746 master-0 kubenswrapper[26425]: I0217 15:15:44.298699 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:15:44.312480 master-0 kubenswrapper[26425]: I0217 15:15:44.299533 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:15:44.312480 master-0 kubenswrapper[26425]: I0217 15:15:44.299766 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn"
Feb 17 15:15:44.312480 master-0 kubenswrapper[26425]: I0217 15:15:44.299807 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Feb 17 15:15:44.312480 master-0 kubenswrapper[26425]: I0217 15:15:44.300034 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Feb 17 15:15:44.333229 master-0 kubenswrapper[26425]: I0217 15:15:44.333154 26425 patch_prober.go:28] interesting pod/router-default-864ddd5f56-g8w2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:15:44.333229 master-0 kubenswrapper[26425]: [-]has-synced failed: reason withheld
Feb 17 15:15:44.333229 master-0 kubenswrapper[26425]: [+]process-running ok
Feb 17 15:15:44.333229 master-0 kubenswrapper[26425]: healthz check failed
Feb 17 15:15:44.333578 master-0 kubenswrapper[26425]: I0217 15:15:44.333241 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" podUID="a2d6e329-7ad8-4fc2-accc-66827f11743d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:15:44.381135 master-0 kubenswrapper[26425]: I0217 15:15:44.380414 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wzsv7"
Feb 17 15:15:44.488762 master-0 kubenswrapper[26425]: I0217 15:15:44.488601 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:15:44.713809 master-0 kubenswrapper[26425]: I0217 15:15:44.713725 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/5.log"
Feb 17 15:15:44.714781 master-0 kubenswrapper[26425]: I0217 15:15:44.714636 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 15:15:44.714781 master-0 kubenswrapper[26425]: I0217 15:15:44.714632 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerStarted","Data":"a1e93878f21dc286acdc9c2a71dd4681ce4b10895942634bda73aa32b74dbbba"}
Feb 17 15:15:44.714781 master-0 kubenswrapper[26425]: I0217 15:15:44.714702 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 15:15:44.715022 master-0 kubenswrapper[26425]: I0217 15:15:44.714975 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 15:15:44.725037 master-0 kubenswrapper[26425]: I0217 15:15:44.724984 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:15:44.737438 master-0 kubenswrapper[26425]: I0217 15:15:44.737402 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 17 15:15:44.782414 master-0 kubenswrapper[26425]: I0217 15:15:44.781489 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:15:44.784072 master-0 kubenswrapper[26425]: E0217 15:15:44.783938 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:15:44.784072 master-0 kubenswrapper[26425]: E0217 15:15:44.783989 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:15:44.784072 master-0 kubenswrapper[26425]: E0217 15:15:44.784064 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:15:45.784035339 +0000 UTC m=+7.675759187 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:15:45.051575 master-0 kubenswrapper[26425]: I0217 15:15:45.051283 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=6.051254642 podStartE2EDuration="6.051254642s" podCreationTimestamp="2026-02-17 15:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:15:45.049805735 +0000 UTC m=+6.941529623" watchObservedRunningTime="2026-02-17 15:15:45.051254642 +0000 UTC m=+6.942978500"
Feb 17 15:15:45.279026 master-0 kubenswrapper[26425]: I0217 15:15:45.278885 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=6.278854395 podStartE2EDuration="6.278854395s" podCreationTimestamp="2026-02-17 15:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:15:45.240886508 +0000 UTC m=+7.132610366" watchObservedRunningTime="2026-02-17 15:15:45.278854395 +0000 UTC m=+7.170578243"
Feb 17 15:15:45.328246 master-0 kubenswrapper[26425]: I0217 15:15:45.328120 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-864ddd5f56-g8w2f"
Feb 17 15:15:45.402760 master-0 kubenswrapper[26425]: I0217 15:15:45.402708 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Feb 17 15:15:45.405772 master-0 kubenswrapper[26425]: I0217 15:15:45.405751 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Feb 17 15:15:45.431175 master-0 kubenswrapper[26425]: I0217 15:15:45.431115 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2lg56"
Feb 17 15:15:45.439123 master-0 kubenswrapper[26425]: I0217 15:15:45.439090 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:15:45.446076 master-0 kubenswrapper[26425]: I0217 15:15:45.446009 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v"
Feb 17 15:15:45.722388 master-0 kubenswrapper[26425]: I0217 15:15:45.722331 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 15:15:45.787942 master-0 kubenswrapper[26425]: I0217 15:15:45.787810 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:15:45.788177 master-0 kubenswrapper[26425]: I0217 15:15:45.788049 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 15:15:45.797068 master-0 kubenswrapper[26425]: I0217 15:15:45.797009 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:15:45.815763 master-0 kubenswrapper[26425]: I0217 15:15:45.815385 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:15:45.816421 master-0 kubenswrapper[26425]: E0217 15:15:45.816387 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:15:45.816421 master-0 kubenswrapper[26425]: E0217 15:15:45.816411 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:15:45.816546 master-0 kubenswrapper[26425]: E0217 15:15:45.816474 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:15:47.816439437 +0000 UTC m=+9.708163255 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:15:46.201749 master-0 kubenswrapper[26425]: I0217 15:15:46.201648 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25"
Feb 17 15:15:46.205147 master-0 kubenswrapper[26425]: I0217 15:15:46.205051 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 17 15:15:46.205530 master-0 kubenswrapper[26425]: I0217 15:15:46.205496 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 15:15:46.206747 master-0 kubenswrapper[26425]: I0217 15:15:46.206689 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25"
Feb 17 15:15:46.211623 master-0 kubenswrapper[26425]: I0217 15:15:46.211559 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 17 15:15:46.410097 master-0 kubenswrapper[26425]: I0217 15:15:46.410024 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f668c36-2d45-4b5d-89df-b8ed9bf97640" path="/var/lib/kubelet/pods/0f668c36-2d45-4b5d-89df-b8ed9bf97640/volumes"
Feb 17 15:15:46.685596 master-0 kubenswrapper[26425]: I0217 15:15:46.685511 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:15:46.695716 master-0 kubenswrapper[26425]: I0217 15:15:46.695664 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:15:46.708806 master-0 kubenswrapper[26425]: I0217 15:15:46.708718 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:15:46.732663 master-0 kubenswrapper[26425]: I0217 15:15:46.731146 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 15:15:46.739848 master-0 kubenswrapper[26425]: I0217 15:15:46.739774 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:15:47.360059 master-0 kubenswrapper[26425]: I0217 15:15:47.359976 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:15:47.453605 master-0 kubenswrapper[26425]: I0217 15:15:47.453514 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"
Feb 17 15:15:47.461671 master-0 kubenswrapper[26425]: I0217 15:15:47.461601 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"
Feb 17 15:15:47.767122 master-0 kubenswrapper[26425]: I0217 15:15:47.767022 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs"
Feb 17 15:15:47.773192 master-0 kubenswrapper[26425]: I0217 15:15:47.773138 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs"
Feb 17 15:15:47.851360 master-0 kubenswrapper[26425]: I0217 15:15:47.851297 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:15:47.852779 master-0 kubenswrapper[26425]: E0217 15:15:47.852753 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:15:47.852882 master-0 kubenswrapper[26425]: E0217 15:15:47.852868 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:15:47.852993 master-0 kubenswrapper[26425]: E0217 15:15:47.852979 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:15:51.852960598 +0000 UTC m=+13.744684426 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:15:47.909757 master-0 kubenswrapper[26425]: I0217 15:15:47.909683 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:15:47.909953 master-0 kubenswrapper[26425]: I0217 15:15:47.909865 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 15:15:47.914795 master-0 kubenswrapper[26425]: I0217 15:15:47.914763 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:15:47.948996 master-0 kubenswrapper[26425]: I0217 15:15:47.948929 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:47.994832 master-0 kubenswrapper[26425]: I0217 15:15:47.994798 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:48.071549 master-0 kubenswrapper[26425]: I0217 15:15:48.071415 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:15:48.073659 master-0 kubenswrapper[26425]: I0217 15:15:48.073617 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:15:48.390996 master-0 kubenswrapper[26425]: I0217 15:15:48.390949 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:15:48.451408 master-0 kubenswrapper[26425]: I0217 15:15:48.450755 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7dzgz"
Feb 17 15:15:48.682569 master-0 kubenswrapper[26425]: I0217 15:15:48.682370 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:15:48.684311 master-0 kubenswrapper[26425]: I0217 15:15:48.684277 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:15:48.750314 master-0 kubenswrapper[26425]: I0217 15:15:48.750244 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 15:15:48.750314 master-0 kubenswrapper[26425]: I0217 15:15:48.750283 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 15:15:48.883237 master-0 kubenswrapper[26425]: I0217 15:15:48.882625 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:15:48.885861 master-0 kubenswrapper[26425]: I0217 15:15:48.885825 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-f25s7"
Feb 17 15:15:48.928991 master-0 kubenswrapper[26425]: I0217 15:15:48.928925 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6bd884947c-tdlbn"
Feb 17 15:15:48.933293 master-0 kubenswrapper[26425]: I0217 15:15:48.933169 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:15:48.938705 master-0 kubenswrapper[26425]: I0217 15:15:48.938538 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:15:49.334308 master-0 kubenswrapper[26425]: I0217 15:15:49.334137 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:49.362089 master-0 kubenswrapper[26425]: I0217 15:15:49.362054 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:15:49.680749 master-0 kubenswrapper[26425]: I0217 15:15:49.680685 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:15:49.685658 master-0 kubenswrapper[26425]: I0217 15:15:49.685614 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm"
Feb 17 15:15:49.720933 master-0 kubenswrapper[26425]: I0217 15:15:49.720882 26425 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 17 15:15:49.721180 master-0 kubenswrapper[26425]: I0217 15:15:49.721144 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="ebf941eaba3a97825b1c8002f4b27a20" containerName="startup-monitor" containerID="cri-o://4b556a21109d55e0fc1179b5cad47796ec1a964c7618f1e0977b12773c406661" gracePeriod=5
Feb 17 15:15:49.755072 master-0 kubenswrapper[26425]: I0217 15:15:49.755023 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 15:15:50.024724 master-0 kubenswrapper[26425]: I0217 15:15:50.024614 26425 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"]
Feb 17 15:15:50.024979 master-0 kubenswrapper[26425]: I0217 15:15:50.024945 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcdctl" containerID="cri-o://cb3dbeb96630f3d5109d6c4e5a32fbf46326a5066238f4c05eb31fd67e0570ad" gracePeriod=30
Feb 17 15:15:50.025106 master-0 kubenswrapper[26425]: I0217 15:15:50.025080 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-rev" containerID="cri-o://24bcd9a1fa449d31774c0b2f9747f9f7a7d21ce729de71f7dbfd671b89feec54" gracePeriod=30
Feb 17 15:15:50.025148 master-0 kubenswrapper[26425]: I0217 15:15:50.025123 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-readyz" containerID="cri-o://a52477200afc38c91a493a196c8111943fbf6121e870a10ff7e849d590f6609a" gracePeriod=30
Feb 17 15:15:50.025187 master-0 kubenswrapper[26425]: I0217 15:15:50.025154 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-metrics" containerID="cri-o://7dd053c55331a8a0d792d5a78e488f015a947989e3e1383dcd1a64fa486a01e5" gracePeriod=30
Feb 17 15:15:50.025187 master-0 kubenswrapper[26425]: I0217 15:15:50.025181 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd" containerID="cri-o://9c473e6b1c42e4e97ed6d31b0e52ea86736af7b5464544e2ffea713e961e55df" gracePeriod=30
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.039770 26425 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040058 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcdctl"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040076 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcdctl"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040098 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040109 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040118 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a162205-f111-49b4-9f46-0b40b6184336" containerName="collect-profiles"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040127 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a162205-f111-49b4-9f46-0b40b6184336" containerName="collect-profiles"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040145 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3daf534-9a77-49c6-964f-d402c5d5a2ac" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040153 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3daf534-9a77-49c6-964f-d402c5d5a2ac" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040163 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de71cc1-08c3-4295-ac86-745c9d4fbb46" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040171 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de71cc1-08c3-4295-ac86-745c9d4fbb46" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040181 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-metrics"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040191 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-metrics"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040203 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-ensure-env-vars"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040210 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-ensure-env-vars"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040225 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf941eaba3a97825b1c8002f4b27a20" containerName="startup-monitor"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040233 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf941eaba3a97825b1c8002f4b27a20" containerName="startup-monitor"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040244 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-rev"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040251 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-rev"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040261 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-readyz"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040269 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-readyz"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040278 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" containerName="assisted-installer-controller"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040286 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" containerName="assisted-installer-controller"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040298 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5899ff-327d-4944-b3ae-84d82973d0a5" containerName="collect-profiles"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040306 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5899ff-327d-4944-b3ae-84d82973d0a5" containerName="collect-profiles"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040316 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b452fc-5e99-4947-a722-e47a602ac144" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040323 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b452fc-5e99-4947-a722-e47a602ac144" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040331 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03da22e3-956d-4c8a-bfd6-c1778e5d627c" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040339 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="03da22e3-956d-4c8a-bfd6-c1778e5d627c" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040351 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5655115-c223-42ed-a93d-9d609e55c901" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040359 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5655115-c223-42ed-a93d-9d609e55c901" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040369 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040376 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040388 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="setup"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040395 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="setup"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040409 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-resources-copy"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040417 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-resources-copy"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: E0217 15:15:50.040433 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580b240a-a806-454d-ab19-8f193a8d9ca2" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040442 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="580b240a-a806-454d-ab19-8f193a8d9ca2" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040610 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-rev"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040629 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040639 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3daf534-9a77-49c6-964f-d402c5d5a2ac" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040649 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0dcd0f-f7e6-4d6d-bd6a-aff7ff1f8f4a" containerName="assisted-installer-controller"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040665 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5655115-c223-42ed-a93d-9d609e55c901" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040677 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcdctl"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040687 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a162205-f111-49b4-9f46-0b40b6184336" containerName="collect-profiles"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040699 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="setup"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040745 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf941eaba3a97825b1c8002f4b27a20" containerName="startup-monitor"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040759 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040770 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-resources-copy"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040784 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b452fc-5e99-4947-a722-e47a602ac144" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040793 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="03da22e3-956d-4c8a-bfd6-c1778e5d627c" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040805 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-ensure-env-vars"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040817 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-metrics"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040828 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee5899ff-327d-4944-b3ae-84d82973d0a5" containerName="collect-profiles"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040841 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de71cc1-08c3-4295-ac86-745c9d4fbb46" containerName="installer"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040852 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-readyz"
Feb 17 15:15:50.042145 master-0 kubenswrapper[26425]: I0217 15:15:50.040862 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="580b240a-a806-454d-ab19-8f193a8d9ca2" containerName="installer"
Feb 17 15:15:50.072815 master-0 kubenswrapper[26425]: I0217 15:15:50.072776 26425 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" start-of-body=
Feb 17 15:15:50.072928 master-0 kubenswrapper[26425]: I0217 15:15:50.072822 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused"
Feb 17 15:15:50.187788 master-0 kubenswrapper[26425]: I0217 15:15:50.187730 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:15:50.189414 master-0 kubenswrapper[26425]: I0217 15:15:50.189387 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:15:50.209851 master-0 kubenswrapper[26425]: I0217 15:15:50.209754 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:15:50.209851 master-0 kubenswrapper[26425]: I0217 15:15:50.209789 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:15:50.209851 master-0 kubenswrapper[26425]: I0217 15:15:50.209834 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:15:50.209990 master-0 kubenswrapper[26425]: I0217 15:15:50.209881 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:15:50.209990 master-0 kubenswrapper[26425]: I0217 15:15:50.209915 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:15:50.209990 master-0 kubenswrapper[26425]: I0217 15:15:50.209930 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:15:50.310867 master-0 kubenswrapper[26425]: I0217 15:15:50.310705 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:15:50.310867 master-0 kubenswrapper[26425]: I0217 15:15:50.310757 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 17 15:15:50.310867 master-0 kubenswrapper[26425]: I0217 15:15:50.310838 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:50.311300 master-0 kubenswrapper[26425]: I0217 15:15:50.310913 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:50.311300 master-0 kubenswrapper[26425]: I0217 15:15:50.310990 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:50.311300 master-0 kubenswrapper[26425]: I0217 15:15:50.311052 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:50.311300 master-0 kubenswrapper[26425]: I0217 15:15:50.311110 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:50.311300 master-0 kubenswrapper[26425]: I0217 15:15:50.311283 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " 
pod="openshift-etcd/etcd-master-0" Feb 17 15:15:50.311555 master-0 kubenswrapper[26425]: I0217 15:15:50.311347 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:50.311683 master-0 kubenswrapper[26425]: I0217 15:15:50.311660 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:50.311848 master-0 kubenswrapper[26425]: I0217 15:15:50.311809 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:50.311917 master-0 kubenswrapper[26425]: I0217 15:15:50.311874 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 17 15:15:50.688756 master-0 kubenswrapper[26425]: I0217 15:15:50.688714 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:15:50.760813 master-0 kubenswrapper[26425]: I0217 15:15:50.760775 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-rev/0.log" Feb 17 15:15:50.761653 master-0 kubenswrapper[26425]: I0217 15:15:50.761628 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-metrics/0.log" Feb 17 15:15:50.763759 master-0 kubenswrapper[26425]: I0217 15:15:50.763706 26425 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="24bcd9a1fa449d31774c0b2f9747f9f7a7d21ce729de71f7dbfd671b89feec54" exitCode=2 Feb 17 15:15:50.763759 master-0 kubenswrapper[26425]: I0217 15:15:50.763751 26425 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="a52477200afc38c91a493a196c8111943fbf6121e870a10ff7e849d590f6609a" exitCode=0 Feb 17 15:15:50.763843 master-0 kubenswrapper[26425]: I0217 15:15:50.763762 26425 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="7dd053c55331a8a0d792d5a78e488f015a947989e3e1383dcd1a64fa486a01e5" exitCode=2 Feb 17 15:15:50.763923 master-0 kubenswrapper[26425]: I0217 15:15:50.763893 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:15:51.720500 master-0 kubenswrapper[26425]: I0217 15:15:51.720421 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wzsv7" Feb 17 15:15:51.881424 master-0 kubenswrapper[26425]: I0217 15:15:51.879414 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7dzgz" Feb 17 15:15:51.928118 master-0 kubenswrapper[26425]: I0217 15:15:51.928038 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7dzgz" Feb 17 15:15:51.933169 master-0 kubenswrapper[26425]: I0217 15:15:51.933089 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: 
\"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:15:51.933390 master-0 kubenswrapper[26425]: E0217 15:15:51.933349 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:15:51.933390 master-0 kubenswrapper[26425]: E0217 15:15:51.933382 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:15:51.933548 master-0 kubenswrapper[26425]: E0217 15:15:51.933480 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:15:59.933442673 +0000 UTC m=+21.825166491 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:15:52.060873 master-0 kubenswrapper[26425]: I0217 15:15:52.060760 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:15:52.096499 master-0 kubenswrapper[26425]: I0217 15:15:52.096438 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:15:52.239809 master-0 kubenswrapper[26425]: I0217 15:15:52.239754 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:15:52.239981 master-0 kubenswrapper[26425]: I0217 15:15:52.239924 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:15:52.242349 master-0 kubenswrapper[26425]: I0217 15:15:52.242326 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-864ddd5f56-g8w2f" Feb 17 15:15:52.244137 master-0 kubenswrapper[26425]: I0217 15:15:52.244091 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:15:52.247604 master-0 kubenswrapper[26425]: I0217 15:15:52.247587 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" Feb 17 15:15:52.385767 master-0 kubenswrapper[26425]: I0217 15:15:52.385655 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:15:52.437519 master-0 kubenswrapper[26425]: I0217 
15:15:52.437466 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t8vtc" Feb 17 15:15:54.440363 master-0 kubenswrapper[26425]: I0217 15:15:54.440289 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wzsv7" Feb 17 15:15:54.484764 master-0 kubenswrapper[26425]: I0217 15:15:54.484669 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wzsv7" Feb 17 15:15:54.794075 master-0 kubenswrapper[26425]: I0217 15:15:54.794026 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebf941eaba3a97825b1c8002f4b27a20/startup-monitor/0.log" Feb 17 15:15:54.794604 master-0 kubenswrapper[26425]: I0217 15:15:54.794564 26425 generic.go:334] "Generic (PLEG): container finished" podID="ebf941eaba3a97825b1c8002f4b27a20" containerID="4b556a21109d55e0fc1179b5cad47796ec1a964c7618f1e0977b12773c406661" exitCode=137 Feb 17 15:15:55.328428 master-0 kubenswrapper[26425]: I0217 15:15:55.328361 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebf941eaba3a97825b1c8002f4b27a20/startup-monitor/0.log" Feb 17 15:15:55.328697 master-0 kubenswrapper[26425]: I0217 15:15:55.328515 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:55.483440 master-0 kubenswrapper[26425]: I0217 15:15:55.483402 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:15:55.484934 master-0 kubenswrapper[26425]: I0217 15:15:55.484877 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") pod \"ebf941eaba3a97825b1c8002f4b27a20\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " Feb 17 15:15:55.485013 master-0 kubenswrapper[26425]: I0217 15:15:55.484948 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") pod \"ebf941eaba3a97825b1c8002f4b27a20\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " Feb 17 15:15:55.485070 master-0 kubenswrapper[26425]: I0217 15:15:55.485010 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") pod \"ebf941eaba3a97825b1c8002f4b27a20\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " Feb 17 15:15:55.485119 master-0 kubenswrapper[26425]: I0217 15:15:55.485068 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") pod \"ebf941eaba3a97825b1c8002f4b27a20\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " Feb 17 15:15:55.485119 master-0 kubenswrapper[26425]: I0217 15:15:55.485057 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests" (OuterVolumeSpecName: "manifests") pod "ebf941eaba3a97825b1c8002f4b27a20" (UID: 
"ebf941eaba3a97825b1c8002f4b27a20"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:15:55.485222 master-0 kubenswrapper[26425]: I0217 15:15:55.485164 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock" (OuterVolumeSpecName: "var-lock") pod "ebf941eaba3a97825b1c8002f4b27a20" (UID: "ebf941eaba3a97825b1c8002f4b27a20"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:15:55.485222 master-0 kubenswrapper[26425]: I0217 15:15:55.485169 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") pod \"ebf941eaba3a97825b1c8002f4b27a20\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " Feb 17 15:15:55.485404 master-0 kubenswrapper[26425]: I0217 15:15:55.485377 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log" (OuterVolumeSpecName: "var-log") pod "ebf941eaba3a97825b1c8002f4b27a20" (UID: "ebf941eaba3a97825b1c8002f4b27a20"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:15:55.485590 master-0 kubenswrapper[26425]: I0217 15:15:55.485426 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ebf941eaba3a97825b1c8002f4b27a20" (UID: "ebf941eaba3a97825b1c8002f4b27a20"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:15:55.486000 master-0 kubenswrapper[26425]: I0217 15:15:55.485951 26425 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:55.486000 master-0 kubenswrapper[26425]: I0217 15:15:55.485991 26425 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:55.486116 master-0 kubenswrapper[26425]: I0217 15:15:55.486010 26425 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:55.486116 master-0 kubenswrapper[26425]: I0217 15:15:55.486027 26425 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:55.494318 master-0 kubenswrapper[26425]: I0217 15:15:55.494252 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "ebf941eaba3a97825b1c8002f4b27a20" (UID: "ebf941eaba3a97825b1c8002f4b27a20"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:15:55.541566 master-0 kubenswrapper[26425]: I0217 15:15:55.541446 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2lg56" Feb 17 15:15:55.587601 master-0 kubenswrapper[26425]: I0217 15:15:55.587517 26425 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:15:55.807579 master-0 kubenswrapper[26425]: I0217 15:15:55.807409 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebf941eaba3a97825b1c8002f4b27a20/startup-monitor/0.log" Feb 17 15:15:55.808284 master-0 kubenswrapper[26425]: I0217 15:15:55.808090 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:15:55.808284 master-0 kubenswrapper[26425]: I0217 15:15:55.808106 26425 scope.go:117] "RemoveContainer" containerID="4b556a21109d55e0fc1179b5cad47796ec1a964c7618f1e0977b12773c406661" Feb 17 15:15:56.409663 master-0 kubenswrapper[26425]: I0217 15:15:56.409538 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebf941eaba3a97825b1c8002f4b27a20" path="/var/lib/kubelet/pods/ebf941eaba3a97825b1c8002f4b27a20/volumes" Feb 17 15:15:56.410181 master-0 kubenswrapper[26425]: I0217 15:15:56.410118 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Feb 17 15:15:59.956341 master-0 kubenswrapper[26425]: I0217 15:15:59.956262 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" 
(UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:15:59.957349 master-0 kubenswrapper[26425]: E0217 15:15:59.956660 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:15:59.957349 master-0 kubenswrapper[26425]: E0217 15:15:59.956718 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:15:59.957349 master-0 kubenswrapper[26425]: E0217 15:15:59.956797 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:16:15.956768223 +0000 UTC m=+37.848492071 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:16:00.693591 master-0 kubenswrapper[26425]: I0217 15:16:00.693398 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:16:00.693591 master-0 kubenswrapper[26425]: [+]log ok Feb 17 15:16:00.693591 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld Feb 17 15:16:00.693591 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:16:00.693591 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:16:00.693591 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:16:00.693591 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:16:00.693591 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 17 15:16:00.693591 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 17 15:16:00.693591 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 17 15:16:00.693591 master-0 kubenswrapper[26425]: livez check failed Feb 17 15:16:00.694623 master-0 kubenswrapper[26425]: I0217 15:16:00.693613 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:16:04.893677 master-0 kubenswrapper[26425]: I0217 15:16:04.893611 26425 
generic.go:334] "Generic (PLEG): container finished" podID="70e43034-56d0-4fb2-8886-deb00b625686" containerID="762936faf720fbf8fc66c224dfa462878affad1249ed16705950254bc5043c3c" exitCode=0 Feb 17 15:16:04.898840 master-0 kubenswrapper[26425]: I0217 15:16:04.898775 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log" Feb 17 15:16:04.898936 master-0 kubenswrapper[26425]: I0217 15:16:04.898867 26425 generic.go:334] "Generic (PLEG): container finished" podID="27fd92ef556705625a2e4f1011322252" containerID="a93de2c6661a7a022268979fd5a510b5d956da3fa477eae77c55cc327249aabd" exitCode=1 Feb 17 15:16:06.695918 master-0 kubenswrapper[26425]: I0217 15:16:06.695869 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 17 15:16:06.696675 master-0 kubenswrapper[26425]: I0217 15:16:06.696636 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 17 15:16:09.702297 master-0 kubenswrapper[26425]: I0217 15:16:09.701527 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:16:09.702297 master-0 kubenswrapper[26425]: [+]log ok Feb 17 15:16:09.702297 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld Feb 
17 15:16:09.702297 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:16:09.702297 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:16:09.702297 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:16:09.702297 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:16:09.702297 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 17 15:16:09.702297 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 17 15:16:09.702297 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 17 15:16:09.702297 master-0 kubenswrapper[26425]: livez check failed Feb 17 15:16:09.702297 master-0 kubenswrapper[26425]: I0217 15:16:09.701650 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:16:10.361041 master-0 kubenswrapper[26425]: E0217 15:16:10.360907 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:16:12.287789 master-0 kubenswrapper[26425]: I0217 15:16:12.287676 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 17 15:16:12.288683 master-0 kubenswrapper[26425]: I0217 15:16:12.287807 26425 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 17 15:16:16.018519 master-0 kubenswrapper[26425]: I0217 15:16:16.018413 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:16:16.019800 master-0 kubenswrapper[26425]: E0217 15:16:16.018667 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:16:16.019957 master-0 kubenswrapper[26425]: E0217 15:16:16.019934 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:16:16.020185 master-0 kubenswrapper[26425]: E0217 15:16:16.020162 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:16:48.02013349 +0000 UTC m=+69.911857338 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:16:16.696881 master-0 kubenswrapper[26425]: I0217 15:16:16.696798 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 17 15:16:16.697321 master-0 kubenswrapper[26425]: I0217 15:16:16.696893 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 17 15:16:18.709523 master-0 kubenswrapper[26425]: I0217 15:16:18.709401 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:16:18.709523 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:16:18.709523 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:16:18.709523 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:16:18.709523 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:16:18.709523 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:16:18.709523 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:16:18.709523 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:16:18.709523 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:16:18.709523 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:16:18.709523 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:16:18.710915 master-0 kubenswrapper[26425]: I0217 15:16:18.709539 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:16:20.361789 master-0 kubenswrapper[26425]: E0217 15:16:20.361667 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:16:20.640972 master-0 kubenswrapper[26425]: I0217 15:16:20.640786 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-rev/0.log"
Feb 17 15:16:20.642438 master-0 kubenswrapper[26425]: I0217 15:16:20.642377 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-metrics/0.log"
Feb 17 15:16:20.643715 master-0 kubenswrapper[26425]: I0217 15:16:20.643658 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd/0.log"
Feb 17 15:16:20.644380 master-0 kubenswrapper[26425]: I0217 15:16:20.644322 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcdctl/0.log"
Feb 17 15:16:20.646406 master-0 kubenswrapper[26425]: I0217 15:16:20.646341 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 17 15:16:20.792506 master-0 kubenswrapper[26425]: I0217 15:16:20.792365 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") "
Feb 17 15:16:20.792506 master-0 kubenswrapper[26425]: I0217 15:16:20.792446 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") "
Feb 17 15:16:20.792506 master-0 kubenswrapper[26425]: I0217 15:16:20.792526 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") "
Feb 17 15:16:20.792993 master-0 kubenswrapper[26425]: I0217 15:16:20.792568 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:16:20.792993 master-0 kubenswrapper[26425]: I0217 15:16:20.792650 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") "
Feb 17 15:16:20.792993 master-0 kubenswrapper[26425]: I0217 15:16:20.792681 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") "
Feb 17 15:16:20.792993 master-0 kubenswrapper[26425]: I0217 15:16:20.792738 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") "
Feb 17 15:16:20.792993 master-0 kubenswrapper[26425]: I0217 15:16:20.792642 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir" (OuterVolumeSpecName: "log-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:16:20.792993 master-0 kubenswrapper[26425]: I0217 15:16:20.792709 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:16:20.792993 master-0 kubenswrapper[26425]: I0217 15:16:20.792815 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir" (OuterVolumeSpecName: "data-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:16:20.792993 master-0 kubenswrapper[26425]: I0217 15:16:20.792855 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:16:20.793643 master-0 kubenswrapper[26425]: I0217 15:16:20.792998 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:16:20.793643 master-0 kubenswrapper[26425]: I0217 15:16:20.793290 26425 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") on node \"master-0\" DevicePath \"\""
Feb 17 15:16:20.793643 master-0 kubenswrapper[26425]: I0217 15:16:20.793324 26425 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:16:20.793643 master-0 kubenswrapper[26425]: I0217 15:16:20.793349 26425 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:16:20.793643 master-0 kubenswrapper[26425]: I0217 15:16:20.793374 26425 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:16:20.793643 master-0 kubenswrapper[26425]: I0217 15:16:20.793397 26425 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:16:20.793643 master-0 kubenswrapper[26425]: I0217 15:16:20.793420 26425 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:16:21.036387 master-0 kubenswrapper[26425]: I0217 15:16:21.036208 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-rev/0.log"
Feb 17 15:16:21.037653 master-0 kubenswrapper[26425]: I0217 15:16:21.037591 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-metrics/0.log"
Feb 17 15:16:21.038664 master-0 kubenswrapper[26425]: I0217 15:16:21.038608 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd/0.log"
Feb 17 15:16:21.039322 master-0 kubenswrapper[26425]: I0217 15:16:21.039260 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcdctl/0.log"
Feb 17 15:16:21.041208 master-0 kubenswrapper[26425]: I0217 15:16:21.041135 26425 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="9c473e6b1c42e4e97ed6d31b0e52ea86736af7b5464544e2ffea713e961e55df" exitCode=137
Feb 17 15:16:21.041208 master-0 kubenswrapper[26425]: I0217 15:16:21.041177 26425 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="cb3dbeb96630f3d5109d6c4e5a32fbf46326a5066238f4c05eb31fd67e0570ad" exitCode=137
Feb 17 15:16:21.041437 master-0 kubenswrapper[26425]: I0217 15:16:21.041339 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 17 15:16:22.287955 master-0 kubenswrapper[26425]: I0217 15:16:22.287834 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 17 15:16:22.287955 master-0 kubenswrapper[26425]: I0217 15:16:22.287929 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 17 15:16:24.031932 master-0 kubenswrapper[26425]: E0217 15:16:24.031686 26425 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.18951192a8349648 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:401699cb53e7098157e808a83125b0e4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:15:50.025074248 +0000 UTC m=+11.916798066,LastTimestamp:2026-02-17 15:15:50.025074248 +0000 UTC m=+11.916798066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:16:26.696663 master-0 kubenswrapper[26425]: I0217 15:16:26.696574 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 17 15:16:26.697429 master-0 kubenswrapper[26425]: I0217 15:16:26.696660 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 17 15:16:27.717797 master-0 kubenswrapper[26425]: I0217 15:16:27.717724 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:16:27.717797 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:16:27.717797 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:16:27.717797 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:16:27.717797 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:16:27.717797 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:16:27.717797 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:16:27.717797 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:16:27.717797 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:16:27.717797 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:16:27.717797 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:16:27.717797 master-0 kubenswrapper[26425]: I0217 15:16:27.717786 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:16:30.362209 master-0 kubenswrapper[26425]: E0217 15:16:30.362129 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:16:30.413137 master-0 kubenswrapper[26425]: E0217 15:16:30.413036 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:16:30.413492 master-0 kubenswrapper[26425]: E0217 15:16:30.413332 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.019s"
Feb 17 15:16:30.413635 master-0 kubenswrapper[26425]: I0217 15:16:30.413563 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:16:30.413635 master-0 kubenswrapper[26425]: I0217 15:16:30.413617 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:16:30.413897 master-0 kubenswrapper[26425]: I0217 15:16:30.413734 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 17 15:16:30.438195 master-0 kubenswrapper[26425]: I0217 15:16:30.438086 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="401699cb53e7098157e808a83125b0e4" path="/var/lib/kubelet/pods/401699cb53e7098157e808a83125b0e4/volumes"
Feb 17 15:16:30.439674 master-0 kubenswrapper[26425]: I0217 15:16:30.439622 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Feb 17 15:16:30.455778 master-0 kubenswrapper[26425]: I0217 15:16:30.455705 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="56b915f9-7034-4957-846c-ef83087a4288"
Feb 17 15:16:30.455778 master-0 kubenswrapper[26425]: I0217 15:16:30.455761 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="56b915f9-7034-4957-846c-ef83087a4288"
Feb 17 15:16:32.287898 master-0 kubenswrapper[26425]: I0217 15:16:32.287792 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 17 15:16:32.288659 master-0 kubenswrapper[26425]: I0217 15:16:32.287911 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 17 15:16:33.154047 master-0 kubenswrapper[26425]: I0217 15:16:33.153989 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_a3b6a099-f52a-428a-af09-d1842ce66891/installer/0.log"
Feb 17 15:16:33.154283 master-0 kubenswrapper[26425]: I0217 15:16:33.154065 26425 generic.go:334] "Generic (PLEG): container finished" podID="a3b6a099-f52a-428a-af09-d1842ce66891" containerID="ceb525f1242f942ba65ca3fefc2acf99f57e68a8145b1bffbd29b61c0bf59b29" exitCode=1
Feb 17 15:16:36.696882 master-0 kubenswrapper[26425]: I0217 15:16:36.696768 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 17 15:16:36.696882 master-0 kubenswrapper[26425]: I0217 15:16:36.696854 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 17 15:16:36.724685 master-0 kubenswrapper[26425]: I0217 15:16:36.724613 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:16:36.724685 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:16:36.724685 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:16:36.724685 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:16:36.724685 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:16:36.724685 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:16:36.724685 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:16:36.724685 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:16:36.724685 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:16:36.724685 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:16:36.724685 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:16:36.725124 master-0 kubenswrapper[26425]: I0217 15:16:36.724706 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:16:38.333642 master-0 kubenswrapper[26425]: I0217 15:16:38.333527 26425 scope.go:117] "RemoveContainer" containerID="7dd053c55331a8a0d792d5a78e488f015a947989e3e1383dcd1a64fa486a01e5"
Feb 17 15:16:38.370159 master-0 kubenswrapper[26425]: I0217 15:16:38.370118 26425 scope.go:117] "RemoveContainer" containerID="cb3dbeb96630f3d5109d6c4e5a32fbf46326a5066238f4c05eb31fd67e0570ad"
Feb 17 15:16:38.396106 master-0 kubenswrapper[26425]: I0217 15:16:38.396050 26425 scope.go:117] "RemoveContainer" containerID="2a42298516500c9bfa084c410231d2a27dee7fceed15779f0b27fd9d1349b2b0"
Feb 17 15:16:38.426509 master-0 kubenswrapper[26425]: I0217 15:16:38.426432 26425 scope.go:117] "RemoveContainer" containerID="d66ebdf4bf1f41618550520db8e8e13eb193e9411ec23799b8b482aae939538d"
Feb 17 15:16:38.458079 master-0 kubenswrapper[26425]: I0217 15:16:38.458027 26425 scope.go:117] "RemoveContainer" containerID="a52477200afc38c91a493a196c8111943fbf6121e870a10ff7e849d590f6609a"
Feb 17 15:16:38.487024 master-0 kubenswrapper[26425]: I0217 15:16:38.486970 26425 scope.go:117] "RemoveContainer" containerID="bafb1d40abea56e15a55f39238f52822a8e7d4c344f770507c71ed614feff320"
Feb 17 15:16:38.512391 master-0 kubenswrapper[26425]: I0217 15:16:38.512338 26425 scope.go:117] "RemoveContainer" containerID="24bcd9a1fa449d31774c0b2f9747f9f7a7d21ce729de71f7dbfd671b89feec54"
Feb 17 15:16:38.544113 master-0 kubenswrapper[26425]: I0217 15:16:38.544049 26425 scope.go:117] "RemoveContainer" containerID="af8466a0f113f0fd847f0bfc35cfb14199d76e2d0ce6a9816135658a53c788cd"
Feb 17 15:16:38.577274 master-0 kubenswrapper[26425]: I0217 15:16:38.577213 26425 scope.go:117] "RemoveContainer" containerID="9c473e6b1c42e4e97ed6d31b0e52ea86736af7b5464544e2ffea713e961e55df"
Feb 17 15:16:40.364307 master-0 kubenswrapper[26425]: E0217 15:16:40.364174 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:16:45.257284 master-0 kubenswrapper[26425]: I0217 15:16:45.257192 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-xwftw_7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/approver/1.log"
Feb 17 15:16:45.258429 master-0 kubenswrapper[26425]: I0217 15:16:45.258077 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-xwftw_7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/approver/0.log"
Feb 17 15:16:45.258828 master-0 kubenswrapper[26425]: I0217 15:16:45.258725 26425 generic.go:334] "Generic (PLEG): container finished" podID="7c6b911d-8db2-48e8-bce9-d4bcde1f55a0" containerID="be8f29548cec98725a9fe2f2e764da4e1fd8b3547c172ac45765b13bbbf51c52" exitCode=1
Feb 17 15:16:45.732171 master-0 kubenswrapper[26425]: I0217 15:16:45.732061 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:16:45.732171 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:16:45.732171 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:16:45.732171 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:16:45.732171 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:16:45.732171 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:16:45.732171 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:16:45.732171 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:16:45.732171 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:16:45.732171 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:16:45.732171 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:16:45.732171 master-0 kubenswrapper[26425]: I0217 15:16:45.732138 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:16:46.696117 master-0 kubenswrapper[26425]: I0217 15:16:46.696043 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 17 15:16:46.696979 master-0 kubenswrapper[26425]: I0217 15:16:46.696744 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 17 15:16:48.114762 master-0 kubenswrapper[26425]: I0217 15:16:48.114672 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:16:48.115689 master-0 kubenswrapper[26425]: E0217 15:16:48.114899 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:16:48.115689 master-0 kubenswrapper[26425]: E0217 15:16:48.114951 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:16:48.115689 master-0 kubenswrapper[26425]: E0217 15:16:48.115039 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:17:52.115003837 +0000 UTC m=+134.006727695 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:16:50.189341 master-0 kubenswrapper[26425]: I0217 15:16:50.189257 26425 status_manager.go:851] "Failed to get status for pod" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods marketplace-operator-6cc5b65c6b-wqxmh)"
Feb 17 15:16:50.365842 master-0 kubenswrapper[26425]: E0217 15:16:50.365535 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:16:50.365842 master-0 kubenswrapper[26425]: I0217 15:16:50.365622 26425 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 17 15:16:54.738163 master-0 kubenswrapper[26425]: I0217 15:16:54.738055 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:16:54.738163 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:16:54.738163 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:16:54.738163 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:16:54.738163 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:16:54.738163 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:16:54.738163 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:16:54.738163 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:16:54.738163 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:16:54.738163 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:16:54.738163 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:16:54.738163 master-0 kubenswrapper[26425]: I0217 15:16:54.738138 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:16:56.696691 master-0 kubenswrapper[26425]: I0217 15:16:56.696602 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 17 15:16:56.697768 master-0 kubenswrapper[26425]: I0217 15:16:56.696708 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 17 15:16:58.034870 master-0 kubenswrapper[26425]: E0217 15:16:58.034640 26425 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.18951192a8354157 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:401699cb53e7098157e808a83125b0e4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Killing,Message:Stopping container etcd-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:15:50.025118039 +0000 UTC m=+11.916841857,LastTimestamp:2026-02-17 15:15:50.025118039 +0000 UTC m=+11.916841857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:17:00.366730 master-0 kubenswrapper[26425]: E0217 15:17:00.366622 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Feb 17 15:17:03.745682 master-0 kubenswrapper[26425]: I0217 15:17:03.745587 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:17:03.745682 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:17:03.745682 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:17:03.745682 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:17:03.745682 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:17:03.745682 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:17:03.745682 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:17:03.745682 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:17:03.745682 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:17:03.745682 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:17:03.745682 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:17:03.747332 master-0 kubenswrapper[26425]: I0217 15:17:03.745679 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:17:04.442740 master-0 kubenswrapper[26425]: E0217 15:17:04.442634 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:17:04.443049 master-0 kubenswrapper[26425]: E0217 15:17:04.442971 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.029s"
Feb 17 15:17:04.457733 master-0 kubenswrapper[26425]: I0217 15:17:04.457659 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Feb 17 15:17:04.458508 master-0 kubenswrapper[26425]: E0217 15:17:04.458422 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 17 15:17:04.459153 master-0 kubenswrapper[26425]: I0217 15:17:04.459102 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 17 15:17:04.493232 master-0 kubenswrapper[26425]: W0217 15:17:04.493165 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7adecad495595c43c57c30abd350e987.slice/crio-4fdbb0e3f6f5d5f76b963148da342174a5211018be79b6c667e48791f719b4bf WatchSource:0}: Error finding container 4fdbb0e3f6f5d5f76b963148da342174a5211018be79b6c667e48791f719b4bf: Status 404 returned error can't find the container with id 4fdbb0e3f6f5d5f76b963148da342174a5211018be79b6c667e48791f719b4bf
Feb 17 15:17:05.439748 master-0 kubenswrapper[26425]: I0217 15:17:05.439688 26425 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="7d5bbe35353878dc65758a0ca44e388ed895cebe20ab313a7b7befbc3305a9c8" exitCode=0
Feb 17 15:17:06.696739 master-0 kubenswrapper[26425]: I0217 15:17:06.696652 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 17 15:17:06.697639 master-0 kubenswrapper[26425]: I0217 15:17:06.696751 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 17 15:17:10.568125 master-0 kubenswrapper[26425]: E0217 15:17:10.567962 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 17 15:17:13.275545 master-0 kubenswrapper[26425]: I0217 15:17:13.275382 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:17:13.275545 master-0 kubenswrapper[26425]: [+]log ok Feb 17 15:17:13.275545 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld Feb 17 15:17:13.275545 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:17:13.275545 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:17:13.275545 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:17:13.275545 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:17:13.275545 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 17 15:17:13.275545 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 17 15:17:13.275545 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 17 15:17:13.275545 master-0 kubenswrapper[26425]: livez check failed Feb 17 15:17:13.277055 master-0 kubenswrapper[26425]: I0217 15:17:13.275549 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:17:16.696626 master-0 kubenswrapper[26425]: I0217 15:17:16.696513 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": 
dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 17 15:17:16.696626 master-0 kubenswrapper[26425]: I0217 15:17:16.696598 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 17 15:17:20.970104 master-0 kubenswrapper[26425]: E0217 15:17:20.969946 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 17 15:17:22.283957 master-0 kubenswrapper[26425]: I0217 15:17:22.283887 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:17:22.283957 master-0 kubenswrapper[26425]: [+]log ok Feb 17 15:17:22.283957 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld Feb 17 15:17:22.283957 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:17:22.283957 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:17:22.283957 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:17:22.283957 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:17:22.283957 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 17 15:17:22.283957 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 17 15:17:22.283957 master-0 
kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 17 15:17:22.283957 master-0 kubenswrapper[26425]: livez check failed Feb 17 15:17:22.283957 master-0 kubenswrapper[26425]: I0217 15:17:22.283954 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:17:26.642365 master-0 kubenswrapper[26425]: I0217 15:17:26.642288 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-jdfsm_68954d1e-2147-4465-9817-a3c04cbc19b0/manager/1.log" Feb 17 15:17:26.643489 master-0 kubenswrapper[26425]: I0217 15:17:26.643366 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-jdfsm_68954d1e-2147-4465-9817-a3c04cbc19b0/manager/0.log" Feb 17 15:17:26.644246 master-0 kubenswrapper[26425]: I0217 15:17:26.644157 26425 generic.go:334] "Generic (PLEG): container finished" podID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerID="60c37bbe21721a193105735329bdb72d13d00d18b75bdb6198c01ec145d996cc" exitCode=1 Feb 17 15:17:26.695984 master-0 kubenswrapper[26425]: I0217 15:17:26.695885 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 17 15:17:26.695984 master-0 kubenswrapper[26425]: I0217 15:17:26.695967 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 17 15:17:27.655834 master-0 kubenswrapper[26425]: I0217 15:17:27.655759 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/3.log" Feb 17 15:17:27.656862 master-0 kubenswrapper[26425]: I0217 15:17:27.656555 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/2.log" Feb 17 15:17:27.656862 master-0 kubenswrapper[26425]: I0217 15:17:27.656611 26425 generic.go:334] "Generic (PLEG): container finished" podID="129dba1e-73df-4ea4-96c0-3eba78d568ba" containerID="ef80e89f464f2fddabc8382f1aaea540a66323e02f01f8d399ba62bafcf783cc" exitCode=1 Feb 17 15:17:28.683141 master-0 kubenswrapper[26425]: I0217 15:17:28.683070 26425 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body= Feb 17 15:17:28.684002 master-0 kubenswrapper[26425]: I0217 15:17:28.683153 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" Feb 17 15:17:31.291902 master-0 kubenswrapper[26425]: I0217 15:17:31.291784 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping 
ok Feb 17 15:17:31.291902 master-0 kubenswrapper[26425]: [+]log ok Feb 17 15:17:31.291902 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld Feb 17 15:17:31.291902 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:17:31.291902 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:17:31.291902 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:17:31.291902 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:17:31.291902 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 17 15:17:31.291902 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 17 15:17:31.291902 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 17 15:17:31.291902 master-0 kubenswrapper[26425]: livez check failed Feb 17 15:17:31.291902 master-0 kubenswrapper[26425]: I0217 15:17:31.291884 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:17:31.772014 master-0 kubenswrapper[26425]: E0217 15:17:31.771905 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="1.6s" Feb 17 15:17:32.038079 master-0 kubenswrapper[26425]: E0217 15:17:32.037743 26425 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.18951192a835bdf3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:401699cb53e7098157e808a83125b0e4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Killing,Message:Stopping container etcd-metrics,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:15:50.025149939 +0000 UTC m=+11.916873757,LastTimestamp:2026-02-17 15:15:50.025149939 +0000 UTC m=+11.916873757,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:17:33.718551 master-0 kubenswrapper[26425]: I0217 15:17:33.718418 26425 generic.go:334] "Generic (PLEG): container finished" podID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerID="2784ec26a7dc2f4e62d2f496a1d001e9cb435129496d0a04f4f22a42f1a50608" exitCode=0 Feb 17 15:17:36.695861 master-0 kubenswrapper[26425]: I0217 15:17:36.695759 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 17 15:17:36.695861 master-0 kubenswrapper[26425]: I0217 15:17:36.695820 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 17 15:17:38.461002 master-0 kubenswrapper[26425]: E0217 15:17:38.460917 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:17:38.462049 
master-0 kubenswrapper[26425]: E0217 15:17:38.461223 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.018s" Feb 17 15:17:38.462049 master-0 kubenswrapper[26425]: I0217 15:17:38.461332 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:17:38.474929 master-0 kubenswrapper[26425]: I0217 15:17:38.474853 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Feb 17 15:17:38.683537 master-0 kubenswrapper[26425]: I0217 15:17:38.683428 26425 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body= Feb 17 15:17:38.683818 master-0 kubenswrapper[26425]: I0217 15:17:38.683564 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" Feb 17 15:17:38.699848 master-0 kubenswrapper[26425]: I0217 15:17:38.699766 26425 scope.go:117] "RemoveContainer" containerID="43796d7d27cac90e31c0e4d2ee9bf43eddeb31538289e18b8ee843798af029b2" Feb 17 15:17:38.753333 master-0 kubenswrapper[26425]: I0217 15:17:38.753013 26425 scope.go:117] "RemoveContainer" containerID="55d3b1057ac7a6ad2c1bad42aa92f8880f4cec28c612f7db8db1627fa4374902" Feb 17 15:17:38.808653 master-0 kubenswrapper[26425]: I0217 15:17:38.808592 26425 scope.go:117] "RemoveContainer" containerID="39e5d190c1de962c17b93f9f892d9c95fb301c2b359b235051f10e8c679da55c" Feb 17 15:17:38.866178 master-0 kubenswrapper[26425]: I0217 
15:17:38.865844 26425 scope.go:117] "RemoveContainer" containerID="e039cb4463938f81d7404a930ef7ab4b00269f6ed6b9151f252951ea9d381dc4" Feb 17 15:17:39.315065 master-0 kubenswrapper[26425]: I0217 15:17:39.314897 26425 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.36:8081/healthz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body= Feb 17 15:17:39.315065 master-0 kubenswrapper[26425]: I0217 15:17:39.315008 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/healthz\": dial tcp 10.128.0.36:8081: connect: connection refused" Feb 17 15:17:39.774151 master-0 kubenswrapper[26425]: I0217 15:17:39.774047 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-xwftw_7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/approver/1.log" Feb 17 15:17:39.778020 master-0 kubenswrapper[26425]: I0217 15:17:39.777954 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/3.log" Feb 17 15:17:39.781353 master-0 kubenswrapper[26425]: I0217 15:17:39.781293 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-jdfsm_68954d1e-2147-4465-9817-a3c04cbc19b0/manager/1.log" Feb 17 15:17:39.859849 master-0 kubenswrapper[26425]: I0217 15:17:39.859746 26425 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 
10.128.0.14:8080: connect: connection refused" start-of-body= Feb 17 15:17:39.859849 master-0 kubenswrapper[26425]: I0217 15:17:39.859843 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" Feb 17 15:17:40.188772 master-0 kubenswrapper[26425]: I0217 15:17:40.188695 26425 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Feb 17 15:17:40.189030 master-0 kubenswrapper[26425]: I0217 15:17:40.188781 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" Feb 17 15:17:40.299494 master-0 kubenswrapper[26425]: I0217 15:17:40.299385 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:17:40.299494 master-0 kubenswrapper[26425]: [+]log ok Feb 17 15:17:40.299494 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld Feb 17 15:17:40.299494 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:17:40.299494 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:17:40.299494 master-0 kubenswrapper[26425]: 
[+]poststarthook/max-in-flight-filter ok Feb 17 15:17:40.299494 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:17:40.299494 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 17 15:17:40.299494 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 17 15:17:40.299494 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 17 15:17:40.299494 master-0 kubenswrapper[26425]: livez check failed Feb 17 15:17:40.300316 master-0 kubenswrapper[26425]: I0217 15:17:40.299556 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:17:43.373582 master-0 kubenswrapper[26425]: E0217 15:17:43.373408 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 17 15:17:46.697179 master-0 kubenswrapper[26425]: I0217 15:17:46.697074 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 17 15:17:46.697179 master-0 kubenswrapper[26425]: I0217 15:17:46.697159 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 17 15:17:48.683471 master-0 kubenswrapper[26425]: I0217 15:17:48.683330 26425 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body= Feb 17 15:17:48.684069 master-0 kubenswrapper[26425]: I0217 15:17:48.683504 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" Feb 17 15:17:49.307308 master-0 kubenswrapper[26425]: I0217 15:17:49.307214 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:17:49.307308 master-0 kubenswrapper[26425]: [+]log ok Feb 17 15:17:49.307308 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld Feb 17 15:17:49.307308 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:17:49.307308 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:17:49.307308 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:17:49.307308 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:17:49.307308 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 17 15:17:49.307308 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 17 15:17:49.307308 master-0 
kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 17 15:17:49.307308 master-0 kubenswrapper[26425]: livez check failed Feb 17 15:17:49.308292 master-0 kubenswrapper[26425]: I0217 15:17:49.307304 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:17:49.859794 master-0 kubenswrapper[26425]: I0217 15:17:49.859714 26425 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Feb 17 15:17:49.860746 master-0 kubenswrapper[26425]: I0217 15:17:49.859823 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" Feb 17 15:17:50.188837 master-0 kubenswrapper[26425]: I0217 15:17:50.188744 26425 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Feb 17 15:17:50.189179 master-0 kubenswrapper[26425]: I0217 15:17:50.188840 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial 
tcp 10.128.0.14:8080: connect: connection refused" Feb 17 15:17:50.190953 master-0 kubenswrapper[26425]: I0217 15:17:50.190855 26425 status_manager.go:851] "Failed to get status for pod" podUID="401699cb53e7098157e808a83125b0e4" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)" Feb 17 15:17:50.879103 master-0 kubenswrapper[26425]: I0217 15:17:50.879004 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/5.log" Feb 17 15:17:50.880736 master-0 kubenswrapper[26425]: I0217 15:17:50.880676 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/cluster-cloud-controller-manager/0.log" Feb 17 15:17:50.880874 master-0 kubenswrapper[26425]: I0217 15:17:50.880741 26425 generic.go:334] "Generic (PLEG): container finished" podID="14723cb7-2d96-42b7-b559-70386c4c841c" containerID="7b0bc73a19929878c76a20f8913258b82b0659b1d457e21ec06a82cf6b136195" exitCode=1 Feb 17 15:17:52.195537 master-0 kubenswrapper[26425]: I0217 15:17:52.195408 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:17:52.196597 master-0 kubenswrapper[26425]: E0217 15:17:52.195683 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:17:52.196597 master-0 kubenswrapper[26425]: E0217 
15:17:52.195731 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:17:52.196597 master-0 kubenswrapper[26425]: E0217 15:17:52.195805 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:19:54.195779397 +0000 UTC m=+256.087503255 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:17:52.900611 master-0 kubenswrapper[26425]: I0217 15:17:52.900529 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/5.log"
Feb 17 15:17:52.901376 master-0 kubenswrapper[26425]: I0217 15:17:52.901348 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/config-sync-controllers/0.log"
Feb 17 15:17:52.902088 master-0 kubenswrapper[26425]: I0217 15:17:52.902066 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/cluster-cloud-controller-manager/0.log"
Feb 17 15:17:52.902237 master-0 kubenswrapper[26425]: I0217 15:17:52.902208 26425 generic.go:334] "Generic (PLEG): container finished" podID="14723cb7-2d96-42b7-b559-70386c4c841c" containerID="426e84564cdde730130665e18be2c56771ee413958b73511ab6a3d57c4226dd6" exitCode=1
Feb 17 15:17:56.575299 master-0 kubenswrapper[26425]: E0217 15:17:56.575174 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Feb 17 15:17:56.696544 master-0 kubenswrapper[26425]: I0217 15:17:56.696354 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 17 15:17:56.696544 master-0 kubenswrapper[26425]: I0217 15:17:56.696502 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 17 15:17:56.938635 master-0 kubenswrapper[26425]: I0217 15:17:56.938555 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-4n2ls_50c51fe2-32aa-430f-8da0-7cf3b9519131/manager/1.log"
Feb 17 15:17:56.940124 master-0 kubenswrapper[26425]: I0217 15:17:56.940061 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-4n2ls_50c51fe2-32aa-430f-8da0-7cf3b9519131/manager/0.log"
Feb 17 15:17:56.940271 master-0 kubenswrapper[26425]: I0217 15:17:56.940157 26425 generic.go:334] "Generic (PLEG): container finished" podID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerID="e78076928670aead1e74a90bfe18141b9748ba5b397af907cd88d6d09ee87278" exitCode=1
Feb 17 15:17:58.072840 master-0 kubenswrapper[26425]: I0217 15:17:58.072742 26425 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-4n2ls container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Feb 17 15:17:58.072840 master-0 kubenswrapper[26425]: I0217 15:17:58.072841 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podUID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Feb 17 15:17:58.316918 master-0 kubenswrapper[26425]: I0217 15:17:58.316690 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:17:58.316918 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:17:58.316918 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:17:58.316918 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:17:58.316918 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:17:58.316918 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:17:58.316918 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:17:58.316918 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:17:58.316918 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:17:58.316918 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:17:58.316918 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:17:58.316918 master-0 kubenswrapper[26425]: I0217 15:17:58.316794 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:17:58.683777 master-0 kubenswrapper[26425]: I0217 15:17:58.683679 26425 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body=
Feb 17 15:17:58.683777 master-0 kubenswrapper[26425]: I0217 15:17:58.683766 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused"
Feb 17 15:17:58.960253 master-0 kubenswrapper[26425]: I0217 15:17:58.960092 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-d8bf84b88-hmpc7_b4422676-9a70-4973-8299-7b40a66e9c96/control-plane-machine-set-operator/0.log"
Feb 17 15:17:58.960253 master-0 kubenswrapper[26425]: I0217 15:17:58.960186 26425 generic.go:334] "Generic (PLEG): container finished" podID="b4422676-9a70-4973-8299-7b40a66e9c96" containerID="b1199a6a02a6f0066cde070bc688012a60c6dbb64c28d3d555d30add6fcebc27" exitCode=1
Feb 17 15:17:59.314354 master-0 kubenswrapper[26425]: I0217 15:17:59.314164 26425 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.36:8081/healthz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body=
Feb 17 15:17:59.314354 master-0 kubenswrapper[26425]: I0217 15:17:59.314256 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/healthz\": dial tcp 10.128.0.36:8081: connect: connection refused"
Feb 17 15:17:59.859457 master-0 kubenswrapper[26425]: I0217 15:17:59.859400 26425 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body=
Feb 17 15:17:59.859694 master-0 kubenswrapper[26425]: I0217 15:17:59.859493 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused"
Feb 17 15:18:00.188294 master-0 kubenswrapper[26425]: I0217 15:18:00.188184 26425 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body=
Feb 17 15:18:00.188294 master-0 kubenswrapper[26425]: I0217 15:18:00.188273 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused"
Feb 17 15:18:06.041991 master-0 kubenswrapper[26425]: E0217 15:18:06.041778 26425 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.18951192a8362618 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:401699cb53e7098157e808a83125b0e4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:15:50.0251766 +0000 UTC m=+11.916900418,LastTimestamp:2026-02-17 15:15:50.0251766 +0000 UTC m=+11.916900418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:18:06.696311 master-0 kubenswrapper[26425]: I0217 15:18:06.696221 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 17 15:18:06.696311 master-0 kubenswrapper[26425]: I0217 15:18:06.696304 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 17 15:18:07.324533 master-0 kubenswrapper[26425]: I0217 15:18:07.324386 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:18:07.324533 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:18:07.324533 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:18:07.324533 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:18:07.324533 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:18:07.324533 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:18:07.324533 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:18:07.324533 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:18:07.324533 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:18:07.324533 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:18:07.324533 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:18:07.326341 master-0 kubenswrapper[26425]: I0217 15:18:07.324539 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:18:08.072227 master-0 kubenswrapper[26425]: I0217 15:18:08.072094 26425 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-4n2ls container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Feb 17 15:18:08.072227 master-0 kubenswrapper[26425]: I0217 15:18:08.072201 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" podUID="50c51fe2-32aa-430f-8da0-7cf3b9519131" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Feb 17 15:18:08.683416 master-0 kubenswrapper[26425]: I0217 15:18:08.683311 26425 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-jdfsm container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body=
Feb 17 15:18:08.684326 master-0 kubenswrapper[26425]: I0217 15:18:08.683416 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" podUID="68954d1e-2147-4465-9817-a3c04cbc19b0" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused"
Feb 17 15:18:10.188850 master-0 kubenswrapper[26425]: I0217 15:18:10.188760 26425 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-wqxmh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body=
Feb 17 15:18:10.189771 master-0 kubenswrapper[26425]: I0217 15:18:10.188858 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused"
Feb 17 15:18:12.478200 master-0 kubenswrapper[26425]: E0217 15:18:12.478078 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:18:12.479165 master-0 kubenswrapper[26425]: E0217 15:18:12.478416 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.017s"
Feb 17 15:18:12.479165 master-0 kubenswrapper[26425]: I0217 15:18:12.478478 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:18:12.479165 master-0 kubenswrapper[26425]: I0217 15:18:12.478519 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"70e43034-56d0-4fb2-8886-deb00b625686","Type":"ContainerDied","Data":"762936faf720fbf8fc66c224dfa462878affad1249ed16705950254bc5043c3c"}
Feb 17 15:18:12.479165 master-0 kubenswrapper[26425]: I0217 15:18:12.478559 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z"
Feb 17 15:18:12.479165 master-0 kubenswrapper[26425]: I0217 15:18:12.478580 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:18:12.479708 master-0 kubenswrapper[26425]: I0217 15:18:12.479433 26425 scope.go:117] "RemoveContainer" containerID="2784ec26a7dc2f4e62d2f496a1d001e9cb435129496d0a04f4f22a42f1a50608"
Feb 17 15:18:12.482305 master-0 kubenswrapper[26425]: I0217 15:18:12.482262 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:18:12.482401 master-0 kubenswrapper[26425]: I0217 15:18:12.482327 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdgrn"
Feb 17 15:18:12.482401 master-0 kubenswrapper[26425]: I0217 15:18:12.482342 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerDied","Data":"a93de2c6661a7a022268979fd5a510b5d956da3fa477eae77c55cc327249aabd"}
Feb 17 15:18:12.484298 master-0 kubenswrapper[26425]: I0217 15:18:12.482826 26425 scope.go:117] "RemoveContainer" containerID="b1199a6a02a6f0066cde070bc688012a60c6dbb64c28d3d555d30add6fcebc27"
Feb 17 15:18:12.486690 master-0 kubenswrapper[26425]: I0217 15:18:12.485049 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="56b915f9-7034-4957-846c-ef83087a4288"
Feb 17 15:18:12.486690 master-0 kubenswrapper[26425]: I0217 15:18:12.485084 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="56b915f9-7034-4957-846c-ef83087a4288"
Feb 17 15:18:12.486690 master-0 kubenswrapper[26425]: I0217 15:18:12.485111 26425 scope.go:117] "RemoveContainer" containerID="a93de2c6661a7a022268979fd5a510b5d956da3fa477eae77c55cc327249aabd"
Feb 17 15:18:12.486690 master-0 kubenswrapper[26425]: I0217 15:18:12.486140 26425 scope.go:117] "RemoveContainer" containerID="be8f29548cec98725a9fe2f2e764da4e1fd8b3547c172ac45765b13bbbf51c52"
Feb 17 15:18:12.494003 master-0 kubenswrapper[26425]: I0217 15:18:12.492713 26425 scope.go:117] "RemoveContainer" containerID="7b0bc73a19929878c76a20f8913258b82b0659b1d457e21ec06a82cf6b136195"
Feb 17 15:18:12.494003 master-0 kubenswrapper[26425]: I0217 15:18:12.492746 26425 scope.go:117] "RemoveContainer" containerID="426e84564cdde730130665e18be2c56771ee413958b73511ab6a3d57c4226dd6"
Feb 17 15:18:12.494003 master-0 kubenswrapper[26425]: I0217 15:18:12.493368 26425 scope.go:117] "RemoveContainer" containerID="ef80e89f464f2fddabc8382f1aaea540a66323e02f01f8d399ba62bafcf783cc"
Feb 17 15:18:12.495621 master-0 kubenswrapper[26425]: I0217 15:18:12.495015 26425 scope.go:117] "RemoveContainer" containerID="e78076928670aead1e74a90bfe18141b9748ba5b397af907cd88d6d09ee87278"
Feb 17 15:18:12.496243 master-0 kubenswrapper[26425]: I0217 15:18:12.496196 26425 scope.go:117] "RemoveContainer" containerID="60c37bbe21721a193105735329bdb72d13d00d18b75bdb6198c01ec145d996cc"
Feb 17 15:18:12.498150 master-0 kubenswrapper[26425]: I0217 15:18:12.497994 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Feb 17 15:18:12.977389 master-0 kubenswrapper[26425]: E0217 15:18:12.976783 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:18:13.320708 master-0 kubenswrapper[26425]: I0217 15:18:13.320643 26425 generic.go:334] "Generic (PLEG): container finished" podID="31e31afc-79d5-46f4-9835-0fd11da9465f" containerID="e6582b397c9a839f2d6d03076dc105158f9bf90ad6efb080207cea9f74d8064c" exitCode=0
Feb 17 15:18:13.326086 master-0 kubenswrapper[26425]: I0217 15:18:13.326048 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log"
Feb 17 15:18:13.326273 master-0 kubenswrapper[26425]: I0217 15:18:13.326095 26425 generic.go:334] "Generic (PLEG): container finished" podID="27fd92ef556705625a2e4f1011322252" containerID="9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0" exitCode=0
Feb 17 15:18:13.712587 master-0 kubenswrapper[26425]: I0217 15:18:13.711015 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 17 15:18:13.718314 master-0 kubenswrapper[26425]: I0217 15:18:13.716409 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_a3b6a099-f52a-428a-af09-d1842ce66891/installer/0.log"
Feb 17 15:18:13.718314 master-0 kubenswrapper[26425]: I0217 15:18:13.716498 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:18:13.844249 master-0 kubenswrapper[26425]: I0217 15:18:13.844154 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70e43034-56d0-4fb2-8886-deb00b625686-kube-api-access\") pod \"70e43034-56d0-4fb2-8886-deb00b625686\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") "
Feb 17 15:18:13.844249 master-0 kubenswrapper[26425]: I0217 15:18:13.844248 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-var-lock\") pod \"a3b6a099-f52a-428a-af09-d1842ce66891\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") "
Feb 17 15:18:13.844537 master-0 kubenswrapper[26425]: I0217 15:18:13.844292 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-kubelet-dir\") pod \"70e43034-56d0-4fb2-8886-deb00b625686\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") "
Feb 17 15:18:13.844537 master-0 kubenswrapper[26425]: I0217 15:18:13.844338 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-var-lock\") pod \"70e43034-56d0-4fb2-8886-deb00b625686\" (UID: \"70e43034-56d0-4fb2-8886-deb00b625686\") "
Feb 17 15:18:13.844537 master-0 kubenswrapper[26425]: I0217 15:18:13.844401 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-kubelet-dir\") pod \"a3b6a099-f52a-428a-af09-d1842ce66891\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") "
Feb 17 15:18:13.844711 master-0 kubenswrapper[26425]: I0217 15:18:13.844570 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-var-lock" (OuterVolumeSpecName: "var-lock") pod "a3b6a099-f52a-428a-af09-d1842ce66891" (UID: "a3b6a099-f52a-428a-af09-d1842ce66891"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:18:13.845294 master-0 kubenswrapper[26425]: I0217 15:18:13.845218 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "70e43034-56d0-4fb2-8886-deb00b625686" (UID: "70e43034-56d0-4fb2-8886-deb00b625686"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:18:13.845294 master-0 kubenswrapper[26425]: I0217 15:18:13.845246 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-var-lock" (OuterVolumeSpecName: "var-lock") pod "70e43034-56d0-4fb2-8886-deb00b625686" (UID: "70e43034-56d0-4fb2-8886-deb00b625686"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:18:13.845449 master-0 kubenswrapper[26425]: I0217 15:18:13.845333 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a3b6a099-f52a-428a-af09-d1842ce66891" (UID: "a3b6a099-f52a-428a-af09-d1842ce66891"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:18:13.846412 master-0 kubenswrapper[26425]: I0217 15:18:13.846358 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3b6a099-f52a-428a-af09-d1842ce66891-kube-api-access\") pod \"a3b6a099-f52a-428a-af09-d1842ce66891\" (UID: \"a3b6a099-f52a-428a-af09-d1842ce66891\") "
Feb 17 15:18:13.847144 master-0 kubenswrapper[26425]: I0217 15:18:13.847101 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e43034-56d0-4fb2-8886-deb00b625686-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "70e43034-56d0-4fb2-8886-deb00b625686" (UID: "70e43034-56d0-4fb2-8886-deb00b625686"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:18:13.847385 master-0 kubenswrapper[26425]: I0217 15:18:13.847351 26425 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 17 15:18:13.847385 master-0 kubenswrapper[26425]: I0217 15:18:13.847379 26425 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:18:13.847574 master-0 kubenswrapper[26425]: I0217 15:18:13.847399 26425 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70e43034-56d0-4fb2-8886-deb00b625686-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 17 15:18:13.847574 master-0 kubenswrapper[26425]: I0217 15:18:13.847443 26425 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3b6a099-f52a-428a-af09-d1842ce66891-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:18:13.847574 master-0 kubenswrapper[26425]: I0217 15:18:13.847485 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70e43034-56d0-4fb2-8886-deb00b625686-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 17 15:18:13.849392 master-0 kubenswrapper[26425]: I0217 15:18:13.849348 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3b6a099-f52a-428a-af09-d1842ce66891-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a3b6a099-f52a-428a-af09-d1842ce66891" (UID: "a3b6a099-f52a-428a-af09-d1842ce66891"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:18:13.949153 master-0 kubenswrapper[26425]: I0217 15:18:13.949099 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3b6a099-f52a-428a-af09-d1842ce66891-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 17 15:18:14.342987 master-0 kubenswrapper[26425]: I0217 15:18:14.342945 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-4n2ls_50c51fe2-32aa-430f-8da0-7cf3b9519131/manager/1.log"
Feb 17 15:18:14.343988 master-0 kubenswrapper[26425]: I0217 15:18:14.343965 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-4n2ls_50c51fe2-32aa-430f-8da0-7cf3b9519131/manager/0.log"
Feb 17 15:18:14.347125 master-0 kubenswrapper[26425]: I0217 15:18:14.347103 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/kube-rbac-proxy/5.log"
Feb 17 15:18:14.347774 master-0 kubenswrapper[26425]: I0217 15:18:14.347730 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/config-sync-controllers/0.log"
Feb 17 15:18:14.348481 master-0 kubenswrapper[26425]: I0217 15:18:14.348433 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_14723cb7-2d96-42b7-b559-70386c4c841c/cluster-cloud-controller-manager/0.log"
Feb 17 15:18:14.351122 master-0 kubenswrapper[26425]: I0217 15:18:14.351087 26425 generic.go:334] "Generic (PLEG): container finished" podID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerID="fbf19d6eb89d3cc981a668b940fbc4bb8dd5e78643b56d6ce5b9a6d44a5d26d8" exitCode=0
Feb 17 15:18:14.353141 master-0 kubenswrapper[26425]: I0217 15:18:14.353102 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_a3b6a099-f52a-428a-af09-d1842ce66891/installer/0.log"
Feb 17 15:18:14.353338 master-0 kubenswrapper[26425]: I0217 15:18:14.353275 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 17 15:18:14.356718 master-0 kubenswrapper[26425]: I0217 15:18:14.356662 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-xwftw_7c6b911d-8db2-48e8-bce9-d4bcde1f55a0/approver/1.log"
Feb 17 15:18:14.360062 master-0 kubenswrapper[26425]: I0217 15:18:14.360009 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/3.log"
Feb 17 15:18:14.363255 master-0 kubenswrapper[26425]: I0217 15:18:14.363181 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 17 15:18:14.366554 master-0 kubenswrapper[26425]: I0217 15:18:14.366488 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-d8bf84b88-hmpc7_b4422676-9a70-4973-8299-7b40a66e9c96/control-plane-machine-set-operator/0.log"
Feb 17 15:18:14.370082 master-0 kubenswrapper[26425]: I0217 15:18:14.370019 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-jdfsm_68954d1e-2147-4465-9817-a3c04cbc19b0/manager/1.log"
Feb 17 15:18:14.375442 master-0 kubenswrapper[26425]: I0217 15:18:14.375382 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log"
Feb 17 15:18:14.376266 master-0 kubenswrapper[26425]: I0217 15:18:14.376208 26425 scope.go:117] "RemoveContainer" containerID="9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0"
Feb 17 15:18:15.388872 master-0 kubenswrapper[26425]: I0217 15:18:15.388822 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-f9g8s_76d3da23-3347-4a5c-b328-d92671897ecc/machine-approver-controller/0.log"
Feb 17 15:18:15.389719 master-0 kubenswrapper[26425]: I0217 15:18:15.389380 26425 generic.go:334] "Generic (PLEG): container finished" podID="76d3da23-3347-4a5c-b328-d92671897ecc" containerID="cd41dc79695d9c0bd45ab8f72b3cf6af9d3af76fe51f2138f55c128fc6c09071" exitCode=255
Feb 17 15:18:15.394324 master-0 kubenswrapper[26425]: I0217 15:18:15.394284 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log"
Feb 17 15:18:16.333303 master-0 kubenswrapper[26425]: I0217 15:18:16.333205 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:18:16.333303 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:18:16.333303 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:18:16.333303 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:18:16.333303 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:18:16.333303 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:18:16.333303 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:18:16.333303 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:18:16.333303 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:18:16.333303 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:18:16.333303 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:18:16.334180 master-0 kubenswrapper[26425]: I0217 15:18:16.333304 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:18:17.454706 master-0 kubenswrapper[26425]: I0217 15:18:17.454578 26425 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body=
Feb 17 15:18:17.454706 master-0 kubenswrapper[26425]: I0217 15:18:17.454679 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused"
Feb 17 15:18:22.413163 master-0 kubenswrapper[26425]: I0217 15:18:22.413037 26425 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body=
Feb 17 15:18:22.413972 master-0 kubenswrapper[26425]: I0217 15:18:22.413168 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused"
Feb 17 15:18:25.340977 master-0 kubenswrapper[26425]: I0217 15:18:25.340849 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:18:25.340977 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:18:25.340977 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:18:25.340977 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:18:25.340977 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:18:25.340977 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:18:25.340977 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:18:25.340977 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:18:25.340977 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:18:25.340977 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:18:25.340977 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:18:25.340977 master-0 kubenswrapper[26425]: I0217 15:18:25.340957 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:18:27.454620 master-0 kubenswrapper[26425]: I0217 15:18:27.454508 26425 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body=
Feb 17 15:18:27.454620 master-0 kubenswrapper[26425]: I0217 15:18:27.454614 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused"
Feb 17 15:18:28.516776 master-0 kubenswrapper[26425]: I0217 15:18:28.516657 26425 generic.go:334] "Generic (PLEG): container finished" podID="952766c3a88fd12345a552f1277199f9" containerID="5591dc378b699313a005026d26c38a2b4e16d14b25114eea56b910683dfe3933" exitCode=0
Feb 17 15:18:29.978684 master-0 kubenswrapper[26425]: E0217 15:18:29.978583 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:18:32.413014 master-0 kubenswrapper[26425]: I0217 15:18:32.412867 26425 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body=
Feb 17 15:18:32.413014 master-0 kubenswrapper[26425]: I0217 15:18:32.412953 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused"
Feb 17 15:18:33.565245 master-0 kubenswrapper[26425]: I0217 15:18:33.565065 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/0.log"
Feb 17 15:18:33.565245 master-0 kubenswrapper[26425]: I0217 15:18:33.565170 26425 generic.go:334] "Generic (PLEG): container finished" podID="7307f70e-ee5b-4f81-8155-718a02c9efe7" containerID="6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099" exitCode=1
Feb 17 15:18:34.349860 master-0 kubenswrapper[26425]: I0217 15:18:34.349746 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:18:34.349860 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:18:34.349860 master-0 kubenswrapper[26425]: [-]etcd failed: 
reason withheld Feb 17 15:18:34.349860 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:18:34.349860 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:18:34.349860 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:18:34.349860 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:18:34.349860 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 17 15:18:34.349860 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 17 15:18:34.349860 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 17 15:18:34.349860 master-0 kubenswrapper[26425]: livez check failed Feb 17 15:18:34.349860 master-0 kubenswrapper[26425]: I0217 15:18:34.349841 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:18:35.581441 master-0 kubenswrapper[26425]: I0217 15:18:35.581381 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_952766c3a88fd12345a552f1277199f9/kube-scheduler/0.log" Feb 17 15:18:35.582038 master-0 kubenswrapper[26425]: I0217 15:18:35.581806 26425 generic.go:334] "Generic (PLEG): container finished" podID="952766c3a88fd12345a552f1277199f9" containerID="21c7989a4696fed50634740602b415534cf6eda5f4caedd9c5df524bd3173387" exitCode=1 Feb 17 15:18:36.081489 master-0 kubenswrapper[26425]: I0217 15:18:36.081400 26425 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 
192.168.32.10:10259: connect: connection refused" start-of-body= Feb 17 15:18:36.081897 master-0 kubenswrapper[26425]: I0217 15:18:36.081538 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Feb 17 15:18:36.206710 master-0 kubenswrapper[26425]: I0217 15:18:36.206596 26425 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Feb 17 15:18:36.207032 master-0 kubenswrapper[26425]: I0217 15:18:36.206709 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Feb 17 15:18:37.454845 master-0 kubenswrapper[26425]: I0217 15:18:37.454743 26425 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Feb 17 15:18:37.455447 master-0 kubenswrapper[26425]: I0217 15:18:37.454847 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 
10.128.0.51:8443: connect: connection refused" Feb 17 15:18:37.604502 master-0 kubenswrapper[26425]: I0217 15:18:37.604362 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/0.log" Feb 17 15:18:37.605737 master-0 kubenswrapper[26425]: I0217 15:18:37.605671 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log" Feb 17 15:18:37.605904 master-0 kubenswrapper[26425]: I0217 15:18:37.605752 26425 generic.go:334] "Generic (PLEG): container finished" podID="27fd92ef556705625a2e4f1011322252" containerID="586cd7bd6a1810c0723f91d86622f61df00ac6288e65656c44c07b725975aa6c" exitCode=1 Feb 17 15:18:38.930956 master-0 kubenswrapper[26425]: I0217 15:18:38.930863 26425 scope.go:117] "RemoveContainer" containerID="c1a7bb61a118b809395aec1f33f427a3425dcd9dc3136b6302e76b1e5de619e7" Feb 17 15:18:38.991255 master-0 kubenswrapper[26425]: I0217 15:18:38.991192 26425 scope.go:117] "RemoveContainer" containerID="a532d001ee07ff8e8b23a5da938b61904c6c24e314b07a548890529a67528fab" Feb 17 15:18:39.056354 master-0 kubenswrapper[26425]: I0217 15:18:39.056289 26425 scope.go:117] "RemoveContainer" containerID="3b54e0904c922403e7243ecec6e01879618fe54346e8502751862a4c275c3a59" Feb 17 15:18:39.627431 master-0 kubenswrapper[26425]: I0217 15:18:39.627312 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-4n2ls_50c51fe2-32aa-430f-8da0-7cf3b9519131/manager/1.log" Feb 17 15:18:40.044756 master-0 kubenswrapper[26425]: E0217 15:18:40.044430 26425 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Feb 17 15:18:40.044756 master-0 
kubenswrapper[26425]: &Event{ObjectMeta:{etcd-master-0.18951192ab0cfb7f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:401699cb53e7098157e808a83125b0e4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.32.10:9980/readyz": dial tcp 192.168.32.10:9980: connect: connection refused Feb 17 15:18:40.044756 master-0 kubenswrapper[26425]: body: Feb 17 15:18:40.044756 master-0 kubenswrapper[26425]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:15:50.072810367 +0000 UTC m=+11.964534175,LastTimestamp:2026-02-17 15:15:50.072810367 +0000 UTC m=+11.964534175,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Feb 17 15:18:40.044756 master-0 kubenswrapper[26425]: > Feb 17 15:18:42.445557 master-0 kubenswrapper[26425]: I0217 15:18:42.445486 26425 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Feb 17 15:18:42.446165 master-0 kubenswrapper[26425]: I0217 15:18:42.445585 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Feb 17 15:18:43.357375 master-0 kubenswrapper[26425]: I0217 15:18:43.357286 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:18:43.357375 master-0 kubenswrapper[26425]: [+]log ok Feb 17 15:18:43.357375 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld Feb 17 15:18:43.357375 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:18:43.357375 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:18:43.357375 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:18:43.357375 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:18:43.357375 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 17 15:18:43.357375 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 17 15:18:43.357375 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 17 15:18:43.357375 master-0 kubenswrapper[26425]: livez check failed Feb 17 15:18:43.358422 master-0 kubenswrapper[26425]: I0217 15:18:43.357382 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:18:43.668500 master-0 kubenswrapper[26425]: I0217 15:18:43.668447 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/4.log" Feb 17 15:18:43.669324 master-0 kubenswrapper[26425]: I0217 15:18:43.668993 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/3.log" Feb 17 15:18:43.669324 master-0 
kubenswrapper[26425]: I0217 15:18:43.669023 26425 generic.go:334] "Generic (PLEG): container finished" podID="129dba1e-73df-4ea4-96c0-3eba78d568ba" containerID="d8123735c457e17ee5d6dd9977728805a83d4fc587f70de79ff52150d929609f" exitCode=1 Feb 17 15:18:45.685676 master-0 kubenswrapper[26425]: I0217 15:18:45.685587 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/1.log" Feb 17 15:18:45.687449 master-0 kubenswrapper[26425]: I0217 15:18:45.687394 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/0.log" Feb 17 15:18:45.688604 master-0 kubenswrapper[26425]: I0217 15:18:45.688558 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log" Feb 17 15:18:45.688745 master-0 kubenswrapper[26425]: I0217 15:18:45.688613 26425 generic.go:334] "Generic (PLEG): container finished" podID="27fd92ef556705625a2e4f1011322252" containerID="542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764" exitCode=255 Feb 17 15:18:46.081678 master-0 kubenswrapper[26425]: I0217 15:18:46.081571 26425 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Feb 17 15:18:46.082001 master-0 kubenswrapper[26425]: I0217 15:18:46.081685 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" probeResult="failure" 
output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Feb 17 15:18:46.205710 master-0 kubenswrapper[26425]: I0217 15:18:46.205594 26425 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Feb 17 15:18:46.205710 master-0 kubenswrapper[26425]: I0217 15:18:46.205697 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Feb 17 15:18:46.488213 master-0 kubenswrapper[26425]: E0217 15:18:46.488121 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 17 15:18:46.500753 master-0 kubenswrapper[26425]: E0217 15:18:46.500669 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:18:46.501191 master-0 kubenswrapper[26425]: E0217 15:18:46.501162 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.019s" Feb 17 15:18:46.501352 master-0 kubenswrapper[26425]: I0217 15:18:46.501329 26425 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://a93de2c6661a7a022268979fd5a510b5d956da3fa477eae77c55cc327249aabd" Feb 17 15:18:46.501505 
master-0 kubenswrapper[26425]: I0217 15:18:46.501482 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:18:46.501680 master-0 kubenswrapper[26425]: I0217 15:18:46.501656 26425 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://a93de2c6661a7a022268979fd5a510b5d956da3fa477eae77c55cc327249aabd" Feb 17 15:18:46.501822 master-0 kubenswrapper[26425]: I0217 15:18:46.501798 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:18:46.501958 master-0 kubenswrapper[26425]: I0217 15:18:46.501938 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" Feb 17 15:18:46.502113 master-0 kubenswrapper[26425]: I0217 15:18:46.502085 26425 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0" Feb 17 15:18:46.502239 master-0 kubenswrapper[26425]: I0217 15:18:46.502218 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:18:46.502383 master-0 kubenswrapper[26425]: I0217 15:18:46.502351 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"a3b6a099-f52a-428a-af09-d1842ce66891","Type":"ContainerDied","Data":"ceb525f1242f942ba65ca3fefc2acf99f57e68a8145b1bffbd29b61c0bf59b29"} Feb 17 15:18:46.502673 master-0 kubenswrapper[26425]: I0217 15:18:46.502445 26425 scope.go:117] "RemoveContainer" 
containerID="542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764" Feb 17 15:18:46.502790 master-0 kubenswrapper[26425]: I0217 15:18:46.502680 26425 scope.go:117] "RemoveContainer" containerID="586cd7bd6a1810c0723f91d86622f61df00ac6288e65656c44c07b725975aa6c" Feb 17 15:18:46.503104 master-0 kubenswrapper[26425]: I0217 15:18:46.502638 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:18:46.503205 master-0 kubenswrapper[26425]: I0217 15:18:46.503145 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:18:46.503352 master-0 kubenswrapper[26425]: I0217 15:18:46.503316 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:18:46.519030 master-0 kubenswrapper[26425]: I0217 15:18:46.518968 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Feb 17 15:18:46.801976 master-0 kubenswrapper[26425]: E0217 15:18:46.801900 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(27fd92ef556705625a2e4f1011322252)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" Feb 17 15:18:46.979866 master-0 kubenswrapper[26425]: E0217 15:18:46.979750 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" interval="7s" Feb 17 15:18:47.454700 master-0 kubenswrapper[26425]: I0217 15:18:47.454601 26425 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Feb 17 15:18:47.454700 master-0 kubenswrapper[26425]: I0217 15:18:47.454684 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Feb 17 15:18:47.714442 master-0 kubenswrapper[26425]: I0217 15:18:47.714227 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/1.log" Feb 17 15:18:47.716727 master-0 kubenswrapper[26425]: I0217 15:18:47.716667 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/0.log" Feb 17 15:18:47.718691 master-0 kubenswrapper[26425]: I0217 15:18:47.718628 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log" Feb 17 15:18:47.720485 master-0 kubenswrapper[26425]: I0217 15:18:47.720380 26425 scope.go:117] "RemoveContainer" containerID="542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764" Feb 17 15:18:47.721834 master-0 kubenswrapper[26425]: E0217 15:18:47.721766 26425 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(27fd92ef556705625a2e4f1011322252)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" Feb 17 15:18:47.722877 master-0 kubenswrapper[26425]: I0217 15:18:47.722802 26425 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="1c33e5c83bf19251c80a45fef1ba806877a1822bc3dfd8bb9cde774bfb9902e7" exitCode=0 Feb 17 15:18:50.193483 master-0 kubenswrapper[26425]: I0217 15:18:50.193369 26425 status_manager.go:851] "Failed to get status for pod" podUID="33e819b0-5a3f-4c2d-9dc7-8b0231804cdb" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods package-server-manager-5c696dbdcd-t7n5b)" Feb 17 15:18:52.366596 master-0 kubenswrapper[26425]: I0217 15:18:52.366439 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:18:52.366596 master-0 kubenswrapper[26425]: [+]log ok Feb 17 15:18:52.366596 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld Feb 17 15:18:52.366596 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:18:52.366596 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:18:52.366596 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:18:52.366596 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:18:52.366596 master-0 
kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 17 15:18:52.366596 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 17 15:18:52.366596 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 17 15:18:52.366596 master-0 kubenswrapper[26425]: livez check failed Feb 17 15:18:52.366596 master-0 kubenswrapper[26425]: I0217 15:18:52.366587 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:18:56.082414 master-0 kubenswrapper[26425]: I0217 15:18:56.082311 26425 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Feb 17 15:18:56.083503 master-0 kubenswrapper[26425]: I0217 15:18:56.082446 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Feb 17 15:18:56.205782 master-0 kubenswrapper[26425]: I0217 15:18:56.205668 26425 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Feb 17 15:18:56.205782 master-0 kubenswrapper[26425]: I0217 15:18:56.205771 26425 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Feb 17 15:18:57.454795 master-0 kubenswrapper[26425]: I0217 15:18:57.454680 26425 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Feb 17 15:18:57.455777 master-0 kubenswrapper[26425]: I0217 15:18:57.454792 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Feb 17 15:19:01.374974 master-0 kubenswrapper[26425]: I0217 15:19:01.374884 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:19:01.374974 master-0 kubenswrapper[26425]: [+]log ok Feb 17 15:19:01.374974 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld Feb 17 15:19:01.374974 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:19:01.374974 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:19:01.374974 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:19:01.374974 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:19:01.374974 master-0 kubenswrapper[26425]: 
[+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:19:01.374974 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:19:01.374974 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:19:01.374974 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:19:01.374974 master-0 kubenswrapper[26425]: I0217 15:19:01.374969 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:19:03.981236 master-0 kubenswrapper[26425]: E0217 15:19:03.981135 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:19:06.205991 master-0 kubenswrapper[26425]: I0217 15:19:06.205744 26425 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body=
Feb 17 15:19:06.205991 master-0 kubenswrapper[26425]: I0217 15:19:06.205843 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused"
Feb 17 15:19:07.454134 master-0 kubenswrapper[26425]: I0217 15:19:07.454058 26425 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body=
Feb 17 15:19:07.455402 master-0 kubenswrapper[26425]: I0217 15:19:07.454141 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused"
Feb 17 15:19:10.383247 master-0 kubenswrapper[26425]: I0217 15:19:10.383138 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:19:10.383247 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:19:10.383247 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:19:10.383247 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:19:10.383247 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:19:10.383247 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:19:10.383247 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:19:10.383247 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:19:10.383247 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:19:10.383247 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:19:10.383247 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:19:10.383247 master-0 kubenswrapper[26425]: I0217 15:19:10.383240 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:19:14.047548 master-0 kubenswrapper[26425]: E0217 15:19:14.047311 26425 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.18951192ab0d9806 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:401699cb53e7098157e808a83125b0e4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:15:50.072850438 +0000 UTC m=+11.964574256,LastTimestamp:2026-02-17 15:15:50.072850438 +0000 UTC m=+11.964574256,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:19:16.205577 master-0 kubenswrapper[26425]: I0217 15:19:16.205449 26425 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body=
Feb 17 15:19:16.205577 master-0 kubenswrapper[26425]: I0217 15:19:16.205566 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused"
Feb 17 15:19:17.454988 master-0 kubenswrapper[26425]: I0217 15:19:17.454897 26425 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body=
Feb 17 15:19:17.454988 master-0 kubenswrapper[26425]: I0217 15:19:17.454980 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused"
Feb 17 15:19:19.416290 master-0 kubenswrapper[26425]: I0217 15:19:19.391828 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:19:19.416290 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:19:19.416290 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:19:19.416290 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:19:19.416290 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:19:19.416290 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:19:19.416290 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:19:19.416290 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:19:19.416290 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:19:19.416290 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:19:19.416290 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:19:19.416290 master-0 kubenswrapper[26425]: I0217 15:19:19.391976 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:19:20.522514 master-0 kubenswrapper[26425]: E0217 15:19:20.522349 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:19:20.523483 master-0 kubenswrapper[26425]: E0217 15:19:20.522664 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.019s"
Feb 17 15:19:20.536113 master-0 kubenswrapper[26425]: I0217 15:19:20.536042 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Feb 17 15:19:20.982064 master-0 kubenswrapper[26425]: E0217 15:19:20.981966 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:19:26.205498 master-0 kubenswrapper[26425]: I0217 15:19:26.205391 26425 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body=
Feb 17 15:19:26.206507 master-0 kubenswrapper[26425]: I0217 15:19:26.205521 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused"
Feb 17 15:19:27.454831 master-0 kubenswrapper[26425]: I0217 15:19:27.454735 26425 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body=
Feb 17 15:19:27.454831 master-0 kubenswrapper[26425]: I0217 15:19:27.454826 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused"
Feb 17 15:19:28.400034 master-0 kubenswrapper[26425]: I0217 15:19:28.399932 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:19:28.400034 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:19:28.400034 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:19:28.400034 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:19:28.400034 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:19:28.400034 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:19:28.400034 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:19:28.400034 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:19:28.400034 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:19:28.400034 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:19:28.400034 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:19:28.400034 master-0 kubenswrapper[26425]: I0217 15:19:28.400012 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:19:36.205565 master-0 kubenswrapper[26425]: I0217 15:19:36.205498 26425 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body=
Feb 17 15:19:36.206614 master-0 kubenswrapper[26425]: I0217 15:19:36.206553 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused"
Feb 17 15:19:37.409168 master-0 kubenswrapper[26425]: I0217 15:19:37.409092 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:19:37.409168 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:19:37.409168 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:19:37.409168 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:19:37.409168 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:19:37.409168 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:19:37.409168 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:19:37.409168 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:19:37.409168 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:19:37.409168 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:19:37.409168 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:19:37.410689 master-0 kubenswrapper[26425]: I0217 15:19:37.409179 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:19:37.454888 master-0 kubenswrapper[26425]: I0217 15:19:37.454805 26425 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body=
Feb 17 15:19:37.455077 master-0 kubenswrapper[26425]: I0217 15:19:37.454887 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused"
Feb 17 15:19:37.983556 master-0 kubenswrapper[26425]: E0217 15:19:37.983217 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:19:39.171336 master-0 kubenswrapper[26425]: E0217 15:19:39.171199 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:19:29Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:19:29Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:19:29Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:19:29Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:19:46.205432 master-0 kubenswrapper[26425]: I0217 15:19:46.205357 26425 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body=
Feb 17 15:19:46.207045 master-0 kubenswrapper[26425]: I0217 15:19:46.206992 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused"
Feb 17 15:19:46.414591 master-0 kubenswrapper[26425]: I0217 15:19:46.414501 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:19:46.414591 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:19:46.414591 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:19:46.414591 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:19:46.414591 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:19:46.414591 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:19:46.414591 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:19:46.414591 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:19:46.414591 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:19:46.414591 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:19:46.414591 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:19:46.415902 master-0 kubenswrapper[26425]: I0217 15:19:46.414591 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:19:47.454996 master-0 kubenswrapper[26425]: I0217 15:19:47.454888 26425 patch_prober.go:28] interesting pod/controller-manager-b9c8fdfbc-rh9v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body=
Feb 17 15:19:47.455914 master-0 kubenswrapper[26425]: I0217 15:19:47.454988 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused"
Feb 17 15:19:48.052270 master-0 kubenswrapper[26425]: E0217 15:19:48.052060 26425 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{installer-3-master-0.189511914e926eaa openshift-kube-apiserver 12865 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:installer-3-master-0,UID:d3daf534-9a77-49c6-964f-d402c5d5a2ac,APIVersion:v1,ResourceVersion:12220,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"kube-api-access\" : object \"openshift-kube-apiserver\"/\"kube-root-ca.crt\" not registered,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:15:44 +0000 UTC,LastTimestamp:2026-02-17 15:15:51.933423562 +0000 UTC m=+13.825147380,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:19:49.172860 master-0 kubenswrapper[26425]: E0217 15:19:49.172779 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:19:50.195049 master-0 kubenswrapper[26425]: I0217 15:19:50.194940 26425 status_manager.go:851] "Failed to get status for pod" podUID="fc216ba1-144a-4cc8-93db-85ab558a166a" pod="openshift-marketplace/certified-operators-2lg56" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods certified-operators-2lg56)"
Feb 17 15:19:54.295171 master-0 kubenswrapper[26425]: I0217 15:19:54.295056 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:19:54.296299 master-0 kubenswrapper[26425]: E0217 15:19:54.295427 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:19:54.296299 master-0 kubenswrapper[26425]: E0217 15:19:54.295560 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:19:54.296299 master-0 kubenswrapper[26425]: E0217 15:19:54.295663 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:21:56.295631866 +0000 UTC m=+378.187355714 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:19:54.539649 master-0 kubenswrapper[26425]: E0217 15:19:54.539528 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:19:54.539948 master-0 kubenswrapper[26425]: E0217 15:19:54.539873 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.017s"
Feb 17 15:19:54.539948 master-0 kubenswrapper[26425]: I0217 15:19:54.539925 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:19:54.540150 master-0 kubenswrapper[26425]: I0217 15:19:54.539969 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:19:54.540150 master-0 kubenswrapper[26425]: I0217 15:19:54.539988 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:19:54.540898 master-0 kubenswrapper[26425]: I0217 15:19:54.540851 26425 scope.go:117] "RemoveContainer" containerID="542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764"
Feb 17 15:19:54.543421 master-0 kubenswrapper[26425]: I0217 15:19:54.543334 26425 scope.go:117] "RemoveContainer" containerID="e6582b397c9a839f2d6d03076dc105158f9bf90ad6efb080207cea9f74d8064c"
Feb 17 15:19:54.545240 master-0 kubenswrapper[26425]: I0217 15:19:54.545138 26425 scope.go:117] "RemoveContainer" containerID="cd41dc79695d9c0bd45ab8f72b3cf6af9d3af76fe51f2138f55c128fc6c09071"
Feb 17 15:19:54.547138 master-0 kubenswrapper[26425]: I0217 15:19:54.546952 26425 scope.go:117] "RemoveContainer" containerID="6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099"
Feb 17 15:19:54.554811 master-0 kubenswrapper[26425]: I0217 15:19:54.554707 26425 scope.go:117] "RemoveContainer" containerID="fbf19d6eb89d3cc981a668b940fbc4bb8dd5e78643b56d6ce5b9a6d44a5d26d8"
Feb 17 15:19:54.554939 master-0 kubenswrapper[26425]: I0217 15:19:54.554895 26425 scope.go:117] "RemoveContainer" containerID="d8123735c457e17ee5d6dd9977728805a83d4fc587f70de79ff52150d929609f"
Feb 17 15:19:54.555204 master-0 kubenswrapper[26425]: I0217 15:19:54.555166 26425 scope.go:117] "RemoveContainer" containerID="21c7989a4696fed50634740602b415534cf6eda5f4caedd9c5df524bd3173387"
Feb 17 15:19:54.555292 master-0 kubenswrapper[26425]: I0217 15:19:54.555244 26425 scope.go:117] "RemoveContainer" containerID="5591dc378b699313a005026d26c38a2b4e16d14b25114eea56b910683dfe3933"
Feb 17 15:19:54.561494 master-0 kubenswrapper[26425]: I0217 15:19:54.561431 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Feb 17 15:19:54.984600 master-0 kubenswrapper[26425]: E0217 15:19:54.984524 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s"
Feb 17 15:19:55.302140 master-0 kubenswrapper[26425]: I0217 15:19:55.302052 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/1.log"
Feb 17 15:19:55.306950 master-0 kubenswrapper[26425]: I0217 15:19:55.306910 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/0.log"
Feb 17 15:19:55.308986 master-0 kubenswrapper[26425]: I0217 15:19:55.308957 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log"
Feb 17 15:19:55.315483 master-0 kubenswrapper[26425]: I0217 15:19:55.315426 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_952766c3a88fd12345a552f1277199f9/kube-scheduler/0.log"
Feb 17 15:19:55.323768 master-0 kubenswrapper[26425]: I0217 15:19:55.323732 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-f9g8s_76d3da23-3347-4a5c-b328-d92671897ecc/machine-approver-controller/0.log"
Feb 17 15:19:55.335014 master-0 kubenswrapper[26425]: I0217 15:19:55.334974 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/0.log"
Feb 17 15:19:55.340782 master-0 kubenswrapper[26425]: I0217 15:19:55.340753 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/4.log"
Feb 17 15:19:55.342333 master-0 kubenswrapper[26425]: I0217 15:19:55.342296 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/3.log"
Feb 17 15:19:55.422132 master-0 kubenswrapper[26425]: I0217 15:19:55.421973 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:19:55.422132 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:19:55.422132 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:19:55.422132 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:19:55.422132 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:19:55.422132 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:19:55.422132 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:19:55.422132 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:19:55.422132 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:19:55.422132 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:19:55.422132 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:19:55.422132 master-0 kubenswrapper[26425]: I0217 15:19:55.422055 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:19:58.789181 master-0 kubenswrapper[26425]: I0217 15:19:58.789068 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:19:58.790045 master-0 kubenswrapper[26425]: I0217 15:19:58.789157 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:19:59.173851 master-0 kubenswrapper[26425]: E0217 15:19:59.173775 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:20:04.430979 master-0 kubenswrapper[26425]: I0217 15:20:04.430864 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:20:04.430979 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:20:04.430979 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:20:04.430979 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:20:04.430979 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:20:04.430979 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:20:04.430979 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:20:04.430979 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:20:04.430979 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:20:04.430979 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:20:04.430979 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:20:04.430979 master-0 kubenswrapper[26425]: I0217 15:20:04.430944 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:20:08.484335 master-0 kubenswrapper[26425]: I0217 15:20:08.484260 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/1.log"
Feb 17 15:20:08.484866 master-0 kubenswrapper[26425]: I0217 15:20:08.484343 26425 generic.go:334] "Generic (PLEG): container finished" podID="af61bda0-c7b4-489d-a671-eaa5299942fe" containerID="1cfd0ad488c82b15998a7888c979dda06fa4a01761beb9e5d6d35b295908c57a" exitCode=0
Feb 17 15:20:08.789131 master-0 kubenswrapper[26425]: I0217 15:20:08.789008 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:20:08.789131 master-0 kubenswrapper[26425]: I0217 15:20:08.789110 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:20:09.175007 master-0 kubenswrapper[26425]: E0217 15:20:09.174872 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:20:10.510657 master-0 kubenswrapper[26425]: I0217 15:20:10.510567 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/2.log"
Feb 17 15:20:10.510657 master-0 kubenswrapper[26425]: I0217 15:20:10.510643 26425 generic.go:334] "Generic (PLEG): container finished" podID="65d9f008-7777-48fe-85fe-9d54a7bbcea9" containerID="50d813c00eb4ee20e7e4a0770f94362bd89a3e9a431dc0d899c42e55cc8f993e" exitCode=0
Feb 17 15:20:11.524678 master-0 kubenswrapper[26425]: I0217 15:20:11.524597 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/2.log"
Feb 17 15:20:11.525259 master-0 kubenswrapper[26425]: I0217 15:20:11.524698 26425 generic.go:334] "Generic (PLEG): container finished" podID="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" containerID="afb6acf2a5178774fc88b9857020ac3a9778d76f3535d0f37b9711d4fea47c48" exitCode=0
Feb 17 15:20:11.986268 master-0 kubenswrapper[26425]: E0217 15:20:11.986130 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:20:13.436832 master-0 kubenswrapper[26425]: I0217 15:20:13.436689 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:20:13.436832 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:20:13.436832 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld
Feb 17 15:20:13.436832 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:20:13.436832 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:20:13.436832 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:20:13.436832 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:20:13.436832 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:20:13.436832 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:20:13.436832 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:20:13.436832 master-0 kubenswrapper[26425]: livez check failed
Feb 17 15:20:13.436832 master-0 kubenswrapper[26425]: I0217 15:20:13.436780 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:20:18.789037 master-0 kubenswrapper[26425]: I0217 15:20:18.788886 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:20:18.790150 master-0 kubenswrapper[26425]: I0217 15:20:18.789082 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:20:19.175286 master-0 kubenswrapper[26425]: E0217 15:20:19.175170 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:20:19.175286 master-0 kubenswrapper[26425]: E0217 15:20:19.175223 26425 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 17 15:20:22.055901 master-0 kubenswrapper[26425]: E0217 15:20:22.055633 26425 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{installer-3-master-0.189511914e926eaa openshift-kube-apiserver 12865 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:installer-3-master-0,UID:d3daf534-9a77-49c6-964f-d402c5d5a2ac,APIVersion:v1,ResourceVersion:12220,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"kube-api-access\" : object \"openshift-kube-apiserver\"/\"kube-root-ca.crt\" not registered,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:15:44 +0000 UTC,LastTimestamp:2026-02-17 15:15:59.956755813 +0000 UTC m=+21.848479661,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:20:22.444063 master-0 kubenswrapper[26425]: I0217 15:20:22.443889 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure
output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:20:22.444063 master-0 kubenswrapper[26425]: [+]log ok Feb 17 15:20:22.444063 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld Feb 17 15:20:22.444063 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:20:22.444063 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:20:22.444063 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:20:22.444063 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:20:22.444063 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 17 15:20:22.444063 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 17 15:20:22.444063 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 17 15:20:22.444063 master-0 kubenswrapper[26425]: livez check failed Feb 17 15:20:22.444650 master-0 kubenswrapper[26425]: I0217 15:20:22.444045 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:20:25.646736 master-0 kubenswrapper[26425]: I0217 15:20:25.646641 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/2.log" Feb 17 15:20:25.647746 master-0 kubenswrapper[26425]: I0217 15:20:25.647643 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/1.log" Feb 17 15:20:25.649389 master-0 kubenswrapper[26425]: I0217 
15:20:25.649339 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/0.log" Feb 17 15:20:25.650581 master-0 kubenswrapper[26425]: I0217 15:20:25.650535 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log" Feb 17 15:20:25.650673 master-0 kubenswrapper[26425]: I0217 15:20:25.650597 26425 generic.go:334] "Generic (PLEG): container finished" podID="27fd92ef556705625a2e4f1011322252" containerID="dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970" exitCode=255 Feb 17 15:20:25.653274 master-0 kubenswrapper[26425]: I0217 15:20:25.653216 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/5.log" Feb 17 15:20:25.653885 master-0 kubenswrapper[26425]: I0217 15:20:25.653828 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/4.log" Feb 17 15:20:25.654939 master-0 kubenswrapper[26425]: I0217 15:20:25.654879 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/3.log" Feb 17 15:20:25.655030 master-0 kubenswrapper[26425]: I0217 15:20:25.654973 26425 generic.go:334] "Generic (PLEG): container finished" podID="129dba1e-73df-4ea4-96c0-3eba78d568ba" containerID="6eec33455162a27fe10d4874ae93c26e71a281f59a9f0a675a04a71ca4bfd694" exitCode=1 Feb 17 15:20:27.675304 master-0 kubenswrapper[26425]: I0217 15:20:27.675196 26425 generic.go:334] "Generic (PLEG): 
container finished" podID="ba1306f7-029b-4d43-ba3c-5738da9148d6" containerID="4ca2a1481cf68af809d23ae9ad2e79b63336d3be01516204a6730a744e080f72" exitCode=0 Feb 17 15:20:28.566226 master-0 kubenswrapper[26425]: E0217 15:20:28.565719 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:20:28.566226 master-0 kubenswrapper[26425]: E0217 15:20:28.565991 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.025s" Feb 17 15:20:28.566226 master-0 kubenswrapper[26425]: I0217 15:20:28.566034 26425 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" containerID="cri-o://2784ec26a7dc2f4e62d2f496a1d001e9cb435129496d0a04f4f22a42f1a50608" Feb 17 15:20:28.566226 master-0 kubenswrapper[26425]: I0217 15:20:28.566052 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:20:28.566226 master-0 kubenswrapper[26425]: I0217 15:20:28.566120 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" Feb 17 15:20:28.581329 master-0 kubenswrapper[26425]: I0217 15:20:28.581224 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Feb 17 15:20:28.690865 master-0 kubenswrapper[26425]: I0217 15:20:28.690417 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-67fd9768b5-6dzpr_c8646e5c-c2ce-48e6-b757-58044769f479/cluster-autoscaler-operator/0.log" Feb 17 15:20:28.692165 master-0 kubenswrapper[26425]: I0217 15:20:28.691223 26425 generic.go:334] "Generic (PLEG): 
container finished" podID="c8646e5c-c2ce-48e6-b757-58044769f479" containerID="da1858700d4dd348bd1bd6965ebad759d727564f2555dd6372efe783d1762809" exitCode=255 Feb 17 15:20:28.934010 master-0 kubenswrapper[26425]: I0217 15:20:28.933906 26425 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:20:28.934010 master-0 kubenswrapper[26425]: I0217 15:20:28.933996 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:20:28.988207 master-0 kubenswrapper[26425]: E0217 15:20:28.988062 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 17 15:20:29.702250 master-0 kubenswrapper[26425]: I0217 15:20:29.702191 26425 generic.go:334] "Generic (PLEG): container finished" podID="187af679-a062-4f41-81f2-33545f76febf" containerID="bfa4241e9cbb9bb3dc9c0b9ecf26410125b91a6e764bdf4080c3457126bf7fdc" exitCode=0 Feb 17 15:20:29.712509 master-0 kubenswrapper[26425]: I0217 15:20:29.708817 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-ff6c9b66-k8xp8_071566ae-a9ae-4aa9-9dc3-38602363be72/cluster-node-tuning-operator/1.log" Feb 17 15:20:29.721583 master-0 kubenswrapper[26425]: I0217 15:20:29.721504 
26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-ff6c9b66-k8xp8_071566ae-a9ae-4aa9-9dc3-38602363be72/cluster-node-tuning-operator/0.log" Feb 17 15:20:29.721840 master-0 kubenswrapper[26425]: I0217 15:20:29.721605 26425 generic.go:334] "Generic (PLEG): container finished" podID="071566ae-a9ae-4aa9-9dc3-38602363be72" containerID="4c47c374b75591c1874c057cb8609aad6e1b60685643b76979aadb8e2ca53712" exitCode=1 Feb 17 15:20:29.726244 master-0 kubenswrapper[26425]: I0217 15:20:29.726153 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-t7n5b_33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/package-server-manager/1.log" Feb 17 15:20:29.728925 master-0 kubenswrapper[26425]: I0217 15:20:29.728818 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-t7n5b_33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/package-server-manager/0.log" Feb 17 15:20:29.732057 master-0 kubenswrapper[26425]: I0217 15:20:29.731708 26425 generic.go:334] "Generic (PLEG): container finished" podID="33e819b0-5a3f-4c2d-9dc7-8b0231804cdb" containerID="b86a492f597b80e76da870edbd5aa60b116fd208f8fcff47303644a8e0039f9b" exitCode=1 Feb 17 15:20:29.739626 master-0 kubenswrapper[26425]: I0217 15:20:29.739580 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/2.log" Feb 17 15:20:29.739694 master-0 kubenswrapper[26425]: I0217 15:20:29.739649 26425 generic.go:334] "Generic (PLEG): container finished" podID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerID="8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4" exitCode=0 Feb 17 15:20:30.680442 master-0 kubenswrapper[26425]: I0217 15:20:30.680356 26425 patch_prober.go:28] 
interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:20:30.680990 master-0 kubenswrapper[26425]: I0217 15:20:30.680444 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:20:32.245238 master-0 kubenswrapper[26425]: I0217 15:20:32.245181 26425 patch_prober.go:28] interesting pod/package-server-manager-5c696dbdcd-t7n5b container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.15:8080/healthz\": dial tcp 10.128.0.15:8080: connect: connection refused" start-of-body= Feb 17 15:20:32.245837 master-0 kubenswrapper[26425]: I0217 15:20:32.245267 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" podUID="33e819b0-5a3f-4c2d-9dc7-8b0231804cdb" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.15:8080/healthz\": dial tcp 10.128.0.15:8080: connect: connection refused" Feb 17 15:20:32.768898 master-0 kubenswrapper[26425]: I0217 15:20:32.768824 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/2.log" Feb 17 15:20:32.769185 master-0 kubenswrapper[26425]: I0217 15:20:32.768898 26425 generic.go:334] "Generic (PLEG): container finished" podID="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" 
containerID="1cf423e31a88736056f1999dcd941a944e9de281f289a68cb4692796b704d37a" exitCode=0 Feb 17 15:20:34.304381 master-0 kubenswrapper[26425]: I0217 15:20:34.304275 26425 patch_prober.go:28] interesting pod/package-server-manager-5c696dbdcd-t7n5b container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.128.0.15:8080/healthz\": dial tcp 10.128.0.15:8080: connect: connection refused" start-of-body= Feb 17 15:20:34.305354 master-0 kubenswrapper[26425]: I0217 15:20:34.304394 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" podUID="33e819b0-5a3f-4c2d-9dc7-8b0231804cdb" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.15:8080/healthz\": dial tcp 10.128.0.15:8080: connect: connection refused" Feb 17 15:20:36.813177 master-0 kubenswrapper[26425]: I0217 15:20:36.813020 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-g6fgz_655e4000-0ad4-4349-8c31-e0c952e4be30/machine-api-operator/0.log" Feb 17 15:20:36.814052 master-0 kubenswrapper[26425]: I0217 15:20:36.813595 26425 generic.go:334] "Generic (PLEG): container finished" podID="655e4000-0ad4-4349-8c31-e0c952e4be30" containerID="a17a8feb8cde32d9f769f1d063cb256b0434b87c2646d32dfbbaf8c558e68235" exitCode=255 Feb 17 15:20:38.349948 master-0 kubenswrapper[26425]: I0217 15:20:38.349837 26425 kubelet.go:1505] "Image garbage collection succeeded" Feb 17 15:20:38.933503 master-0 kubenswrapper[26425]: I0217 15:20:38.933386 26425 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:20:38.933818 
master-0 kubenswrapper[26425]: I0217 15:20:38.933543 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:20:39.131080 master-0 kubenswrapper[26425]: I0217 15:20:39.130965 26425 scope.go:117] "RemoveContainer" containerID="76d6fd0b45765a0b596669cf9b7b85cd807449a57c73b14e34163f91a2995908" Feb 17 15:20:39.176584 master-0 kubenswrapper[26425]: I0217 15:20:39.176530 26425 scope.go:117] "RemoveContainer" containerID="398a6ec9ab16d8c9b51a94b166012be81bd6e66e2c357cd186d8526d7f9bb69c" Feb 17 15:20:39.214825 master-0 kubenswrapper[26425]: I0217 15:20:39.214771 26425 scope.go:117] "RemoveContainer" containerID="533491bcdd7a1e81be78b60edc3ff96d870551db82df44a567112342369f625f" Feb 17 15:20:39.261324 master-0 kubenswrapper[26425]: I0217 15:20:39.261271 26425 scope.go:117] "RemoveContainer" containerID="81aaf4a8e92ad8167ce2d8a4500268568ecd4d12b11466d397ae290644672b32" Feb 17 15:20:39.319616 master-0 kubenswrapper[26425]: I0217 15:20:39.319557 26425 scope.go:117] "RemoveContainer" containerID="8a4a98b1318c509e5f82636085aeb117a7034201fd28d56b542c5883530a6144" Feb 17 15:20:39.368753 master-0 kubenswrapper[26425]: I0217 15:20:39.368673 26425 scope.go:117] "RemoveContainer" containerID="47a0663eadceb8ac2b92b936021f5bf1e155eb2c91b070318a1766570bc56359" Feb 17 15:20:39.441992 master-0 kubenswrapper[26425]: I0217 15:20:39.441930 26425 scope.go:117] "RemoveContainer" containerID="29887de882fd8a3a22e87156cef67aeb00ac494c3b04550882c5426a5a9c25ec" Feb 17 15:20:39.469059 master-0 kubenswrapper[26425]: I0217 15:20:39.468986 26425 scope.go:117] "RemoveContainer" containerID="8058b275e263538c079da0d8c430b578e1243d25628fc693b056f6c40e1434b1" Feb 17 15:20:39.842411 master-0 
kubenswrapper[26425]: I0217 15:20:39.842139 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-t7n5b_33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/package-server-manager/1.log" Feb 17 15:20:39.850312 master-0 kubenswrapper[26425]: I0217 15:20:39.850281 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-ff6c9b66-k8xp8_071566ae-a9ae-4aa9-9dc3-38602363be72/cluster-node-tuning-operator/1.log" Feb 17 15:20:40.681433 master-0 kubenswrapper[26425]: I0217 15:20:40.681307 26425 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:20:40.682329 master-0 kubenswrapper[26425]: I0217 15:20:40.681423 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:20:42.244572 master-0 kubenswrapper[26425]: I0217 15:20:42.244509 26425 patch_prober.go:28] interesting pod/package-server-manager-5c696dbdcd-t7n5b container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.15:8080/healthz\": dial tcp 10.128.0.15:8080: connect: connection refused" start-of-body= Feb 17 15:20:42.245142 master-0 kubenswrapper[26425]: I0217 15:20:42.244602 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" 
podUID="33e819b0-5a3f-4c2d-9dc7-8b0231804cdb" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.15:8080/healthz\": dial tcp 10.128.0.15:8080: connect: connection refused" Feb 17 15:20:44.304484 master-0 kubenswrapper[26425]: I0217 15:20:44.304301 26425 patch_prober.go:28] interesting pod/package-server-manager-5c696dbdcd-t7n5b container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.128.0.15:8080/healthz\": dial tcp 10.128.0.15:8080: connect: connection refused" start-of-body= Feb 17 15:20:44.304484 master-0 kubenswrapper[26425]: I0217 15:20:44.304403 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" podUID="33e819b0-5a3f-4c2d-9dc7-8b0231804cdb" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.15:8080/healthz\": dial tcp 10.128.0.15:8080: connect: connection refused" Feb 17 15:20:45.990311 master-0 kubenswrapper[26425]: E0217 15:20:45.990004 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 17 15:20:48.933858 master-0 kubenswrapper[26425]: I0217 15:20:48.933800 26425 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:20:48.934943 master-0 kubenswrapper[26425]: I0217 15:20:48.934725 26425 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:20:49.944502 master-0 kubenswrapper[26425]: I0217 15:20:49.944395 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-tckph_0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/kube-storage-version-migrator-operator/2.log" Feb 17 15:20:49.944502 master-0 kubenswrapper[26425]: I0217 15:20:49.944501 26425 generic.go:334] "Generic (PLEG): container finished" podID="0c58265d-32fb-4cf0-97d8-6c9a5d37fad9" containerID="8c3de091b26b63488ddbcb0fd31c122edf5d7a587d35c169e265f4e9d06987b5" exitCode=0 Feb 17 15:20:50.196948 master-0 kubenswrapper[26425]: I0217 15:20:50.196853 26425 status_manager.go:851] "Failed to get status for pod" podUID="c33efa80-fbeb-438a-86e3-d22d7c12d3e9" pod="openshift-marketplace/community-operators-t8vtc" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods community-operators-t8vtc)" Feb 17 15:20:50.680887 master-0 kubenswrapper[26425]: I0217 15:20:50.680768 26425 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:20:50.681210 master-0 kubenswrapper[26425]: I0217 15:20:50.680896 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" 
output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:20:52.006188 master-0 kubenswrapper[26425]: I0217 15:20:52.006143 26425 patch_prober.go:28] interesting pod/etcd-operator-67bf55ccdd-pjm6n container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 17 15:20:52.007137 master-0 kubenswrapper[26425]: I0217 15:20:52.007092 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" podUID="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 17 15:20:52.245476 master-0 kubenswrapper[26425]: I0217 15:20:52.245377 26425 patch_prober.go:28] interesting pod/package-server-manager-5c696dbdcd-t7n5b container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.15:8080/healthz\": dial tcp 10.128.0.15:8080: connect: connection refused" start-of-body= Feb 17 15:20:52.245704 master-0 kubenswrapper[26425]: I0217 15:20:52.245488 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" podUID="33e819b0-5a3f-4c2d-9dc7-8b0231804cdb" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.15:8080/healthz\": dial tcp 10.128.0.15:8080: connect: connection refused" Feb 17 15:20:54.304494 master-0 kubenswrapper[26425]: I0217 15:20:54.304368 26425 patch_prober.go:28] interesting pod/package-server-manager-5c696dbdcd-t7n5b container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.128.0.15:8080/healthz\": 
dial tcp 10.128.0.15:8080: connect: connection refused" start-of-body= Feb 17 15:20:54.305356 master-0 kubenswrapper[26425]: I0217 15:20:54.304533 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" podUID="33e819b0-5a3f-4c2d-9dc7-8b0231804cdb" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.15:8080/healthz\": dial tcp 10.128.0.15:8080: connect: connection refused" Feb 17 15:20:55.998499 master-0 kubenswrapper[26425]: I0217 15:20:55.998405 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/1.log" Feb 17 15:20:55.999696 master-0 kubenswrapper[26425]: I0217 15:20:55.999671 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/0.log" Feb 17 15:20:55.999822 master-0 kubenswrapper[26425]: I0217 15:20:55.999707 26425 generic.go:334] "Generic (PLEG): container finished" podID="7307f70e-ee5b-4f81-8155-718a02c9efe7" containerID="e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690" exitCode=1 Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: E0217 15:20:56.058884 26425 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: &Event{ObjectMeta:{apiserver-865765995-c58rq.189511952418a1bf openshift-oauth-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-oauth-apiserver,Name:apiserver-865765995-c58rq,UID:124ba199-b79a-4e5c-8512-cc0ae50f73c8,APIVersion:v1,ResourceVersion:7699,FieldPath:spec.containers{oauth-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: body: [+]ping ok Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: [+]log ok Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: livez check failed Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:16:00.693551551 +0000 UTC m=+22.585275429,LastTimestamp:2026-02-17 15:16:00.693551551 +0000 UTC m=+22.585275429,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Feb 17 15:20:56.059044 master-0 kubenswrapper[26425]: > Feb 17 15:20:58.933646 master-0 kubenswrapper[26425]: I0217 15:20:58.933556 26425 patch_prober.go:28] interesting 
pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Feb 17 15:20:58.933646 master-0 kubenswrapper[26425]: I0217 15:20:58.933638 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Feb 17 15:20:59.838093 master-0 kubenswrapper[26425]: E0217 15:20:59.837984 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:20:49Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:20:49Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:20:49Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:20:49Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:21:02.245133 master-0 kubenswrapper[26425]: I0217 15:21:02.245055 26425 patch_prober.go:28] interesting pod/package-server-manager-5c696dbdcd-t7n5b container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.15:8080/healthz\": dial tcp 
10.128.0.15:8080: connect: connection refused" start-of-body=
Feb 17 15:21:02.246100 master-0 kubenswrapper[26425]: I0217 15:21:02.245154 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" podUID="33e819b0-5a3f-4c2d-9dc7-8b0231804cdb" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.15:8080/healthz\": dial tcp 10.128.0.15:8080: connect: connection refused"
Feb 17 15:21:02.585124 master-0 kubenswrapper[26425]: E0217 15:21:02.584944 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:21:02.585370 master-0 kubenswrapper[26425]: E0217 15:21:02.585242 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.019s"
Feb 17 15:21:02.585447 master-0 kubenswrapper[26425]: I0217 15:21:02.585401 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:21:02.586505 master-0 kubenswrapper[26425]: I0217 15:21:02.586401 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="56b915f9-7034-4957-846c-ef83087a4288"
Feb 17 15:21:02.586505 master-0 kubenswrapper[26425]: I0217 15:21:02.586497 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="56b915f9-7034-4957-846c-ef83087a4288"
Feb 17 15:21:02.586860 master-0 kubenswrapper[26425]: I0217 15:21:02.586799 26425 scope.go:117] "RemoveContainer" containerID="b86a492f597b80e76da870edbd5aa60b116fd208f8fcff47303644a8e0039f9b"
Feb 17 15:21:02.586938 master-0 kubenswrapper[26425]: I0217 15:21:02.586878 26425 scope.go:117] "RemoveContainer"
containerID="50d813c00eb4ee20e7e4a0770f94362bd89a3e9a431dc0d899c42e55cc8f993e"
Feb 17 15:21:02.587342 master-0 kubenswrapper[26425]: I0217 15:21:02.587275 26425 scope.go:117] "RemoveContainer" containerID="bfa4241e9cbb9bb3dc9c0b9ecf26410125b91a6e764bdf4080c3457126bf7fdc"
Feb 17 15:21:02.587994 master-0 kubenswrapper[26425]: I0217 15:21:02.587940 26425 scope.go:117] "RemoveContainer" containerID="dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970"
Feb 17 15:21:02.588803 master-0 kubenswrapper[26425]: I0217 15:21:02.588756 26425 scope.go:117] "RemoveContainer" containerID="1cfd0ad488c82b15998a7888c979dda06fa4a01761beb9e5d6d35b295908c57a"
Feb 17 15:21:02.588917 master-0 kubenswrapper[26425]: I0217 15:21:02.588876 26425 scope.go:117] "RemoveContainer" containerID="4c47c374b75591c1874c057cb8609aad6e1b60685643b76979aadb8e2ca53712"
Feb 17 15:21:02.589190 master-0 kubenswrapper[26425]: I0217 15:21:02.589131 26425 scope.go:117] "RemoveContainer" containerID="afb6acf2a5178774fc88b9857020ac3a9778d76f3535d0f37b9711d4fea47c48"
Feb 17 15:21:02.603278 master-0 kubenswrapper[26425]: I0217 15:21:02.603192 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Feb 17 15:21:02.990960 master-0 kubenswrapper[26425]: E0217 15:21:02.990888 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:21:03.058426 master-0 kubenswrapper[26425]: I0217 15:21:03.058366 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-t7n5b_33e819b0-5a3f-4c2d-9dc7-8b0231804cdb/package-server-manager/1.log"
Feb 17 15:21:03.060821 master-0 kubenswrapper[26425]: I0217 15:21:03.060796 26425
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/2.log"
Feb 17 15:21:03.061549 master-0 kubenswrapper[26425]: I0217 15:21:03.061532 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/1.log"
Feb 17 15:21:03.063131 master-0 kubenswrapper[26425]: I0217 15:21:03.063085 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/0.log"
Feb 17 15:21:03.064286 master-0 kubenswrapper[26425]: I0217 15:21:03.064245 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log"
Feb 17 15:21:03.068489 master-0 kubenswrapper[26425]: I0217 15:21:03.068418 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-ff6c9b66-k8xp8_071566ae-a9ae-4aa9-9dc3-38602363be72/cluster-node-tuning-operator/1.log"
Feb 17 15:21:04.093149 master-0 kubenswrapper[26425]: I0217 15:21:04.093084 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/2.log"
Feb 17 15:21:04.095502 master-0 kubenswrapper[26425]: I0217 15:21:04.095430 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/1.log"
Feb 17 15:21:04.097723 master-0 kubenswrapper[26425]: I0217 15:21:04.097674 26425 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/0.log"
Feb 17 15:21:04.099320 master-0 kubenswrapper[26425]: I0217 15:21:04.099260 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log"
Feb 17 15:21:08.934250 master-0 kubenswrapper[26425]: I0217 15:21:08.934155 26425 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body=
Feb 17 15:21:08.934985 master-0 kubenswrapper[26425]: I0217 15:21:08.934253 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused"
Feb 17 15:21:09.838736 master-0 kubenswrapper[26425]: E0217 15:21:09.838634 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:21:14.189664 master-0 kubenswrapper[26425]: I0217 15:21:14.189572 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-l24cg_4fd2c79d-1e10-4f09-8a33-c66598abc99a/network-operator/1.log"
Feb 17 15:21:14.190550 master-0 kubenswrapper[26425]: I0217 15:21:14.189673 26425 generic.go:334] "Generic (PLEG): container
finished" podID="4fd2c79d-1e10-4f09-8a33-c66598abc99a" containerID="3d42744bc55ffdd0ef5a58be1827ed2cd005681379705cfa9b05d7d0639649ee" exitCode=0
Feb 17 15:21:16.211715 master-0 kubenswrapper[26425]: I0217 15:21:16.211634 26425 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="88cbd41012314cb9ee211332196a857cc4bf4c35b6149a5c3069d9a70f29b51a" exitCode=0
Feb 17 15:21:16.214420 master-0 kubenswrapper[26425]: I0217 15:21:16.214357 26425 generic.go:334] "Generic (PLEG): container finished" podID="da06cfcb-7c78-4022-96b1-d858853f5adc" containerID="d6df48814b566ca92cfa0739d561cf9daa945b55707b972a933430e336c6c185" exitCode=0
Feb 17 15:21:17.227992 master-0 kubenswrapper[26425]: I0217 15:21:17.227814 26425 generic.go:334] "Generic (PLEG): container finished" podID="6c734c89-515e-4ff0-82d1-831ddaf0b99e" containerID="590e8fe24ffb416ddbf90918b458930e7fec94c62687bb9e8c21a6053d7a588b" exitCode=0
Feb 17 15:21:17.231605 master-0 kubenswrapper[26425]: I0217 15:21:17.231547 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/2.log"
Feb 17 15:21:17.231779 master-0 kubenswrapper[26425]: I0217 15:21:17.231612 26425 generic.go:334] "Generic (PLEG): container finished" podID="553d4535-9985-47e2-83ee-8fcfb6035e7b" containerID="e5a73638e40c519ad84123382ac658619b9dc2d362942e0bd81784b6f5c9f036" exitCode=0
Feb 17 15:21:18.245062 master-0 kubenswrapper[26425]: I0217 15:21:18.244983 26425 generic.go:334] "Generic (PLEG): container finished" podID="b0f95c87-6a4a-44f2-b6d4-18f167ea430f" containerID="61b2318958d23ebdf6e3bca6a8a2b1ccba3a4aa509b4a359e7fb8a050a5801c3" exitCode=0
Feb 17 15:21:18.248330 master-0 kubenswrapper[26425]: I0217 15:21:18.248225 26425 generic.go:334] "Generic (PLEG): container finished" podID="61d90bf3-02df-48c8-b2ec-09a1653b0800"
containerID="532e13d86043cf03e79537b7223ceabdbcdf6100bfe944f35eb6876ce0a808a2" exitCode=0
Feb 17 15:21:18.251183 master-0 kubenswrapper[26425]: I0217 15:21:18.251125 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/2.log"
Feb 17 15:21:18.251380 master-0 kubenswrapper[26425]: I0217 15:21:18.251191 26425 generic.go:334] "Generic (PLEG): container finished" podID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerID="e6c4e604cd376c77d1ad67bda0d96a444c6b00840760cb0d36d61ad455656dd0" exitCode=0
Feb 17 15:21:18.933331 master-0 kubenswrapper[26425]: I0217 15:21:18.933276 26425 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body=
Feb 17 15:21:18.933792 master-0 kubenswrapper[26425]: I0217 15:21:18.933732 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused"
Feb 17 15:21:19.176295 master-0 kubenswrapper[26425]: I0217 15:21:19.176214 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:21:19.176784 master-0 kubenswrapper[26425]: I0217 15:21:19.176292 26425 prober.go:107] "Probe failed" probeType="Readiness"
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:21:19.263276 master-0 kubenswrapper[26425]: I0217 15:21:19.263171 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-wcpf8_2b167b7b-2280-4c82-ac78-71c57aebe503/kube-scheduler-operator-container/1.log"
Feb 17 15:21:19.263276 master-0 kubenswrapper[26425]: I0217 15:21:19.263261 26425 generic.go:334] "Generic (PLEG): container finished" podID="2b167b7b-2280-4c82-ac78-71c57aebe503" containerID="dfe6ffb450b0904261ab46cf367ace40b648e6342b7e1df240b49e249ecafeaa" exitCode=0
Feb 17 15:21:19.406055 master-0 kubenswrapper[26425]: I0217 15:21:19.405719 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:21:19.406055 master-0 kubenswrapper[26425]: I0217 15:21:19.405854 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:21:19.839491 master-0 kubenswrapper[26425]: E0217 15:21:19.839383 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded
while awaiting headers)"
Feb 17 15:21:19.992811 master-0 kubenswrapper[26425]: E0217 15:21:19.992676 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:21:21.285217 master-0 kubenswrapper[26425]: I0217 15:21:21.285148 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/2.log"
Feb 17 15:21:21.285822 master-0 kubenswrapper[26425]: I0217 15:21:21.285229 26425 generic.go:334] "Generic (PLEG): container finished" podID="e259b5a1-837b-4cde-85f7-cd5781af08bd" containerID="0b8262975cf51c409ae05462f6db811ce0d8908ad2a83500403ab60076ef6470" exitCode=0
Feb 17 15:21:22.175872 master-0 kubenswrapper[26425]: I0217 15:21:22.175759 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:21:22.175872 master-0 kubenswrapper[26425]: I0217 15:21:22.175859 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:21:22.403218 master-0 kubenswrapper[26425]: I0217 15:21:22.403040 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator:
Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:21:22.404059 master-0 kubenswrapper[26425]: I0217 15:21:22.403241 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:21:23.247451 master-0 kubenswrapper[26425]: I0217 15:21:23.247371 26425 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" start-of-body=
Feb 17 15:21:23.247451 master-0 kubenswrapper[26425]: I0217 15:21:23.247439 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused"
Feb 17 15:21:23.304625 master-0 kubenswrapper[26425]: I0217 15:21:23.304557 26425 generic.go:334] "Generic (PLEG): container finished" podID="ad81b5bd-2f97-4e7e-a12b-746998fa59f2" containerID="1ac9a237c052e7fcf84aea4376a51f8bc274e44722f869b5fc32cf99dd2e4eac" exitCode=0
Feb 17 15:21:24.316499 master-0 kubenswrapper[26425]: I0217 15:21:24.316261 26425 generic.go:334] "Generic (PLEG): container finished" podID="801742a6-3735-4883-9676-e852dc4173d2" containerID="397fbf5ccf990e80c088873d4e4e76e21d50aac3d21cada9a0e4b497c3afd20e" exitCode=0
Feb 17 15:21:25.176551 master-0 kubenswrapper[26425]: I0217
15:21:25.176305 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:21:25.176551 master-0 kubenswrapper[26425]: I0217 15:21:25.176432 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:21:25.403601 master-0 kubenswrapper[26425]: I0217 15:21:25.403451 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:21:25.403601 master-0 kubenswrapper[26425]: I0217 15:21:25.403587 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:21:28.176184 master-0 kubenswrapper[26425]: I0217 15:21:28.176080 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:21:28.176184 master-0 kubenswrapper[26425]:
I0217 15:21:28.176168 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:21:28.934441 master-0 kubenswrapper[26425]: I0217 15:21:28.934349 26425 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body=
Feb 17 15:21:28.934441 master-0 kubenswrapper[26425]: I0217 15:21:28.934429 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused"
Feb 17 15:21:29.840980 master-0 kubenswrapper[26425]: E0217 15:21:29.840852 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)"
Feb 17 15:21:30.061580 master-0 kubenswrapper[26425]: E0217 15:21:30.061348 26425 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{apiserver-865765995-c58rq.18951195241ab8a5 openshift-oauth-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-oauth-apiserver,Name:apiserver-865765995-c58rq,UID:124ba199-b79a-4e5c-8512-cc0ae50f73c8,APIVersion:v1,ResourceVersion:7699,FieldPath:spec.containers{oauth-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:16:00.693688485 +0000 UTC m=+22.585412333,LastTimestamp:2026-02-17 15:16:00.693688485 +0000 UTC m=+22.585412333,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:21:31.175840 master-0 kubenswrapper[26425]: I0217 15:21:31.175727 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:21:31.175840 master-0 kubenswrapper[26425]: I0217 15:21:31.175821 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:21:33.247631 master-0 kubenswrapper[26425]: I0217 15:21:33.247510 26425 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" start-of-body=
Feb 17 15:21:33.247631 master-0 kubenswrapper[26425]: I0217 15:21:33.247603 26425 prober.go:107] "Probe failed" probeType="Liveness"
pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused"
Feb 17 15:21:34.176130 master-0 kubenswrapper[26425]: I0217 15:21:34.176001 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:21:34.176130 master-0 kubenswrapper[26425]: I0217 15:21:34.176110 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:21:34.406027 master-0 kubenswrapper[26425]: I0217 15:21:34.405957 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/3.log"
Feb 17 15:21:34.406801 master-0 kubenswrapper[26425]: I0217 15:21:34.406735 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/2.log"
Feb 17 15:21:34.408227 master-0 kubenswrapper[26425]: I0217 15:21:34.408183 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/1.log"
Feb 17 15:21:34.410678 master-0 kubenswrapper[26425]: I0217
15:21:34.410622 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/0.log"
Feb 17 15:21:34.412071 master-0 kubenswrapper[26425]: I0217 15:21:34.412041 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log"
Feb 17 15:21:34.412165 master-0 kubenswrapper[26425]: I0217 15:21:34.412079 26425 generic.go:334] "Generic (PLEG): container finished" podID="27fd92ef556705625a2e4f1011322252" containerID="ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4" exitCode=255
Feb 17 15:21:36.590061 master-0 kubenswrapper[26425]: E0217 15:21:36.590008 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 17 15:21:36.605866 master-0 kubenswrapper[26425]: E0217 15:21:36.605807 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:21:36.606152 master-0 kubenswrapper[26425]: E0217 15:21:36.606108 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.021s"
Feb 17 15:21:36.606231 master-0 kubenswrapper[26425]: I0217 15:21:36.606173 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"
Feb 17 15:21:36.606231 master-0 kubenswrapper[26425]: I0217 15:21:36.606210 26425 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
containerID="cri-o://9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0"
Feb 17 15:21:36.606231 master-0 kubenswrapper[26425]: I0217 15:21:36.606228 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:21:36.606555 master-0 kubenswrapper[26425]: I0217 15:21:36.606508 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:21:36.606555 master-0 kubenswrapper[26425]: I0217 15:21:36.606549 26425 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764"
Feb 17 15:21:36.606698 master-0 kubenswrapper[26425]: I0217 15:21:36.606566 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:21:36.610546 master-0 kubenswrapper[26425]: I0217 15:21:36.610433 26425 scope.go:117] "RemoveContainer" containerID="e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690"
Feb 17 15:21:36.611574 master-0 kubenswrapper[26425]: I0217 15:21:36.611536 26425 scope.go:117] "RemoveContainer" containerID="e5a73638e40c519ad84123382ac658619b9dc2d362942e0bd81784b6f5c9f036"
Feb 17 15:21:36.612364 master-0 kubenswrapper[26425]: I0217 15:21:36.612216 26425 scope.go:117] "RemoveContainer" containerID="88cbd41012314cb9ee211332196a857cc4bf4c35b6149a5c3069d9a70f29b51a"
Feb 17 15:21:36.614608 master-0 kubenswrapper[26425]: I0217 15:21:36.614577 26425 scope.go:117] "RemoveContainer" containerID="ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4"
Feb 17 15:21:36.616916 master-0 kubenswrapper[26425]: I0217 15:21:36.616871 26425 scope.go:117] "RemoveContainer"
containerID="8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4"
Feb 17 15:21:36.617128 master-0 kubenswrapper[26425]: I0217 15:21:36.617033 26425 scope.go:117] "RemoveContainer" containerID="8c3de091b26b63488ddbcb0fd31c122edf5d7a587d35c169e265f4e9d06987b5"
Feb 17 15:21:36.619833 master-0 kubenswrapper[26425]: I0217 15:21:36.617600 26425 scope.go:117] "RemoveContainer" containerID="d6df48814b566ca92cfa0739d561cf9daa945b55707b972a933430e336c6c185"
Feb 17 15:21:36.619833 master-0 kubenswrapper[26425]: I0217 15:21:36.617800 26425 scope.go:117] "RemoveContainer" containerID="532e13d86043cf03e79537b7223ceabdbcdf6100bfe944f35eb6876ce0a808a2"
Feb 17 15:21:36.619833 master-0 kubenswrapper[26425]: I0217 15:21:36.618298 26425 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-apiserver" containerStatusID={"Type":"cri-o","ID":"da09e4a5b3dba77dbd04689a11e6d73f307ccd2ac6de0aff2e732163788d68b5"} pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" containerMessage="Container oauth-apiserver failed startup probe, will be restarted"
Feb 17 15:21:36.619833 master-0 kubenswrapper[26425]: I0217 15:21:36.618368 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" containerID="cri-o://da09e4a5b3dba77dbd04689a11e6d73f307ccd2ac6de0aff2e732163788d68b5" gracePeriod=120
Feb 17 15:21:36.619833 master-0 kubenswrapper[26425]: I0217 15:21:36.619720 26425 scope.go:117] "RemoveContainer" containerID="397fbf5ccf990e80c088873d4e4e76e21d50aac3d21cada9a0e4b497c3afd20e"
Feb 17 15:21:36.620918 master-0 kubenswrapper[26425]: E0217 15:21:36.620011 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller
pod=kube-controller-manager-master-0_openshift-kube-controller-manager(27fd92ef556705625a2e4f1011322252)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252"
Feb 17 15:21:36.620918 master-0 kubenswrapper[26425]: I0217 15:21:36.620888 26425 scope.go:117] "RemoveContainer" containerID="dfe6ffb450b0904261ab46cf367ace40b648e6342b7e1df240b49e249ecafeaa"
Feb 17 15:21:36.621552 master-0 kubenswrapper[26425]: I0217 15:21:36.621513 26425 scope.go:117] "RemoveContainer" containerID="da1858700d4dd348bd1bd6965ebad759d727564f2555dd6372efe783d1762809"
Feb 17 15:21:36.621909 master-0 kubenswrapper[26425]: I0217 15:21:36.621827 26425 scope.go:117] "RemoveContainer" containerID="a17a8feb8cde32d9f769f1d063cb256b0434b87c2646d32dfbbaf8c558e68235"
Feb 17 15:21:36.621983 master-0 kubenswrapper[26425]: I0217 15:21:36.621943 26425 scope.go:117] "RemoveContainer" containerID="61b2318958d23ebdf6e3bca6a8a2b1ccba3a4aa509b4a359e7fb8a050a5801c3"
Feb 17 15:21:36.622511 master-0 kubenswrapper[26425]: I0217 15:21:36.621619 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Feb 17 15:21:36.623176 master-0 kubenswrapper[26425]: I0217 15:21:36.623130 26425 scope.go:117] "RemoveContainer" containerID="4ca2a1481cf68af809d23ae9ad2e79b63336d3be01516204a6730a744e080f72"
Feb 17 15:21:36.623678 master-0 kubenswrapper[26425]: I0217 15:21:36.623338 26425 scope.go:117] "RemoveContainer" containerID="6eec33455162a27fe10d4874ae93c26e71a281f59a9f0a675a04a71ca4bfd694"
Feb 17 15:21:36.624145 master-0 kubenswrapper[26425]: I0217 15:21:36.623914 26425 scope.go:117] "RemoveContainer" containerID="e6c4e604cd376c77d1ad67bda0d96a444c6b00840760cb0d36d61ad455656dd0"
Feb 17 15:21:36.625000 master-0 kubenswrapper[26425]: I0217 15:21:36.624413 26425 scope.go:117] "RemoveContainer" containerID="0b8262975cf51c409ae05462f6db811ce0d8908ad2a83500403ab60076ef6470"
Feb 17 15:21:36.625478
master-0 kubenswrapper[26425]: I0217 15:21:36.625227 26425 scope.go:117] "RemoveContainer" containerID="1cf423e31a88736056f1999dcd941a944e9de281f289a68cb4692796b704d37a" Feb 17 15:21:36.993999 master-0 kubenswrapper[26425]: E0217 15:21:36.993855 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 17 15:21:37.443808 master-0 kubenswrapper[26425]: I0217 15:21:37.443773 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-tckph_0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/kube-storage-version-migrator-operator/2.log" Feb 17 15:21:37.452929 master-0 kubenswrapper[26425]: I0217 15:21:37.452891 26425 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="a8988cec11fd110131ab62b289c0ff6085ef1250cc85630f2ae1bdbdb0bbfda2" exitCode=0 Feb 17 15:21:37.457567 master-0 kubenswrapper[26425]: I0217 15:21:37.455600 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/1.log" Feb 17 15:21:37.457567 master-0 kubenswrapper[26425]: I0217 15:21:37.456546 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/0.log" Feb 17 15:21:37.461950 master-0 kubenswrapper[26425]: I0217 15:21:37.461899 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-67fd9768b5-6dzpr_c8646e5c-c2ce-48e6-b757-58044769f479/cluster-autoscaler-operator/0.log" Feb 17 15:21:37.472519 
master-0 kubenswrapper[26425]: I0217 15:21:37.472490 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/2.log" Feb 17 15:21:37.478742 master-0 kubenswrapper[26425]: I0217 15:21:37.478717 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/5.log" Feb 17 15:21:37.479430 master-0 kubenswrapper[26425]: I0217 15:21:37.479395 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/4.log" Feb 17 15:21:37.480869 master-0 kubenswrapper[26425]: I0217 15:21:37.480806 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/3.log" Feb 17 15:21:37.492581 master-0 kubenswrapper[26425]: I0217 15:21:37.492521 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-g6fgz_655e4000-0ad4-4349-8c31-e0c952e4be30/machine-api-operator/0.log" Feb 17 15:21:37.506517 master-0 kubenswrapper[26425]: I0217 15:21:37.505139 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/2.log" Feb 17 15:21:38.530153 master-0 kubenswrapper[26425]: I0217 15:21:38.530082 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/2.log" Feb 17 15:21:38.534383 master-0 
kubenswrapper[26425]: I0217 15:21:38.534310 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-wcpf8_2b167b7b-2280-4c82-ac78-71c57aebe503/kube-scheduler-operator-container/1.log" Feb 17 15:21:38.537827 master-0 kubenswrapper[26425]: I0217 15:21:38.537716 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-g6fgz_655e4000-0ad4-4349-8c31-e0c952e4be30/machine-api-operator/0.log" Feb 17 15:21:39.518067 master-0 kubenswrapper[26425]: I0217 15:21:39.517962 26425 scope.go:117] "RemoveContainer" containerID="b67b9db47d025278eedfe7f04574ddab8f98126aef0c22b6f402dd2396b510a8" Feb 17 15:21:39.571808 master-0 kubenswrapper[26425]: I0217 15:21:39.571747 26425 scope.go:117] "RemoveContainer" containerID="acb11f90f31b36431471e58a5606b8c3af358cc8197512729e33f3481e310e60" Feb 17 15:21:39.633893 master-0 kubenswrapper[26425]: I0217 15:21:39.633827 26425 scope.go:117] "RemoveContainer" containerID="db0dcecfe2a042268864f0d7f4d56cbdc089e71bde33d4f68886ce775e3eeb52" Feb 17 15:21:39.679011 master-0 kubenswrapper[26425]: I0217 15:21:39.678959 26425 scope.go:117] "RemoveContainer" containerID="477671fff24fa6c32a024908ab3cc22818f79df79458186eb17cd6a91eb44b4f" Feb 17 15:21:39.721835 master-0 kubenswrapper[26425]: I0217 15:21:39.721773 26425 scope.go:117] "RemoveContainer" containerID="2e491cb15463a078f03468285bf55e7f054cca1c528834a6f29b9effbdeb75f4" Feb 17 15:21:39.753486 master-0 kubenswrapper[26425]: I0217 15:21:39.753399 26425 scope.go:117] "RemoveContainer" containerID="c37b7a8b6b89d90619e0434b3f19d1c552551ee3029bb3ef42107c3c450c9cb1" Feb 17 15:21:39.813989 master-0 kubenswrapper[26425]: I0217 15:21:39.813911 26425 scope.go:117] "RemoveContainer" containerID="13fd27ae7e51b2ce5e96bcf2c8231506a7b48822721ae68c680d8a96bd1e5103" Feb 17 15:21:39.841304 master-0 kubenswrapper[26425]: E0217 15:21:39.841211 26425 kubelet_node_status.go:585] 
"Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:21:39.841304 master-0 kubenswrapper[26425]: E0217 15:21:39.841287 26425 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:21:39.855243 master-0 kubenswrapper[26425]: I0217 15:21:39.855175 26425 scope.go:117] "RemoveContainer" containerID="0782c7f0d5ddfa48d6cd6d3f38b88b85eb9375711ddb12c97f5638b11c8924d5" Feb 17 15:21:39.889593 master-0 kubenswrapper[26425]: I0217 15:21:39.889542 26425 scope.go:117] "RemoveContainer" containerID="f39a2941da8acf9c022d9ee8fee7bd53fe9f2ec2201845d6f776f31736d87bf2" Feb 17 15:21:39.921132 master-0 kubenswrapper[26425]: I0217 15:21:39.921082 26425 scope.go:117] "RemoveContainer" containerID="6d9a92eb2e644f956d98f7c0c8da65baf4f27d9eba13c8c64b77e173d1e323c4" Feb 17 15:21:50.199236 master-0 kubenswrapper[26425]: I0217 15:21:50.199115 26425 status_manager.go:851] "Failed to get status for pod" podUID="7c393109-8c98-4a73-be1a-608038e5d094" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods metrics-server-f94977f65-sgf5z)" Feb 17 15:21:52.692171 master-0 kubenswrapper[26425]: I0217 15:21:52.691999 26425 generic.go:334] "Generic (PLEG): container finished" podID="626c4f7a-59ee-45da-9198-05dd2c42ac42" containerID="98474fa2fe73c4db5804824208857baff7e2d6a53dfa4d32d3b7d0f00e99e897" exitCode=0 Feb 17 15:21:53.995633 master-0 kubenswrapper[26425]: E0217 15:21:53.995506 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" interval="7s" Feb 17 15:21:56.395348 master-0 kubenswrapper[26425]: I0217 15:21:56.395206 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:21:56.396519 master-0 kubenswrapper[26425]: E0217 15:21:56.395536 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:21:56.396519 master-0 kubenswrapper[26425]: E0217 15:21:56.395595 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:21:56.396519 master-0 kubenswrapper[26425]: E0217 15:21:56.395703 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:23:58.395670356 +0000 UTC m=+500.287394214 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:21:58.746703 master-0 kubenswrapper[26425]: I0217 15:21:58.746585 26425 generic.go:334] "Generic (PLEG): container finished" podID="c97d328c-95b6-4511-aa90-531ab42b9653" containerID="eac7810e63e39b854e1c16b4c3a8efd314bc8ba25306e76c49cd7325f9e050a2" exitCode=0 Feb 17 15:21:59.891271 master-0 kubenswrapper[26425]: E0217 15:21:59.891163 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:21:49Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:21:49Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:21:49Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:21:49Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:22:04.063847 master-0 kubenswrapper[26425]: E0217 15:22:04.063651 26425 event.go:359] "Server rejected event (will 
not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Feb 17 15:22:04.063847 master-0 kubenswrapper[26425]: &Event{ObjectMeta:{kube-controller-manager-master-0.1895119689e7f8dc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:27fd92ef556705625a2e4f1011322252,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused Feb 17 15:22:04.063847 master-0 kubenswrapper[26425]: body: Feb 17 15:22:04.063847 master-0 kubenswrapper[26425]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:16:06.696605916 +0000 UTC m=+28.588329774,LastTimestamp:2026-02-17 15:16:06.696605916 +0000 UTC m=+28.588329774,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Feb 17 15:22:04.063847 master-0 kubenswrapper[26425]: > Feb 17 15:22:07.838821 master-0 kubenswrapper[26425]: I0217 15:22:07.838698 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/4.log" Feb 17 15:22:07.839692 master-0 kubenswrapper[26425]: I0217 15:22:07.839404 26425 generic.go:334] "Generic (PLEG): container finished" podID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerID="2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a" exitCode=255 Feb 17 15:22:07.842126 master-0 kubenswrapper[26425]: I0217 15:22:07.842067 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/6.log" Feb 17 15:22:07.843068 master-0 kubenswrapper[26425]: I0217 15:22:07.843006 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/5.log" Feb 17 15:22:07.843853 master-0 kubenswrapper[26425]: I0217 15:22:07.843746 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/4.log" Feb 17 15:22:07.844497 master-0 kubenswrapper[26425]: I0217 15:22:07.844412 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/3.log" Feb 17 15:22:07.844647 master-0 kubenswrapper[26425]: I0217 15:22:07.844513 26425 generic.go:334] "Generic (PLEG): container finished" podID="129dba1e-73df-4ea4-96c0-3eba78d568ba" containerID="8444e61e0a1d073b9d65f699d27fabb5a7a087bae3f88d3d6591a10e39f9c52a" exitCode=1 Feb 17 15:22:09.891832 master-0 kubenswrapper[26425]: E0217 15:22:09.891677 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:22:10.625652 master-0 kubenswrapper[26425]: E0217 15:22:10.625564 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:22:10.625948 master-0 kubenswrapper[26425]: E0217 15:22:10.625828 
26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.019s" Feb 17 15:22:10.625948 master-0 kubenswrapper[26425]: I0217 15:22:10.625872 26425 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" containerID="cri-o://e78076928670aead1e74a90bfe18141b9748ba5b397af907cd88d6d09ee87278" Feb 17 15:22:10.625948 master-0 kubenswrapper[26425]: I0217 15:22:10.625890 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" Feb 17 15:22:10.625948 master-0 kubenswrapper[26425]: I0217 15:22:10.625927 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:22:10.627222 master-0 kubenswrapper[26425]: I0217 15:22:10.627164 26425 scope.go:117] "RemoveContainer" containerID="ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4" Feb 17 15:22:10.627688 master-0 kubenswrapper[26425]: E0217 15:22:10.627641 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(27fd92ef556705625a2e4f1011322252)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" Feb 17 15:22:10.639373 master-0 kubenswrapper[26425]: I0217 15:22:10.639309 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Feb 17 15:22:10.997589 master-0 kubenswrapper[26425]: E0217 15:22:10.997311 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 17 15:22:19.893643 master-0 kubenswrapper[26425]: E0217 15:22:19.893044 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:22:27.998433 master-0 kubenswrapper[26425]: E0217 15:22:27.998355 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 17 15:22:29.894103 master-0 kubenswrapper[26425]: E0217 15:22:29.894003 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:22:34.220059 master-0 kubenswrapper[26425]: I0217 15:22:34.219981 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/4.log" Feb 17 15:22:34.221255 master-0 kubenswrapper[26425]: I0217 15:22:34.221207 26425 generic.go:334] "Generic (PLEG): container finished" podID="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" containerID="7556c38c0ce7c0f1754a084197e4432145eeb49bf645ec1bee8c1dc9c0d4a268" exitCode=255 Feb 17 15:22:34.224049 master-0 kubenswrapper[26425]: I0217 15:22:34.224018 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/3.log" Feb 17 15:22:34.224656 master-0 kubenswrapper[26425]: I0217 15:22:34.224624 26425 generic.go:334] "Generic (PLEG): container finished" podID="af61bda0-c7b4-489d-a671-eaa5299942fe" containerID="2d1c2b7b658a0650d74a0397ff5fc31a239dc4240eb43135e54d5e15f20a2159" exitCode=255 Feb 17 15:22:34.228259 master-0 kubenswrapper[26425]: I0217 15:22:34.228241 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/4.log" Feb 17 15:22:34.228845 master-0 kubenswrapper[26425]: I0217 15:22:34.228819 26425 generic.go:334] "Generic (PLEG): container finished" podID="65d9f008-7777-48fe-85fe-9d54a7bbcea9" containerID="cd927d8c4044c2b3e7bb267f90872033be717a1ee13eee2ba57f7b0c0267ae94" exitCode=255 Feb 17 15:22:37.256989 master-0 kubenswrapper[26425]: I0217 15:22:37.256746 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/2.log" Feb 17 15:22:37.258053 master-0 kubenswrapper[26425]: I0217 15:22:37.257763 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/1.log" Feb 17 15:22:37.258985 master-0 kubenswrapper[26425]: I0217 15:22:37.258935 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/0.log" Feb 17 15:22:37.259081 master-0 kubenswrapper[26425]: I0217 15:22:37.259005 26425 generic.go:334] "Generic (PLEG): container finished" podID="7307f70e-ee5b-4f81-8155-718a02c9efe7" 
containerID="cfe1921aeffedf72afcc3d47606c3faa1e4d7dfc111ed225203d93fe2e7c6ebc" exitCode=1 Feb 17 15:22:38.067050 master-0 kubenswrapper[26425]: E0217 15:22:38.066851 26425 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.1895119689ea6eae openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:27fd92ef556705625a2e4f1011322252,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:16:06.69676715 +0000 UTC m=+28.588490988,LastTimestamp:2026-02-17 15:16:06.69676715 +0000 UTC m=+28.588490988,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:22:39.894704 master-0 kubenswrapper[26425]: E0217 15:22:39.894586 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:22:39.894704 master-0 kubenswrapper[26425]: E0217 15:22:39.894659 26425 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:22:44.643113 master-0 kubenswrapper[26425]: E0217 15:22:44.643059 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:22:44.643732 master-0 kubenswrapper[26425]: E0217 15:22:44.643254 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.017s" Feb 17 15:22:44.643732 master-0 kubenswrapper[26425]: I0217 15:22:44.643403 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:22:44.643732 master-0 kubenswrapper[26425]: I0217 15:22:44.643446 26425 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970" Feb 17 15:22:44.643732 master-0 kubenswrapper[26425]: I0217 15:22:44.643492 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:22:44.643732 master-0 kubenswrapper[26425]: I0217 15:22:44.643513 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-xwftw" event={"ID":"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0","Type":"ContainerDied","Data":"be8f29548cec98725a9fe2f2e764da4e1fd8b3547c172ac45765b13bbbf51c52"} Feb 17 15:22:44.644000 master-0 kubenswrapper[26425]: I0217 15:22:44.643921 26425 scope.go:117] "RemoveContainer" containerID="1ac9a237c052e7fcf84aea4376a51f8bc274e44722f869b5fc32cf99dd2e4eac" Feb 17 15:22:44.644405 master-0 kubenswrapper[26425]: I0217 15:22:44.644363 26425 scope.go:117] "RemoveContainer" containerID="2d1c2b7b658a0650d74a0397ff5fc31a239dc4240eb43135e54d5e15f20a2159" Feb 17 15:22:44.645213 master-0 kubenswrapper[26425]: I0217 15:22:44.645036 26425 status_manager.go:379] "Container startup changed for unknown container" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970" Feb 17 15:22:44.645213 master-0 kubenswrapper[26425]: I0217 15:22:44.645058 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:22:44.645213 master-0 kubenswrapper[26425]: I0217 15:22:44.645084 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" Feb 17 15:22:44.645809 master-0 kubenswrapper[26425]: I0217 15:22:44.645522 26425 scope.go:117] "RemoveContainer" containerID="590e8fe24ffb416ddbf90918b458930e7fec94c62687bb9e8c21a6053d7a588b" Feb 17 15:22:44.646630 master-0 kubenswrapper[26425]: I0217 15:22:44.645953 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="56b915f9-7034-4957-846c-ef83087a4288" Feb 17 15:22:44.646630 master-0 kubenswrapper[26425]: I0217 15:22:44.645982 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="56b915f9-7034-4957-846c-ef83087a4288" Feb 17 15:22:44.650995 master-0 kubenswrapper[26425]: I0217 15:22:44.650939 26425 scope.go:117] "RemoveContainer" containerID="7556c38c0ce7c0f1754a084197e4432145eeb49bf645ec1bee8c1dc9c0d4a268" Feb 17 15:22:44.652080 master-0 kubenswrapper[26425]: I0217 15:22:44.651642 26425 scope.go:117] "RemoveContainer" containerID="3d42744bc55ffdd0ef5a58be1827ed2cd005681379705cfa9b05d7d0639649ee" Feb 17 15:22:44.653138 master-0 kubenswrapper[26425]: I0217 15:22:44.653112 26425 scope.go:117] "RemoveContainer" containerID="ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4" Feb 17 15:22:44.653321 master-0 kubenswrapper[26425]: I0217 15:22:44.653280 26425 scope.go:117] "RemoveContainer" containerID="cd927d8c4044c2b3e7bb267f90872033be717a1ee13eee2ba57f7b0c0267ae94" Feb 17 15:22:44.653669 
master-0 kubenswrapper[26425]: I0217 15:22:44.653445 26425 scope.go:117] "RemoveContainer" containerID="2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a" Feb 17 15:22:44.658504 master-0 kubenswrapper[26425]: I0217 15:22:44.658448 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Feb 17 15:22:44.999602 master-0 kubenswrapper[26425]: E0217 15:22:44.999297 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 17 15:22:45.332258 master-0 kubenswrapper[26425]: I0217 15:22:45.332090 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/3.log" Feb 17 15:22:45.332784 master-0 kubenswrapper[26425]: I0217 15:22:45.332716 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/2.log" Feb 17 15:22:45.334170 master-0 kubenswrapper[26425]: I0217 15:22:45.334124 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/1.log" Feb 17 15:22:45.345836 master-0 kubenswrapper[26425]: I0217 15:22:45.345768 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/0.log" Feb 17 15:22:45.347273 master-0 kubenswrapper[26425]: I0217 15:22:45.347225 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log" Feb 17 15:22:45.351813 master-0 kubenswrapper[26425]: I0217 15:22:45.351761 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/4.log" Feb 17 15:22:45.359718 master-0 kubenswrapper[26425]: I0217 15:22:45.359664 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/3.log" Feb 17 15:22:45.363728 master-0 kubenswrapper[26425]: I0217 15:22:45.363681 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/4.log" Feb 17 15:22:45.369023 master-0 kubenswrapper[26425]: I0217 15:22:45.368930 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/4.log" Feb 17 15:22:48.789309 master-0 kubenswrapper[26425]: I0217 15:22:48.789170 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:22:48.790166 master-0 kubenswrapper[26425]: I0217 15:22:48.789315 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:22:50.201607 master-0 kubenswrapper[26425]: I0217 15:22:50.201499 26425 status_manager.go:851] "Failed to get status for pod" podUID="c6d23570-21d6-4b08-83fc-8b0827c25313" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods marketplace-operator-6cc5b65c6b-wqxmh)" Feb 17 15:22:58.789248 master-0 kubenswrapper[26425]: I0217 15:22:58.789103 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:22:58.790319 master-0 kubenswrapper[26425]: I0217 15:22:58.789272 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:23:00.039554 master-0 kubenswrapper[26425]: E0217 15:23:00.039432 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:22:50Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:22:50Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:22:50Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:22:50Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:23:02.000902 master-0 kubenswrapper[26425]: E0217 15:23:02.000758 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 17 15:23:02.047961 master-0 kubenswrapper[26425]: I0217 15:23:02.047849 26425 patch_prober.go:28] interesting pod/etcd-operator-67bf55ccdd-pjm6n container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": net/http: TLS handshake timeout" start-of-body= Feb 17 15:23:02.048215 master-0 kubenswrapper[26425]: I0217 15:23:02.047976 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" podUID="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": net/http: TLS handshake timeout" Feb 17 15:23:08.586081 master-0 kubenswrapper[26425]: I0217 15:23:08.585988 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/4.log" Feb 17 15:23:08.589488 master-0 kubenswrapper[26425]: I0217 15:23:08.586863 26425 generic.go:334] "Generic (PLEG): container finished" podID="e259b5a1-837b-4cde-85f7-cd5781af08bd" containerID="f839d4a12bad794234a0f2d851c7efe010f9ebd13ec5cf23cda8e2d322859cb0" exitCode=255 Feb 17 15:23:08.590167 master-0 kubenswrapper[26425]: I0217 15:23:08.590103 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-9fpgj_801742a6-3735-4883-9676-e852dc4173d2/csi-snapshot-controller-operator/2.log" Feb 17 15:23:08.590884 master-0 kubenswrapper[26425]: I0217 15:23:08.590808 26425 generic.go:334] "Generic (PLEG): container finished" podID="801742a6-3735-4883-9676-e852dc4173d2" containerID="70326ad5a5e1e4f97a5917f73c6ab82e83c52761bca436e8031565f55dee5d69" exitCode=255 Feb 17 15:23:08.593728 master-0 kubenswrapper[26425]: I0217 15:23:08.593658 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/4.log" Feb 17 15:23:08.594364 master-0 kubenswrapper[26425]: I0217 15:23:08.594299 26425 generic.go:334] "Generic (PLEG): container finished" podID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerID="5a16d98391b5a8c270bf73a32b3c23f39afc9a4008644e0c6c54edd2ead6b65e" exitCode=255 Feb 17 15:23:08.597278 master-0 kubenswrapper[26425]: I0217 15:23:08.597182 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-676cd8b9b5-bfm5s_b0f95c87-6a4a-44f2-b6d4-18f167ea430f/service-ca-controller/2.log" Feb 17 15:23:08.597912 master-0 kubenswrapper[26425]: I0217 15:23:08.597848 26425 generic.go:334] "Generic (PLEG): container finished" 
podID="b0f95c87-6a4a-44f2-b6d4-18f167ea430f" containerID="9f4ff97f78b895ccae3eae818888447c665df48d3e7e4d485d835422e4f11a07" exitCode=255 Feb 17 15:23:08.600590 master-0 kubenswrapper[26425]: I0217 15:23:08.600548 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/4.log" Feb 17 15:23:08.601392 master-0 kubenswrapper[26425]: I0217 15:23:08.601325 26425 generic.go:334] "Generic (PLEG): container finished" podID="553d4535-9985-47e2-83ee-8fcfb6035e7b" containerID="d2876b15b465a0d8ebbe9f55288e61087919a08f0d0e689875fd148be01fd265" exitCode=255 Feb 17 15:23:08.604221 master-0 kubenswrapper[26425]: I0217 15:23:08.604153 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-fcnqs_61d90bf3-02df-48c8-b2ec-09a1653b0800/openshift-config-operator/3.log" Feb 17 15:23:08.605523 master-0 kubenswrapper[26425]: I0217 15:23:08.605414 26425 generic.go:334] "Generic (PLEG): container finished" podID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerID="107a5a083d9624ea5d741fb13e3ff30f66dfa53967ad5245600160a1d329de8e" exitCode=255 Feb 17 15:23:08.608088 master-0 kubenswrapper[26425]: I0217 15:23:08.608039 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/4.log" Feb 17 15:23:08.608839 master-0 kubenswrapper[26425]: I0217 15:23:08.608792 26425 generic.go:334] "Generic (PLEG): container finished" podID="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" containerID="84eef7d05b8afbba3d23598759f5c3487098f70b42806d1e65f876086638833b" exitCode=255 Feb 17 15:23:08.611381 master-0 kubenswrapper[26425]: I0217 15:23:08.611319 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-wcpf8_2b167b7b-2280-4c82-ac78-71c57aebe503/kube-scheduler-operator-container/3.log" Feb 17 15:23:08.612020 master-0 kubenswrapper[26425]: I0217 15:23:08.611960 26425 generic.go:334] "Generic (PLEG): container finished" podID="2b167b7b-2280-4c82-ac78-71c57aebe503" containerID="208ec9a373c676cde3764cb7b974029fd7d1923524fde98c291d6b3440136da0" exitCode=255 Feb 17 15:23:08.614429 master-0 kubenswrapper[26425]: I0217 15:23:08.614385 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-tckph_0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/kube-storage-version-migrator-operator/4.log" Feb 17 15:23:08.615342 master-0 kubenswrapper[26425]: I0217 15:23:08.615282 26425 generic.go:334] "Generic (PLEG): container finished" podID="0c58265d-32fb-4cf0-97d8-6c9a5d37fad9" containerID="531b8b8296ba91a17b09acc34a0c28963a357d302bacf35d4690f0ace03ca6e7" exitCode=255 Feb 17 15:23:08.788966 master-0 kubenswrapper[26425]: I0217 15:23:08.788868 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:23:08.788966 master-0 kubenswrapper[26425]: I0217 15:23:08.788955 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:23:10.040768 master-0 
kubenswrapper[26425]: E0217 15:23:10.040631 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: E0217 15:23:12.070214 26425 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: &Event{ObjectMeta:{apiserver-865765995-c58rq.189511952418a1bf openshift-oauth-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-oauth-apiserver,Name:apiserver-865765995-c58rq,UID:124ba199-b79a-4e5c-8512-cc0ae50f73c8,APIVersion:v1,ResourceVersion:7699,FieldPath:spec.containers{oauth-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: body: [+]ping ok Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: [+]log ok Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: [-]etcd failed: reason withheld Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: 
[+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: livez check failed Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:16:00.693551551 +0000 UTC m=+22.585275429,LastTimestamp:2026-02-17 15:16:09.701620631 +0000 UTC m=+31.593344509,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Feb 17 15:23:12.070380 master-0 kubenswrapper[26425]: > Feb 17 15:23:15.632382 master-0 kubenswrapper[26425]: E0217 15:23:15.632311 26425 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3db03cef_d297_4bf7_8e52_dd0b18882d07.slice/crio-48660aeb121e3afca86e76e0585a7448d6608d882760614af031560341b50acb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27fd92ef556705625a2e4f1011322252.slice/crio-conmon-9e006fd864abfe5f5a71ef2226e6c0a92dd2ca3012b138b3ee0116ddfdb035e0.scope\": RecentStats: unable to find data in memory cache]" Feb 17 15:23:15.679664 master-0 kubenswrapper[26425]: I0217 15:23:15.679613 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/4.log" Feb 17 15:23:15.680513 master-0 kubenswrapper[26425]: I0217 15:23:15.680437 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/3.log" Feb 17 15:23:15.681348 master-0 kubenswrapper[26425]: I0217 15:23:15.681318 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/2.log" Feb 17 15:23:15.682975 master-0 kubenswrapper[26425]: I0217 15:23:15.682920 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/1.log" Feb 17 15:23:15.684997 master-0 kubenswrapper[26425]: I0217 15:23:15.684949 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/0.log" Feb 17 15:23:15.686377 master-0 kubenswrapper[26425]: I0217 15:23:15.686330 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log" Feb 17 15:23:15.686554 master-0 kubenswrapper[26425]: I0217 15:23:15.686395 26425 generic.go:334] "Generic (PLEG): container finished" podID="27fd92ef556705625a2e4f1011322252" containerID="9e006fd864abfe5f5a71ef2226e6c0a92dd2ca3012b138b3ee0116ddfdb035e0" exitCode=255 Feb 17 15:23:15.689569 master-0 kubenswrapper[26425]: I0217 15:23:15.689520 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/5.log" Feb 17 15:23:15.690342 master-0 kubenswrapper[26425]: I0217 15:23:15.690293 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/4.log" Feb 17 15:23:15.691182 master-0 kubenswrapper[26425]: I0217 15:23:15.691102 26425 generic.go:334] "Generic (PLEG): container finished" 
podID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerID="48660aeb121e3afca86e76e0585a7448d6608d882760614af031560341b50acb" exitCode=255 Feb 17 15:23:18.650137 master-0 kubenswrapper[26425]: E0217 15:23:18.650034 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 17 15:23:18.661308 master-0 kubenswrapper[26425]: E0217 15:23:18.661220 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:23:18.661639 master-0 kubenswrapper[26425]: E0217 15:23:18.661582 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Feb 17 15:23:18.663829 master-0 kubenswrapper[26425]: I0217 15:23:18.662830 26425 scope.go:117] "RemoveContainer" containerID="eac7810e63e39b854e1c16b4c3a8efd314bc8ba25306e76c49cd7325f9e050a2" Feb 17 15:23:18.663829 master-0 kubenswrapper[26425]: I0217 15:23:18.662973 26425 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4" Feb 17 15:23:18.663829 master-0 kubenswrapper[26425]: I0217 15:23:18.663033 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:23:18.663829 master-0 kubenswrapper[26425]: I0217 15:23:18.663075 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"7d5bbe35353878dc65758a0ca44e388ed895cebe20ab313a7b7befbc3305a9c8"} Feb 17 15:23:18.663829 master-0 
kubenswrapper[26425]: I0217 15:23:18.663119 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"4fdbb0e3f6f5d5f76b963148da342174a5211018be79b6c667e48791f719b4bf"} Feb 17 15:23:18.663829 master-0 kubenswrapper[26425]: I0217 15:23:18.663153 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" event={"ID":"68954d1e-2147-4465-9817-a3c04cbc19b0","Type":"ContainerDied","Data":"60c37bbe21721a193105735329bdb72d13d00d18b75bdb6198c01ec145d996cc"} Feb 17 15:23:18.663829 master-0 kubenswrapper[26425]: I0217 15:23:18.663185 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerDied","Data":"ef80e89f464f2fddabc8382f1aaea540a66323e02f01f8d399ba62bafcf783cc"} Feb 17 15:23:18.663829 master-0 kubenswrapper[26425]: I0217 15:23:18.663227 26425 scope.go:117] "RemoveContainer" containerID="6eec33455162a27fe10d4874ae93c26e71a281f59a9f0a675a04a71ca4bfd694" Feb 17 15:23:18.665115 master-0 kubenswrapper[26425]: I0217 15:23:18.663910 26425 scope.go:117] "RemoveContainer" containerID="9e006fd864abfe5f5a71ef2226e6c0a92dd2ca3012b138b3ee0116ddfdb035e0" Feb 17 15:23:18.665115 master-0 kubenswrapper[26425]: E0217 15:23:18.664130 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(27fd92ef556705625a2e4f1011322252)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" Feb 17 15:23:18.665115 master-0 kubenswrapper[26425]: I0217 15:23:18.664219 26425 scope.go:117] 
"RemoveContainer" containerID="8444e61e0a1d073b9d65f699d27fabb5a7a087bae3f88d3d6591a10e39f9c52a" Feb 17 15:23:18.665115 master-0 kubenswrapper[26425]: I0217 15:23:18.664340 26425 scope.go:117] "RemoveContainer" containerID="5a16d98391b5a8c270bf73a32b3c23f39afc9a4008644e0c6c54edd2ead6b65e" Feb 17 15:23:18.665378 master-0 kubenswrapper[26425]: I0217 15:23:18.665188 26425 scope.go:117] "RemoveContainer" containerID="531b8b8296ba91a17b09acc34a0c28963a357d302bacf35d4690f0ace03ca6e7" Feb 17 15:23:18.665593 master-0 kubenswrapper[26425]: I0217 15:23:18.665549 26425 scope.go:117] "RemoveContainer" containerID="98474fa2fe73c4db5804824208857baff7e2d6a53dfa4d32d3b7d0f00e99e897" Feb 17 15:23:18.669402 master-0 kubenswrapper[26425]: I0217 15:23:18.669339 26425 scope.go:117] "RemoveContainer" containerID="107a5a083d9624ea5d741fb13e3ff30f66dfa53967ad5245600160a1d329de8e" Feb 17 15:23:18.670276 master-0 kubenswrapper[26425]: I0217 15:23:18.669770 26425 scope.go:117] "RemoveContainer" containerID="cfe1921aeffedf72afcc3d47606c3faa1e4d7dfc111ed225203d93fe2e7c6ebc" Feb 17 15:23:18.670276 master-0 kubenswrapper[26425]: I0217 15:23:18.670034 26425 scope.go:117] "RemoveContainer" containerID="70326ad5a5e1e4f97a5917f73c6ab82e83c52761bca436e8031565f55dee5d69" Feb 17 15:23:18.670724 master-0 kubenswrapper[26425]: I0217 15:23:18.670387 26425 scope.go:117] "RemoveContainer" containerID="9f4ff97f78b895ccae3eae818888447c665df48d3e7e4d485d835422e4f11a07" Feb 17 15:23:18.671623 master-0 kubenswrapper[26425]: I0217 15:23:18.671134 26425 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4" Feb 17 15:23:18.671623 master-0 kubenswrapper[26425]: I0217 15:23:18.671166 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 
15:23:18.671623 master-0 kubenswrapper[26425]: I0217 15:23:18.671197 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:23:18.671862 master-0 kubenswrapper[26425]: I0217 15:23:18.671613 26425 scope.go:117] "RemoveContainer" containerID="d2876b15b465a0d8ebbe9f55288e61087919a08f0d0e689875fd148be01fd265" Feb 17 15:23:18.684177 master-0 kubenswrapper[26425]: I0217 15:23:18.684109 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Feb 17 15:23:18.729001 master-0 kubenswrapper[26425]: I0217 15:23:18.728970 26425 scope.go:117] "RemoveContainer" containerID="9e006fd864abfe5f5a71ef2226e6c0a92dd2ca3012b138b3ee0116ddfdb035e0" Feb 17 15:23:18.729261 master-0 kubenswrapper[26425]: E0217 15:23:18.729235 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(27fd92ef556705625a2e4f1011322252)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" Feb 17 15:23:18.801196 master-0 kubenswrapper[26425]: I0217 15:23:18.800016 26425 scope.go:117] "RemoveContainer" containerID="d8123735c457e17ee5d6dd9977728805a83d4fc587f70de79ff52150d929609f" Feb 17 15:23:18.913419 master-0 kubenswrapper[26425]: I0217 15:23:18.913370 26425 scope.go:117] "RemoveContainer" containerID="ef80e89f464f2fddabc8382f1aaea540a66323e02f01f8d399ba62bafcf783cc" Feb 17 15:23:19.001878 master-0 kubenswrapper[26425]: E0217 15:23:19.001806 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 17 15:23:19.740745 master-0 kubenswrapper[26425]: I0217 15:23:19.740690 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-9fpgj_801742a6-3735-4883-9676-e852dc4173d2/csi-snapshot-controller-operator/2.log" Feb 17 15:23:19.744286 master-0 kubenswrapper[26425]: I0217 15:23:19.744250 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/4.log" Feb 17 15:23:19.750382 master-0 kubenswrapper[26425]: I0217 15:23:19.750292 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/2.log" Feb 17 15:23:19.750834 master-0 kubenswrapper[26425]: I0217 15:23:19.750807 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/1.log" Feb 17 15:23:19.751992 master-0 kubenswrapper[26425]: I0217 15:23:19.751940 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/0.log" Feb 17 15:23:19.755528 master-0 kubenswrapper[26425]: I0217 15:23:19.755361 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-676cd8b9b5-bfm5s_b0f95c87-6a4a-44f2-b6d4-18f167ea430f/service-ca-controller/2.log" Feb 17 15:23:19.758624 master-0 kubenswrapper[26425]: I0217 15:23:19.758603 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-fcnqs_61d90bf3-02df-48c8-b2ec-09a1653b0800/openshift-config-operator/3.log" Feb 17 15:23:19.769847 master-0 kubenswrapper[26425]: I0217 15:23:19.769795 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/6.log" Feb 17 15:23:19.773076 master-0 kubenswrapper[26425]: I0217 15:23:19.773015 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/4.log" Feb 17 15:23:19.776565 master-0 kubenswrapper[26425]: I0217 15:23:19.776541 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-tckph_0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/kube-storage-version-migrator-operator/4.log" Feb 17 15:23:20.041915 master-0 kubenswrapper[26425]: E0217 15:23:20.041725 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:23:20.792896 master-0 kubenswrapper[26425]: I0217 15:23:20.792708 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="56b915f9-7034-4957-846c-ef83087a4288" Feb 17 15:23:20.792896 master-0 kubenswrapper[26425]: I0217 15:23:20.792765 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="56b915f9-7034-4957-846c-ef83087a4288" Feb 17 15:23:30.043369 master-0 kubenswrapper[26425]: E0217 15:23:30.043096 26425 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:23:36.003170 master-0 kubenswrapper[26425]: E0217 15:23:36.003036 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 17 15:23:36.946945 master-0 kubenswrapper[26425]: I0217 15:23:36.946825 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-865765995-c58rq_124ba199-b79a-4e5c-8512-cc0ae50f73c8/oauth-apiserver/0.log" Feb 17 15:23:36.947883 master-0 kubenswrapper[26425]: I0217 15:23:36.947838 26425 generic.go:334] "Generic (PLEG): container finished" podID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerID="da09e4a5b3dba77dbd04689a11e6d73f307ccd2ac6de0aff2e732163788d68b5" exitCode=137 Feb 17 15:23:37.962200 master-0 kubenswrapper[26425]: I0217 15:23:37.962116 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-865765995-c58rq_124ba199-b79a-4e5c-8512-cc0ae50f73c8/oauth-apiserver/0.log" Feb 17 15:23:40.043788 master-0 kubenswrapper[26425]: E0217 15:23:40.043685 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:23:40.043788 master-0 kubenswrapper[26425]: E0217 15:23:40.043759 26425 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:23:46.074050 master-0 kubenswrapper[26425]: E0217 15:23:46.073854 26425 event.go:359] "Server rejected event 
(will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{apiserver-865765995-c58rq.18951195241ab8a5 openshift-oauth-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-oauth-apiserver,Name:apiserver-865765995-c58rq,UID:124ba199-b79a-4e5c-8512-cc0ae50f73c8,APIVersion:v1,ResourceVersion:7699,FieldPath:spec.containers{oauth-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:16:00.693688485 +0000 UTC m=+22.585412333,LastTimestamp:2026-02-17 15:16:09.701712753 +0000 UTC m=+31.593436601,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:23:50.070747 master-0 kubenswrapper[26425]: I0217 15:23:50.070670 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/7.log" Feb 17 15:23:50.071600 master-0 kubenswrapper[26425]: I0217 15:23:50.071537 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/6.log" Feb 17 15:23:50.071715 master-0 kubenswrapper[26425]: I0217 15:23:50.071635 26425 generic.go:334] "Generic (PLEG): container finished" podID="129dba1e-73df-4ea4-96c0-3eba78d568ba" containerID="09f6d5652a91a659b206d9c9a0df8a6f56cc7bbaad4726c94fe735f863803c9f" exitCode=1 Feb 17 15:23:50.203884 master-0 kubenswrapper[26425]: I0217 15:23:50.203750 26425 status_manager.go:851] "Failed to get status for pod" podUID="833c8661-28ca-463a-ac61-6edb961056e3" pod="openshift-marketplace/redhat-operators-wzsv7" err="the server 
was unable to return a response in the time allotted, but may still be processing the request (get pods redhat-operators-wzsv7)"
Feb 17 15:23:51.686687 master-0 kubenswrapper[26425]: I0217 15:23:51.686519 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.41:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:23:51.686687 master-0 kubenswrapper[26425]: I0217 15:23:51.686657 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.41:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:23:52.006688 master-0 kubenswrapper[26425]: I0217 15:23:52.006452 26425 patch_prober.go:28] interesting pod/etcd-operator-67bf55ccdd-pjm6n container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 17 15:23:52.006688 master-0 kubenswrapper[26425]: I0217 15:23:52.006595 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" podUID="f2546ffc-8d0a-4010-a3bd-9e69b6dbea40" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 17 15:23:52.687857 master-0 kubenswrapper[26425]: E0217 15:23:52.687781 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:23:52.688672 master-0 kubenswrapper[26425]: E0217 15:23:52.688067 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.017s"
Feb 17 15:23:52.688672 master-0 kubenswrapper[26425]: I0217 15:23:52.688156 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:23:52.688672 master-0 kubenswrapper[26425]: I0217 15:23:52.688185 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:23:52.688672 master-0 kubenswrapper[26425]: I0217 15:23:52.688204 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" event={"ID":"c6d23570-21d6-4b08-83fc-8b0827c25313","Type":"ContainerDied","Data":"2784ec26a7dc2f4e62d2f496a1d001e9cb435129496d0a04f4f22a42f1a50608"}
Feb 17 15:23:52.688672 master-0 kubenswrapper[26425]: I0217 15:23:52.688238 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:23:52.688672 master-0 kubenswrapper[26425]: I0217 15:23:52.688260 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Feb 17 15:23:52.690403 master-0 kubenswrapper[26425]: I0217 15:23:52.690327 26425 scope.go:117] "RemoveContainer" containerID="9e006fd864abfe5f5a71ef2226e6c0a92dd2ca3012b138b3ee0116ddfdb035e0"
Feb 17 15:23:52.690863 master-0 kubenswrapper[26425]: E0217 15:23:52.690806 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(27fd92ef556705625a2e4f1011322252)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252"
Feb 17 15:23:52.701802 master-0 kubenswrapper[26425]: I0217 15:23:52.701734 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Feb 17 15:23:53.003917 master-0 kubenswrapper[26425]: E0217 15:23:53.003682 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:23:54.796591 master-0 kubenswrapper[26425]: E0217 15:23:54.796510 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 17 15:23:55.130470 master-0 kubenswrapper[26425]: I0217 15:23:55.130409 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="56b915f9-7034-4957-846c-ef83087a4288"
Feb 17 15:23:55.130470 master-0 kubenswrapper[26425]: I0217 15:23:55.130468 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="56b915f9-7034-4957-846c-ef83087a4288"
Feb 17 15:23:56.009717 master-0 kubenswrapper[26425]: E0217 15:23:56.009643 26425 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.321s"
Feb 17 15:23:56.011321 master-0 kubenswrapper[26425]: I0217 15:23:56.011234 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Feb 17 15:23:56.015750 master-0 kubenswrapper[26425]: I0217 15:23:56.015679 26425 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0"
Feb 17 15:23:56.024524 master-0 kubenswrapper[26425]: I0217 15:23:56.021232 26425 scope.go:117] "RemoveContainer" containerID="84eef7d05b8afbba3d23598759f5c3487098f70b42806d1e65f876086638833b"
Feb 17 15:23:56.033677 master-0 kubenswrapper[26425]: I0217 15:23:56.032846 26425 scope.go:117] "RemoveContainer" containerID="f839d4a12bad794234a0f2d851c7efe010f9ebd13ec5cf23cda8e2d322859cb0"
Feb 17 15:23:56.049311 master-0 kubenswrapper[26425]: I0217 15:23:56.049219 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Feb 17 15:23:56.052975 master-0 kubenswrapper[26425]: I0217 15:23:56.052914 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 17 15:23:56.052975 master-0 kubenswrapper[26425]: I0217 15:23:56.052952 26425 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="5037e736-56ca-4fed-b6ed-a9bf030f2d40"
Feb 17 15:23:56.052975 master-0 kubenswrapper[26425]: I0217 15:23:56.052972 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"]
Feb 17 15:23:56.053247 master-0 kubenswrapper[26425]: I0217 15:23:56.052993 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 17 15:23:56.053247 master-0 kubenswrapper[26425]: I0217 15:23:56.053007 26425 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="5037e736-56ca-4fed-b6ed-a9bf030f2d40"
Feb 17 15:23:56.053247 master-0 kubenswrapper[26425]: I0217 15:23:56.053020 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerDied","Data":"7b0bc73a19929878c76a20f8913258b82b0659b1d457e21ec06a82cf6b136195"}
Feb 17 15:23:56.053247 master-0 kubenswrapper[26425]: I0217 15:23:56.053050 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm"
Feb 17 15:23:56.053247 master-0 kubenswrapper[26425]: I0217 15:23:56.053071 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:23:56.053247 master-0 kubenswrapper[26425]: I0217 15:23:56.053092 26425 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764"
Feb 17 15:23:56.053247 master-0 kubenswrapper[26425]: I0217 15:23:56.053102 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:23:56.053247 master-0 kubenswrapper[26425]: I0217 15:23:56.053118 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"]
Feb 17 15:23:56.053247 master-0 kubenswrapper[26425]: I0217 15:23:56.053237 26425 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" containerID="cri-o://21c7989a4696fed50634740602b415534cf6eda5f4caedd9c5df524bd3173387"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.053277 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.053480 26425 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" containerID="cri-o://fbf19d6eb89d3cc981a668b940fbc4bb8dd5e78643b56d6ce5b9a6d44a5d26d8"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.053519 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.053544 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerDied","Data":"426e84564cdde730130665e18be2c56771ee413958b73511ab6a3d57c4226dd6"}
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.053569 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.054386 26425 scope.go:117] "RemoveContainer" containerID="48660aeb121e3afca86e76e0585a7448d6608d882760614af031560341b50acb"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.056437 26425 scope.go:117] "RemoveContainer" containerID="9e006fd864abfe5f5a71ef2226e6c0a92dd2ca3012b138b3ee0116ddfdb035e0"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: E0217 15:23:56.056921 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(27fd92ef556705625a2e4f1011322252)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057030 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057071 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057126 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057154 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" event={"ID":"50c51fe2-32aa-430f-8da0-7cf3b9519131","Type":"ContainerDied","Data":"e78076928670aead1e74a90bfe18141b9748ba5b397af907cd88d6d09ee87278"}
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057189 26425 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057205 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057238 26425 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" containerID="cri-o://b86a492f597b80e76da870edbd5aa60b116fd208f8fcff47303644a8e0039f9b"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057252 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057269 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057286 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057306 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057328 26425 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" containerID="cri-o://532e13d86043cf03e79537b7223ceabdbcdf6100bfe944f35eb6876ce0a808a2"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057342 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057388 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057405 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057422 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7" event={"ID":"b4422676-9a70-4973-8299-7b40a66e9c96","Type":"ContainerDied","Data":"b1199a6a02a6f0066cde070bc688012a60c6dbb64c28d3d555d30add6fcebc27"}
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057446 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" event={"ID":"31e31afc-79d5-46f4-9835-0fd11da9465f","Type":"ContainerDied","Data":"e6582b397c9a839f2d6d03076dc105158f9bf90ad6efb080207cea9f74d8064c"}
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057514 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057587 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057613 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057637 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerDied","Data":"9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0"}
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057674 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" event={"ID":"c6d23570-21d6-4b08-83fc-8b0827c25313","Type":"ContainerStarted","Data":"e21db6dc3c89ccc946938faf692a644d12c8c796e73f855223bea13cf801bb39"}
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057705 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057730 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls" event={"ID":"50c51fe2-32aa-430f-8da0-7cf3b9519131","Type":"ContainerStarted","Data":"607207e00dc12bc841b7df585d738e9d728a6b89c12c4ed654dff61ce4dd9641"}
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057755 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerStarted","Data":"1ececdff7bab767f2143a803e3a947996a5650e5f2fd5556fe6d5dfba06a98e0"}
Feb 17 15:23:56.057828 master-0 kubenswrapper[26425]: I0217 15:23:56.057780 26425 scope.go:117] "RemoveContainer" containerID="ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.060061 26425 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" containerID="cri-o://8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.060086 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.060114 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c" event={"ID":"14723cb7-2d96-42b7-b559-70386c4c841c","Type":"ContainerStarted","Data":"1a7ef830af3debb6b5ebb3a0ef499314de3207affe832249db9ebce352022c43"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.060139 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" event={"ID":"e6d0ea7a-6784-4c13-ad65-6c947dbcf136","Type":"ContainerDied","Data":"fbf19d6eb89d3cc981a668b940fbc4bb8dd5e78643b56d6ce5b9a6d44a5d26d8"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.060170 26425 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.060188 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.060203 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.060218 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"a3b6a099-f52a-428a-af09-d1842ce66891","Type":"ContainerDied","Data":"b65552bcab35fe164881e8ac001f1baa5fa85be7a3b6063a3edbe790f67bf18a"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.060237 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b65552bcab35fe164881e8ac001f1baa5fa85be7a3b6063a3edbe790f67bf18a"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.060254 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-xwftw" event={"ID":"7c6b911d-8db2-48e8-bce9-d4bcde1f55a0","Type":"ContainerStarted","Data":"a99523af44f6e72247992f0a8f1dca88218afb01f6ada6cec2ecc6688f501cc3"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.060489 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.060522 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.061577 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerStarted","Data":"d8123735c457e17ee5d6dd9977728805a83d4fc587f70de79ff52150d929609f"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.061601 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"70e43034-56d0-4fb2-8886-deb00b625686","Type":"ContainerDied","Data":"5922fb8c007ad599e40a5354516760730a0cba79810d4b9259cefea52493ddb5"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.061620 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5922fb8c007ad599e40a5354516760730a0cba79810d4b9259cefea52493ddb5"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.061631 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064379 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064411 26425 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" containerID="cri-o://2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064423 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064436 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064502 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064531 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064544 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064556 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7" event={"ID":"b4422676-9a70-4973-8299-7b40a66e9c96","Type":"ContainerStarted","Data":"b018bb71d8280c1e817868acad0e4faa34a49a7a74aedabab78bb472f176658a"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064576 26425 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" containerID="cri-o://107a5a083d9624ea5d741fb13e3ff30f66dfa53967ad5245600160a1d329de8e"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064586 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064597 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm" event={"ID":"68954d1e-2147-4465-9817-a3c04cbc19b0","Type":"ContainerStarted","Data":"5d8e2ed0ffa3e73fc3a51defa64541383cc0d47add3b4a5d8d8d739258cf06ff"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064697 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064712 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"4f3d983d4ccc46ef27a60861c65b81497fdb8faa3d16615f0e7d839d7e92efb0"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064726 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" event={"ID":"76d3da23-3347-4a5c-b328-d92671897ecc","Type":"ContainerDied","Data":"cd41dc79695d9c0bd45ab8f72b3cf6af9d3af76fe51f2138f55c128fc6c09071"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064741 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064755 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064767 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerDied","Data":"5591dc378b699313a005026d26c38a2b4e16d14b25114eea56b910683dfe3933"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064781 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" event={"ID":"7307f70e-ee5b-4f81-8155-718a02c9efe7","Type":"ContainerDied","Data":"6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064799 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerDied","Data":"21c7989a4696fed50634740602b415534cf6eda5f4caedd9c5df524bd3173387"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064811 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerDied","Data":"586cd7bd6a1810c0723f91d86622f61df00ac6288e65656c44c07b725975aa6c"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064829 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerDied","Data":"d8123735c457e17ee5d6dd9977728805a83d4fc587f70de79ff52150d929609f"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064843 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8123735c457e17ee5d6dd9977728805a83d4fc587f70de79ff52150d929609f"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064853 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerDied","Data":"542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064869 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"fcc22a077c839b880ed50e8a8777440b208baa2388423438583030d85d86b3c2"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064881 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"1c33e5c83bf19251c80a45fef1ba806877a1822bc3dfd8bb9cde774bfb9902e7"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064894 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064905 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"921f7978b36344d181f60d972f8df809901542b7b9ed6db91856803fe316a449"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064916 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"ae582cbd98ce8c9218d682341ba37ebf3194e1792a8c40deb902fb2cc032961b"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064928 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s" event={"ID":"76d3da23-3347-4a5c-b328-d92671897ecc","Type":"ContainerStarted","Data":"5f180195703734f6cdd214605a022e545725acb656a13f3a0f4dac789372d110"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064940 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" event={"ID":"7307f70e-ee5b-4f81-8155-718a02c9efe7","Type":"ContainerStarted","Data":"e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064954 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerStarted","Data":"6eec33455162a27fe10d4874ae93c26e71a281f59a9f0a675a04a71ca4bfd694"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064966 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245" event={"ID":"31e31afc-79d5-46f4-9835-0fd11da9465f","Type":"ContainerStarted","Data":"771c5a31ea6b5ffb3f280c8e2bc6887fe34d2b7e4bd2ba788268fe97b7d19ee9"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064980 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" event={"ID":"e6d0ea7a-6784-4c13-ad65-6c947dbcf136","Type":"ContainerStarted","Data":"36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064993 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" event={"ID":"af61bda0-c7b4-489d-a671-eaa5299942fe","Type":"ContainerDied","Data":"1cfd0ad488c82b15998a7888c979dda06fa4a01761beb9e5d6d35b295908c57a"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065008 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" event={"ID":"65d9f008-7777-48fe-85fe-9d54a7bbcea9","Type":"ContainerDied","Data":"50d813c00eb4ee20e7e4a0770f94362bd89a3e9a431dc0d899c42e55cc8f993e"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065024 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" event={"ID":"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda","Type":"ContainerDied","Data":"afb6acf2a5178774fc88b9857020ac3a9778d76f3535d0f37b9711d4fea47c48"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065039 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerDied","Data":"dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065052 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerDied","Data":"6eec33455162a27fe10d4874ae93c26e71a281f59a9f0a675a04a71ca4bfd694"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065064 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eec33455162a27fe10d4874ae93c26e71a281f59a9f0a675a04a71ca4bfd694"
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065074 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" event={"ID":"ba1306f7-029b-4d43-ba3c-5738da9148d6","Type":"ContainerDied","Data":"4ca2a1481cf68af809d23ae9ad2e79b63336d3be01516204a6730a744e080f72"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065090 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" event={"ID":"c8646e5c-c2ce-48e6-b757-58044769f479","Type":"ContainerDied","Data":"da1858700d4dd348bd1bd6965ebad759d727564f2555dd6372efe783d1762809"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065104 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" event={"ID":"187af679-a062-4f41-81f2-33545f76febf","Type":"ContainerDied","Data":"bfa4241e9cbb9bb3dc9c0b9ecf26410125b91a6e764bdf4080c3457126bf7fdc"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065119 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" event={"ID":"071566ae-a9ae-4aa9-9dc3-38602363be72","Type":"ContainerDied","Data":"4c47c374b75591c1874c057cb8609aad6e1b60685643b76979aadb8e2ca53712"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065133 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" event={"ID":"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb","Type":"ContainerDied","Data":"b86a492f597b80e76da870edbd5aa60b116fd208f8fcff47303644a8e0039f9b"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065146 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerDied","Data":"8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065160 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" event={"ID":"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40","Type":"ContainerDied","Data":"1cf423e31a88736056f1999dcd941a944e9de281f289a68cb4692796b704d37a"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065176 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" event={"ID":"655e4000-0ad4-4349-8c31-e0c952e4be30","Type":"ContainerDied","Data":"a17a8feb8cde32d9f769f1d063cb256b0434b87c2646d32dfbbaf8c558e68235"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065199 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" event={"ID":"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9","Type":"ContainerDied","Data":"8c3de091b26b63488ddbcb0fd31c122edf5d7a587d35c169e265f4e9d06987b5"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065214 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" event={"ID":"7307f70e-ee5b-4f81-8155-718a02c9efe7","Type":"ContainerDied","Data":"e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065227 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b" event={"ID":"33e819b0-5a3f-4c2d-9dc7-8b0231804cdb","Type":"ContainerStarted","Data":"feb940ebfe13e37324b5dd70de3b624d2ec346842b003da7b60c100ca06a6c40"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065240 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd" event={"ID":"187af679-a062-4f41-81f2-33545f76febf","Type":"ContainerStarted","Data":"88d70726e5a6b403e6ab547114e3e1af014a4970cf2dd4a4c6632fa82fa3c344"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065253 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8" event={"ID":"071566ae-a9ae-4aa9-9dc3-38602363be72","Type":"ContainerStarted","Data":"473d966ab12fa493f7e85f53d705a3ca97b893ed268d57093a2c9216689cd89b"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065265 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" event={"ID":"65d9f008-7777-48fe-85fe-9d54a7bbcea9","Type":"ContainerStarted","Data":"cd927d8c4044c2b3e7bb267f90872033be717a1ee13eee2ba57f7b0c0267ae94"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065277 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" event={"ID":"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda","Type":"ContainerStarted","Data":"7556c38c0ce7c0f1754a084197e4432145eeb49bf645ec1bee8c1dc9c0d4a268"}
Feb 17
15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065291 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" event={"ID":"af61bda0-c7b4-489d-a671-eaa5299942fe","Type":"ContainerStarted","Data":"2d1c2b7b658a0650d74a0397ff5fc31a239dc4240eb43135e54d5e15f20a2159"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065302 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065314 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" event={"ID":"4fd2c79d-1e10-4f09-8a33-c66598abc99a","Type":"ContainerDied","Data":"3d42744bc55ffdd0ef5a58be1827ed2cd005681379705cfa9b05d7d0639649ee"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065328 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerDied","Data":"88cbd41012314cb9ee211332196a857cc4bf4c35b6149a5c3069d9a70f29b51a"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065342 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95" event={"ID":"da06cfcb-7c78-4022-96b1-d858853f5adc","Type":"ContainerDied","Data":"d6df48814b566ca92cfa0739d561cf9daa945b55707b972a933430e336c6c185"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065356 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" 
event={"ID":"6c734c89-515e-4ff0-82d1-831ddaf0b99e","Type":"ContainerDied","Data":"590e8fe24ffb416ddbf90918b458930e7fec94c62687bb9e8c21a6053d7a588b"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065370 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" event={"ID":"553d4535-9985-47e2-83ee-8fcfb6035e7b","Type":"ContainerDied","Data":"e5a73638e40c519ad84123382ac658619b9dc2d362942e0bd81784b6f5c9f036"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065385 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" event={"ID":"b0f95c87-6a4a-44f2-b6d4-18f167ea430f","Type":"ContainerDied","Data":"61b2318958d23ebdf6e3bca6a8a2b1ccba3a4aa509b4a359e7fb8a050a5801c3"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065401 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerDied","Data":"532e13d86043cf03e79537b7223ceabdbcdf6100bfe944f35eb6876ce0a808a2"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065416 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerDied","Data":"e6c4e604cd376c77d1ad67bda0d96a444c6b00840760cb0d36d61ad455656dd0"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065429 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" event={"ID":"2b167b7b-2280-4c82-ac78-71c57aebe503","Type":"ContainerDied","Data":"dfe6ffb450b0904261ab46cf367ace40b648e6342b7e1df240b49e249ecafeaa"} Feb 17 15:23:56.067088 master-0 
kubenswrapper[26425]: I0217 15:23:56.065445 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" event={"ID":"e259b5a1-837b-4cde-85f7-cd5781af08bd","Type":"ContainerDied","Data":"0b8262975cf51c409ae05462f6db811ce0d8908ad2a83500403ab60076ef6470"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065465 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5" event={"ID":"ad81b5bd-2f97-4e7e-a12b-746998fa59f2","Type":"ContainerDied","Data":"1ac9a237c052e7fcf84aea4376a51f8bc274e44722f869b5fc32cf99dd2e4eac"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065507 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj" event={"ID":"801742a6-3735-4883-9676-e852dc4173d2","Type":"ContainerDied","Data":"397fbf5ccf990e80c088873d4e4e76e21d50aac3d21cada9a0e4b497c3afd20e"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065527 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerDied","Data":"ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.064826 26425 scope.go:117] "RemoveContainer" containerID="09f6d5652a91a659b206d9c9a0df8a6f56cc7bbaad4726c94fe735f863803c9f" Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: E0217 15:23:56.065673 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller 
pod=csi-snapshot-controller-74b6595c6d-q4766_openshift-cluster-storage-operator(129dba1e-73df-4ea4-96c0-3eba78d568ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" podUID="129dba1e-73df-4ea4-96c0-3eba78d568ba" Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065543 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" event={"ID":"b0f95c87-6a4a-44f2-b6d4-18f167ea430f","Type":"ContainerStarted","Data":"9f4ff97f78b895ccae3eae818888447c665df48d3e7e4d485d835422e4f11a07"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065718 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" event={"ID":"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9","Type":"ContainerStarted","Data":"531b8b8296ba91a17b09acc34a0c28963a357d302bacf35d4690f0ace03ca6e7"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065732 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"ea8fbc46bfc67699ac8dc3657e5080093940cd8742c87627ba3d795ee12841ab"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065743 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f" event={"ID":"ba1306f7-029b-4d43-ba3c-5738da9148d6","Type":"ContainerStarted","Data":"80d5c03257ee806d7afaa6663f6de1a86ed23e0e58cf312df0584f8d701b7bd6"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065755 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"a8988cec11fd110131ab62b289c0ff6085ef1250cc85630f2ae1bdbdb0bbfda2"} Feb 
17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065767 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" event={"ID":"7307f70e-ee5b-4f81-8155-718a02c9efe7","Type":"ContainerStarted","Data":"cfe1921aeffedf72afcc3d47606c3faa1e4d7dfc111ed225203d93fe2e7c6ebc"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065776 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr" event={"ID":"c8646e5c-c2ce-48e6-b757-58044769f479","Type":"ContainerStarted","Data":"861da7cdda9ae5883d778e76cb92f3911a21fc3aa6e631023327aa0ff2f35437"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065786 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj" event={"ID":"801742a6-3735-4883-9676-e852dc4173d2","Type":"ContainerStarted","Data":"70326ad5a5e1e4f97a5917f73c6ab82e83c52761bca436e8031565f55dee5d69"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065795 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerStarted","Data":"5a16d98391b5a8c270bf73a32b3c23f39afc9a4008644e0c6c54edd2ead6b65e"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065804 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerStarted","Data":"8444e61e0a1d073b9d65f699d27fabb5a7a087bae3f88d3d6591a10e39f9c52a"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065813 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" event={"ID":"553d4535-9985-47e2-83ee-8fcfb6035e7b","Type":"ContainerStarted","Data":"d2876b15b465a0d8ebbe9f55288e61087919a08f0d0e689875fd148be01fd265"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065823 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerStarted","Data":"107a5a083d9624ea5d741fb13e3ff30f66dfa53967ad5245600160a1d329de8e"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065832 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95" event={"ID":"da06cfcb-7c78-4022-96b1-d858853f5adc","Type":"ContainerStarted","Data":"5cbbb7096e9dc9e0753c825a99589e2cc5b77dc61fc88ba56450877b30ac2c91"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065842 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerStarted","Data":"2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065852 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" event={"ID":"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40","Type":"ContainerStarted","Data":"84eef7d05b8afbba3d23598759f5c3487098f70b42806d1e65f876086638833b"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065863 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" 
event={"ID":"e259b5a1-837b-4cde-85f7-cd5781af08bd","Type":"ContainerStarted","Data":"f839d4a12bad794234a0f2d851c7efe010f9ebd13ec5cf23cda8e2d322859cb0"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065873 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" event={"ID":"2b167b7b-2280-4c82-ac78-71c57aebe503","Type":"ContainerStarted","Data":"208ec9a373c676cde3764cb7b974029fd7d1923524fde98c291d6b3440136da0"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065883 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz" event={"ID":"655e4000-0ad4-4349-8c31-e0c952e4be30","Type":"ContainerStarted","Data":"ecb95d6c7002988d45c620ca760c6feee1bf859ceffd9c9feb16c9ed0b63f484"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065902 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7" event={"ID":"626c4f7a-59ee-45da-9198-05dd2c42ac42","Type":"ContainerDied","Data":"98474fa2fe73c4db5804824208857baff7e2d6a53dfa4d32d3b7d0f00e99e897"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065913 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" event={"ID":"c97d328c-95b6-4511-aa90-531ab42b9653","Type":"ContainerDied","Data":"eac7810e63e39b854e1c16b4c3a8efd314bc8ba25306e76c49cd7325f9e050a2"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065926 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerDied","Data":"2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a"} Feb 17 15:23:56.067088 master-0 
kubenswrapper[26425]: I0217 15:23:56.065939 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerDied","Data":"8444e61e0a1d073b9d65f699d27fabb5a7a087bae3f88d3d6591a10e39f9c52a"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065952 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" event={"ID":"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda","Type":"ContainerDied","Data":"7556c38c0ce7c0f1754a084197e4432145eeb49bf645ec1bee8c1dc9c0d4a268"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065963 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" event={"ID":"af61bda0-c7b4-489d-a671-eaa5299942fe","Type":"ContainerDied","Data":"2d1c2b7b658a0650d74a0397ff5fc31a239dc4240eb43135e54d5e15f20a2159"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065974 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" event={"ID":"65d9f008-7777-48fe-85fe-9d54a7bbcea9","Type":"ContainerDied","Data":"cd927d8c4044c2b3e7bb267f90872033be717a1ee13eee2ba57f7b0c0267ae94"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065986 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" event={"ID":"7307f70e-ee5b-4f81-8155-718a02c9efe7","Type":"ContainerDied","Data":"cfe1921aeffedf72afcc3d47606c3faa1e4d7dfc111ed225203d93fe2e7c6ebc"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.065996 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"9e006fd864abfe5f5a71ef2226e6c0a92dd2ca3012b138b3ee0116ddfdb035e0"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066006 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerStarted","Data":"48660aeb121e3afca86e76e0585a7448d6608d882760614af031560341b50acb"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066016 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" event={"ID":"6c734c89-515e-4ff0-82d1-831ddaf0b99e","Type":"ContainerStarted","Data":"791f9a484e234a522a9f297e07558fcfa77e1f430f413d3a61b2ecdb9365bba9"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066025 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" event={"ID":"4fd2c79d-1e10-4f09-8a33-c66598abc99a","Type":"ContainerStarted","Data":"9b9529f533442e60085af4a06f581e2d4277aa76e9723aebd9a3b8d15dff4b94"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066035 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" event={"ID":"af61bda0-c7b4-489d-a671-eaa5299942fe","Type":"ContainerStarted","Data":"222f8b32a244117742dff2e2e86d105ffe016267bda9f4735e54e891abb8c398"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066044 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" event={"ID":"65d9f008-7777-48fe-85fe-9d54a7bbcea9","Type":"ContainerStarted","Data":"97c7d1e0883b3fdcedaa0802bb44e77ee85b44b0655f418a7b30b8f804cf346a"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: 
I0217 15:23:56.066053 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5" event={"ID":"ad81b5bd-2f97-4e7e-a12b-746998fa59f2","Type":"ContainerStarted","Data":"ebeee1ed8df2ced7072050f55f36637ce13a597413eb26643d6054d220ad114e"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066071 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" event={"ID":"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda","Type":"ContainerStarted","Data":"dbfe48540d94bc09fa7669965647ccc7762fcc46eaf37642f2147996cadba420"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066081 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" event={"ID":"e259b5a1-837b-4cde-85f7-cd5781af08bd","Type":"ContainerDied","Data":"f839d4a12bad794234a0f2d851c7efe010f9ebd13ec5cf23cda8e2d322859cb0"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066091 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj" event={"ID":"801742a6-3735-4883-9676-e852dc4173d2","Type":"ContainerDied","Data":"70326ad5a5e1e4f97a5917f73c6ab82e83c52761bca436e8031565f55dee5d69"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066104 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerDied","Data":"5a16d98391b5a8c270bf73a32b3c23f39afc9a4008644e0c6c54edd2ead6b65e"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066115 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" 
event={"ID":"b0f95c87-6a4a-44f2-b6d4-18f167ea430f","Type":"ContainerDied","Data":"9f4ff97f78b895ccae3eae818888447c665df48d3e7e4d485d835422e4f11a07"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066125 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" event={"ID":"553d4535-9985-47e2-83ee-8fcfb6035e7b","Type":"ContainerDied","Data":"d2876b15b465a0d8ebbe9f55288e61087919a08f0d0e689875fd148be01fd265"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066136 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerDied","Data":"107a5a083d9624ea5d741fb13e3ff30f66dfa53967ad5245600160a1d329de8e"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066146 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" event={"ID":"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40","Type":"ContainerDied","Data":"84eef7d05b8afbba3d23598759f5c3487098f70b42806d1e65f876086638833b"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066157 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" event={"ID":"2b167b7b-2280-4c82-ac78-71c57aebe503","Type":"ContainerDied","Data":"208ec9a373c676cde3764cb7b974029fd7d1923524fde98c291d6b3440136da0"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066169 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" event={"ID":"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9","Type":"ContainerDied","Data":"531b8b8296ba91a17b09acc34a0c28963a357d302bacf35d4690f0ace03ca6e7"} Feb 17 
15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066179 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerDied","Data":"9e006fd864abfe5f5a71ef2226e6c0a92dd2ca3012b138b3ee0116ddfdb035e0"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066191 26425 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066201 26425 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066207 26425 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066212 26425 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066220 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerDied","Data":"48660aeb121e3afca86e76e0585a7448d6608d882760614af031560341b50acb"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066227 26425 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066234 26425 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066242 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7" event={"ID":"626c4f7a-59ee-45da-9198-05dd2c42ac42","Type":"ContainerStarted","Data":"a1ac763b8e40b6b4b3d47f0332da3eddc3bbcecd72366fd0876f09b9ba38ad67"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066250 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj" event={"ID":"801742a6-3735-4883-9676-e852dc4173d2","Type":"ContainerStarted","Data":"fe04fcd097f96256e9c4b61b737ced33d3e2c41f9bcc010cd372b4b77a37ef2a"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066259 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerStarted","Data":"3dc490922f0075ca3c75faa53bceaced69cacacf6eec849a200da98a82628a1f"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066268 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" event={"ID":"7307f70e-ee5b-4f81-8155-718a02c9efe7","Type":"ContainerStarted","Data":"f78bccb9dbf10a63db28803749c39a2049c40f0571f92dbd73399bd4685d807e"} Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066278 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-676cd8b9b5-bfm5s" event={"ID":"b0f95c87-6a4a-44f2-b6d4-18f167ea430f","Type":"ContainerStarted","Data":"637f8cd48a2819cc2c2d7806162f4e2c529c7123b4f7b79263f372cfe1a6829c"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066287 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerStarted","Data":"843c0766067ae62a5438b56b1dc0dad8c3a9cf03062c4b3a0754c4c08fcb6a21"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066296 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc" event={"ID":"c97d328c-95b6-4511-aa90-531ab42b9653","Type":"ContainerStarted","Data":"d8307ac667fd1cff339e26bef8fdf9cdd6c432c29e1b550af9a8c10d2c0439b5"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066304 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"4c326361c53da9b164d451f6f20b2c2d6b557ffdd4890d790a2120671588d571"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066313 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"7f5fa2fd8b86dcc76f5f8db42ce4a84cea3489354466166fd015fddbbf7830b7"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066323 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerStarted","Data":"09f6d5652a91a659b206d9c9a0df8a6f56cc7bbaad4726c94fe735f863803c9f"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066332 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9" event={"ID":"553d4535-9985-47e2-83ee-8fcfb6035e7b","Type":"ContainerStarted","Data":"3dc295ef54363205b271c7148ad409b471739faa82af6108c888b3b4a2757b1d"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066340 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph" event={"ID":"0c58265d-32fb-4cf0-97d8-6c9a5d37fad9","Type":"ContainerStarted","Data":"a06ce40e155e930f3ecc3356d522e7021da088b7c511776ccb35d4cc3e8cfb63"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066350 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"c1736535ec340986245669656144415b83d6fce53edf1f4eba618ac35a0d45b0"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066359 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"7388a0d819456db7d3915f9832712ae80d721419be93885b9efb4361ba23c41d"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066367 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"c77a612b8158f2e7d529ad97d4435070231683f4215b6e7a9d276923b9c979f6"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066380 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" event={"ID":"124ba199-b79a-4e5c-8512-cc0ae50f73c8","Type":"ContainerDied","Data":"da09e4a5b3dba77dbd04689a11e6d73f307ccd2ac6de0aff2e732163788d68b5"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066392 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" event={"ID":"124ba199-b79a-4e5c-8512-cc0ae50f73c8","Type":"ContainerStarted","Data":"b47c04b8ee2295a924300b9f6a95335f34b1e6a11d1802dd8a39f3a84542eddf"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066402 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerDied","Data":"09f6d5652a91a659b206d9c9a0df8a6f56cc7bbaad4726c94fe735f863803c9f"}
Feb 17 15:23:56.067088 master-0 kubenswrapper[26425]: I0217 15:23:56.066411 26425 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8444e61e0a1d073b9d65f699d27fabb5a7a087bae3f88d3d6591a10e39f9c52a"}
Feb 17 15:23:56.083298 master-0 kubenswrapper[26425]: I0217 15:23:56.068321 26425 scope.go:117] "RemoveContainer" containerID="208ec9a373c676cde3764cb7b974029fd7d1923524fde98c291d6b3440136da0"
Feb 17 15:23:56.083298 master-0 kubenswrapper[26425]: I0217 15:23:56.081840 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 17 15:23:56.150861 master-0 kubenswrapper[26425]: I0217 15:23:56.150530 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:23:56.152959 master-0 kubenswrapper[26425]: I0217 15:23:56.152924 26425 scope.go:117] "RemoveContainer" containerID="9e006fd864abfe5f5a71ef2226e6c0a92dd2ca3012b138b3ee0116ddfdb035e0"
Feb 17 15:23:56.153084 master-0 kubenswrapper[26425]: I0217 15:23:56.153059 26425 scope.go:117] "RemoveContainer" containerID="09f6d5652a91a659b206d9c9a0df8a6f56cc7bbaad4726c94fe735f863803c9f"
Feb 17 15:23:56.153274 master-0 kubenswrapper[26425]: I0217 15:23:56.153223 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:23:56.153351 master-0 kubenswrapper[26425]: E0217 15:23:56.153297 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(27fd92ef556705625a2e4f1011322252)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252"
Feb 17 15:23:56.153506 master-0 kubenswrapper[26425]: I0217 15:23:56.153311 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:23:56.153553 master-0 kubenswrapper[26425]: E0217 15:23:56.153242 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-q4766_openshift-cluster-storage-operator(129dba1e-73df-4ea4-96c0-3eba78d568ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" podUID="129dba1e-73df-4ea4-96c0-3eba78d568ba"
Feb 17 15:23:56.155056 master-0 kubenswrapper[26425]: I0217 15:23:56.155024 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="56b915f9-7034-4957-846c-ef83087a4288"
Feb 17 15:23:56.155056 master-0 kubenswrapper[26425]: I0217 15:23:56.155049 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="56b915f9-7034-4957-846c-ef83087a4288"
Feb 17 15:23:56.204413 master-0 kubenswrapper[26425]: I0217 15:23:56.204369 26425 scope.go:117] "RemoveContainer" containerID="dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970"
Feb 17 15:23:56.248589 master-0 kubenswrapper[26425]: I0217 15:23:56.248475 26425 scope.go:117] "RemoveContainer" containerID="542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764"
Feb 17 15:23:56.296359 master-0 kubenswrapper[26425]: I0217 15:23:56.296327 26425 scope.go:117] "RemoveContainer" containerID="9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0"
Feb 17 15:23:56.328700 master-0 kubenswrapper[26425]: I0217 15:23:56.328661 26425 scope.go:117] "RemoveContainer" containerID="e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690"
Feb 17 15:23:56.385717 master-0 kubenswrapper[26425]: I0217 15:23:56.385667 26425 scope.go:117] "RemoveContainer" containerID="6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099"
Feb 17 15:23:56.409588 master-0 kubenswrapper[26425]: I0217 15:23:56.409539 26425 scope.go:117] "RemoveContainer" containerID="ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4"
Feb 17 15:23:56.410594 master-0 kubenswrapper[26425]: E0217 15:23:56.410136 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4\": container with ID starting with ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4 not found: ID does not exist" containerID="ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4"
Feb 17 15:23:56.410594 master-0 kubenswrapper[26425]: I0217 15:23:56.410188 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4"} err="failed to get container status \"ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4\": rpc error: code = NotFound desc = could not find container \"ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4\": container with ID starting with ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4 not found: ID does not exist"
Feb 17 15:23:56.410594 master-0 kubenswrapper[26425]: I0217 15:23:56.410217 26425 scope.go:117] "RemoveContainer" containerID="dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970"
Feb 17 15:23:56.410594 master-0 kubenswrapper[26425]: E0217 15:23:56.410526 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970\": container with ID starting with dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970 not found: ID does not exist" containerID="dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970"
Feb 17 15:23:56.410594 master-0 kubenswrapper[26425]: I0217 15:23:56.410549 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970"} err="failed to get container status \"dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970\": rpc error: code = NotFound desc = could not find container \"dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970\": container with ID starting with dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970 not found: ID does not exist"
Feb 17 15:23:56.410594 master-0 kubenswrapper[26425]: I0217 15:23:56.410565 26425 scope.go:117] "RemoveContainer" containerID="542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764"
Feb 17 15:23:56.412804 master-0 kubenswrapper[26425]: E0217 15:23:56.412772 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764\": container with ID starting with 542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764 not found: ID does not exist" containerID="542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764"
Feb 17 15:23:56.412940 master-0 kubenswrapper[26425]: I0217 15:23:56.412808 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764"} err="failed to get container status \"542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764\": rpc error: code = NotFound desc = could not find container \"542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764\": container with ID starting with 542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764 not found: ID does not exist"
Feb 17 15:23:56.412940 master-0 kubenswrapper[26425]: I0217 15:23:56.412832 26425 scope.go:117] "RemoveContainer" containerID="9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0"
Feb 17 15:23:56.413130 master-0 kubenswrapper[26425]: E0217 15:23:56.413104 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0\": container with ID starting with 9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0 not found: ID does not exist" containerID="9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0"
Feb 17 15:23:56.413185 master-0 kubenswrapper[26425]: I0217 15:23:56.413130 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0"} err="failed to get container status \"9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0\": rpc error: code = NotFound desc = could not find container \"9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0\": container with ID starting with 9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0 not found: ID does not exist"
Feb 17 15:23:56.413185 master-0 kubenswrapper[26425]: I0217 15:23:56.413146 26425 scope.go:117] "RemoveContainer" containerID="1cfd0ad488c82b15998a7888c979dda06fa4a01761beb9e5d6d35b295908c57a"
Feb 17 15:23:56.445688 master-0 kubenswrapper[26425]: I0217 15:23:56.445659 26425 scope.go:117] "RemoveContainer" containerID="50d813c00eb4ee20e7e4a0770f94362bd89a3e9a431dc0d899c42e55cc8f993e"
Feb 17 15:23:56.472748 master-0 kubenswrapper[26425]: I0217 15:23:56.471911 26425 scope.go:117] "RemoveContainer" containerID="afb6acf2a5178774fc88b9857020ac3a9778d76f3535d0f37b9711d4fea47c48"
Feb 17 15:23:56.497885 master-0 kubenswrapper[26425]: I0217 15:23:56.497690 26425 scope.go:117] "RemoveContainer" containerID="ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4"
Feb 17 15:23:56.498374 master-0 kubenswrapper[26425]: I0217 15:23:56.498325 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4"} err="failed to get container status \"ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4\": rpc error: code = NotFound desc = could not find container \"ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4\": container with ID starting with ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4 not found: ID does not exist"
Feb 17 15:23:56.498510 master-0 kubenswrapper[26425]: I0217 15:23:56.498382 26425 scope.go:117] "RemoveContainer" containerID="dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970"
Feb 17 15:23:56.498890 master-0 kubenswrapper[26425]: I0217 15:23:56.498848 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970"} err="failed to get container status \"dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970\": rpc error: code = NotFound desc = could not find container \"dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970\": container with ID starting with dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970 not found: ID does not exist"
Feb 17 15:23:56.498890 master-0 kubenswrapper[26425]: I0217 15:23:56.498876 26425 scope.go:117] "RemoveContainer" containerID="542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764"
Feb 17 15:23:56.499300 master-0 kubenswrapper[26425]: I0217 15:23:56.499276 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764"} err="failed to get container status \"542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764\": rpc error: code = NotFound desc = could not find container \"542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764\": container with ID starting with 542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764 not found: ID does not exist"
Feb 17 15:23:56.499369 master-0 kubenswrapper[26425]: I0217 15:23:56.499295 26425 scope.go:117] "RemoveContainer" containerID="9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0"
Feb 17 15:23:56.499805 master-0 kubenswrapper[26425]: I0217 15:23:56.499756 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0"} err="failed to get container status \"9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0\": rpc error: code = NotFound desc = could not find container \"9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0\": container with ID starting with 9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0 not found: ID does not exist"
Feb 17 15:23:56.499805 master-0 kubenswrapper[26425]: I0217 15:23:56.499786 26425 scope.go:117] "RemoveContainer" containerID="2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a"
Feb 17 15:23:56.523256 master-0 kubenswrapper[26425]: I0217 15:23:56.523216 26425 scope.go:117] "RemoveContainer" containerID="8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4"
Feb 17 15:23:56.552487 master-0 kubenswrapper[26425]: I0217 15:23:56.552424 26425 scope.go:117] "RemoveContainer" containerID="1cf423e31a88736056f1999dcd941a944e9de281f289a68cb4692796b704d37a"
Feb 17 15:23:56.575867 master-0 kubenswrapper[26425]: I0217 15:23:56.575823 26425 scope.go:117] "RemoveContainer" containerID="8c3de091b26b63488ddbcb0fd31c122edf5d7a587d35c169e265f4e9d06987b5"
Feb 17 15:23:56.599604 master-0 kubenswrapper[26425]: I0217 15:23:56.599570 26425 scope.go:117] "RemoveContainer" containerID="e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690"
Feb 17 15:23:56.601900 master-0 kubenswrapper[26425]: E0217 15:23:56.601847 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690\": container with ID starting with e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690 not found: ID does not exist" containerID="e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690"
Feb 17 15:23:56.601988 master-0 kubenswrapper[26425]: I0217 15:23:56.601909 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690"} err="failed to get container status \"e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690\": rpc error: code = NotFound desc = could not find container \"e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690\": container with ID starting with e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690 not found: ID does not exist"
Feb 17 15:23:56.601988 master-0 kubenswrapper[26425]: I0217 15:23:56.601957 26425 scope.go:117] "RemoveContainer" containerID="6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099"
Feb 17 15:23:56.602489 master-0 kubenswrapper[26425]: E0217 15:23:56.602448 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099\": container with ID starting with 6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099 not found: ID does not exist" containerID="6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099"
Feb 17 15:23:56.602552 master-0 kubenswrapper[26425]: I0217 15:23:56.602489 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099"} err="failed to get container status \"6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099\": rpc error: code = NotFound desc = could not find container \"6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099\": container with ID starting with 6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099 not found: ID does not exist"
Feb 17 15:23:56.602552 master-0 kubenswrapper[26425]: I0217 15:23:56.602502 26425 scope.go:117] "RemoveContainer" containerID="e5a73638e40c519ad84123382ac658619b9dc2d362942e0bd81784b6f5c9f036"
Feb 17 15:23:56.629161 master-0 kubenswrapper[26425]: I0217 15:23:56.628668 26425 scope.go:117] "RemoveContainer" containerID="61b2318958d23ebdf6e3bca6a8a2b1ccba3a4aa509b4a359e7fb8a050a5801c3"
Feb 17 15:23:56.653990 master-0 kubenswrapper[26425]: I0217 15:23:56.653371 26425 scope.go:117] "RemoveContainer" containerID="532e13d86043cf03e79537b7223ceabdbcdf6100bfe944f35eb6876ce0a808a2"
Feb 17 15:23:56.683648 master-0 kubenswrapper[26425]: I0217 15:23:56.683456 26425 scope.go:117] "RemoveContainer" containerID="e6c4e604cd376c77d1ad67bda0d96a444c6b00840760cb0d36d61ad455656dd0"
Feb 17 15:23:56.715125 master-0 kubenswrapper[26425]: I0217 15:23:56.715076 26425 scope.go:117] "RemoveContainer" containerID="dfe6ffb450b0904261ab46cf367ace40b648e6342b7e1df240b49e249ecafeaa"
Feb 17 15:23:56.758863 master-0 kubenswrapper[26425]: I0217 15:23:56.756622 26425 scope.go:117] "RemoveContainer" containerID="0b8262975cf51c409ae05462f6db811ce0d8908ad2a83500403ab60076ef6470"
Feb 17 15:23:56.794831 master-0 kubenswrapper[26425]: I0217 15:23:56.794666 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.794640427 podStartE2EDuration="794.640427ms" podCreationTimestamp="2026-02-17 15:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:23:56.792084485 +0000 UTC m=+498.683808303" watchObservedRunningTime="2026-02-17 15:23:56.794640427 +0000 UTC m=+498.686364285"
Feb 17 15:23:56.797121 master-0 kubenswrapper[26425]: I0217 15:23:56.797074 26425 scope.go:117] "RemoveContainer" containerID="397fbf5ccf990e80c088873d4e4e76e21d50aac3d21cada9a0e4b497c3afd20e"
Feb 17 15:23:56.824288 master-0 kubenswrapper[26425]: I0217 15:23:56.824240 26425 scope.go:117] "RemoveContainer" containerID="ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4"
Feb 17 15:23:56.824918 master-0 kubenswrapper[26425]: I0217 15:23:56.824885 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4"} err="failed to get container status \"ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4\": rpc error: code = NotFound desc = could not find container \"ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4\": container with ID starting with ca463ef7de9494bc6accd84c1d2a52efc66901e37dee8515089357c8779e16b4 not found: ID does not exist"
Feb 17 15:23:56.825061 master-0 kubenswrapper[26425]: I0217 15:23:56.825042 26425 scope.go:117] "RemoveContainer" containerID="dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970"
Feb 17 15:23:56.825623 master-0 kubenswrapper[26425]: I0217 15:23:56.825581 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970"} err="failed to get container status \"dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970\": rpc error: code = NotFound desc = could not find container \"dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970\": container with ID starting with dea56c453bd1d9080845c742d0a82a5e0015c21698600fc1eb93441698908970 not found: ID does not exist"
Feb 17 15:23:56.825755 master-0 kubenswrapper[26425]: I0217 15:23:56.825739 26425 scope.go:117] "RemoveContainer" containerID="542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764"
Feb 17 15:23:56.826283 master-0 kubenswrapper[26425]: I0217 15:23:56.826236 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764"} err="failed to get container status \"542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764\": rpc error: code = NotFound desc = could not find container \"542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764\": container with ID starting with 542e26dd11db463392a268dee2a09680d2bc095b74c259e5abc9fad7a8520764 not found: ID does not exist"
Feb 17 15:23:56.826412 master-0 kubenswrapper[26425]: I0217 15:23:56.826396 26425 scope.go:117] "RemoveContainer" containerID="9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0"
Feb 17 15:23:56.827868 master-0 kubenswrapper[26425]: I0217 15:23:56.827825 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0"} err="failed to get container status \"9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0\": rpc error: code = NotFound desc = could not find container \"9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0\": container with ID starting with 9c6a976f578178dce385b7335c12eeeae1b904fb4cbd297f737f1890f2d2f6d0 not found: ID does not exist"
Feb 17 15:23:56.828004 master-0 kubenswrapper[26425]: I0217 15:23:56.827987 26425 scope.go:117] "RemoveContainer" containerID="2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a"
Feb 17 15:23:56.828500 master-0 kubenswrapper[26425]: E0217 15:23:56.828437 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a\": container with ID starting with 2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a not found: ID does not exist" containerID="2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a"
Feb 17 15:23:56.828650 master-0 kubenswrapper[26425]: I0217 15:23:56.828616 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a"} err="failed to get container status \"2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a\": rpc error: code = NotFound desc = could not find container \"2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a\": container with ID starting with 2d8e9c7cc7ce25b105e16a7e29ac0e038e0555039d2a3d7f7f949a7152aa307a not found: ID does not exist"
Feb 17 15:23:56.828762 master-0 kubenswrapper[26425]: I0217 15:23:56.828747 26425 scope.go:117] "RemoveContainer" containerID="8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4"
Feb 17 15:23:56.829260 master-0 kubenswrapper[26425]: E0217 15:23:56.829219 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4\": container with ID starting with 8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4 not found: ID does not exist" containerID="8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4"
Feb 17 15:23:56.829394 master-0 kubenswrapper[26425]: I0217 15:23:56.829373 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4"} err="failed to get container status \"8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4\": rpc error: code = NotFound desc = could not find container \"8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4\": container with ID starting with 8c91e52c0bffd71b1d402b7407e49fa1b2b0ea7c5b17f48e1de871ae6836ffa4 not found: ID does not exist"
Feb 17 15:23:56.829565 master-0 kubenswrapper[26425]: I0217 15:23:56.829547 26425 scope.go:117] "RemoveContainer" containerID="8444e61e0a1d073b9d65f699d27fabb5a7a087bae3f88d3d6591a10e39f9c52a"
Feb 17 15:23:56.862833 master-0 kubenswrapper[26425]: I0217 15:23:56.862796 26425 scope.go:117] "RemoveContainer" containerID="afb6acf2a5178774fc88b9857020ac3a9778d76f3535d0f37b9711d4fea47c48"
Feb 17 15:23:56.863715 master-0 kubenswrapper[26425]: E0217 15:23:56.863646 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb6acf2a5178774fc88b9857020ac3a9778d76f3535d0f37b9711d4fea47c48\": container with ID starting with afb6acf2a5178774fc88b9857020ac3a9778d76f3535d0f37b9711d4fea47c48 not found: ID does not exist" containerID="afb6acf2a5178774fc88b9857020ac3a9778d76f3535d0f37b9711d4fea47c48"
Feb 17 15:23:56.863952 master-0 kubenswrapper[26425]: I0217 15:23:56.863891 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb6acf2a5178774fc88b9857020ac3a9778d76f3535d0f37b9711d4fea47c48"} err="failed to get container status \"afb6acf2a5178774fc88b9857020ac3a9778d76f3535d0f37b9711d4fea47c48\": rpc error: code = NotFound desc = could not find container \"afb6acf2a5178774fc88b9857020ac3a9778d76f3535d0f37b9711d4fea47c48\": container with ID starting with afb6acf2a5178774fc88b9857020ac3a9778d76f3535d0f37b9711d4fea47c48 not found: ID does not exist"
Feb 17 15:23:56.864144 master-0 kubenswrapper[26425]: I0217 15:23:56.864120 26425 scope.go:117] "RemoveContainer" containerID="1cfd0ad488c82b15998a7888c979dda06fa4a01761beb9e5d6d35b295908c57a"
Feb 17 15:23:56.864994 master-0 kubenswrapper[26425]: E0217 15:23:56.864916 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cfd0ad488c82b15998a7888c979dda06fa4a01761beb9e5d6d35b295908c57a\": container with ID starting with 1cfd0ad488c82b15998a7888c979dda06fa4a01761beb9e5d6d35b295908c57a not found: ID does not exist" containerID="1cfd0ad488c82b15998a7888c979dda06fa4a01761beb9e5d6d35b295908c57a"
Feb 17 15:23:56.865243 master-0 kubenswrapper[26425]: I0217 15:23:56.865188 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cfd0ad488c82b15998a7888c979dda06fa4a01761beb9e5d6d35b295908c57a"} err="failed to get container status \"1cfd0ad488c82b15998a7888c979dda06fa4a01761beb9e5d6d35b295908c57a\": rpc error: code = NotFound desc = could not find container \"1cfd0ad488c82b15998a7888c979dda06fa4a01761beb9e5d6d35b295908c57a\": container with ID starting with 1cfd0ad488c82b15998a7888c979dda06fa4a01761beb9e5d6d35b295908c57a not found: ID does not exist"
Feb 17 15:23:56.865414 master-0 kubenswrapper[26425]: I0217 15:23:56.865388 26425 scope.go:117] "RemoveContainer" containerID="50d813c00eb4ee20e7e4a0770f94362bd89a3e9a431dc0d899c42e55cc8f993e"
Feb 17 15:23:56.866146 master-0 kubenswrapper[26425]: E0217 15:23:56.866052 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50d813c00eb4ee20e7e4a0770f94362bd89a3e9a431dc0d899c42e55cc8f993e\": container with ID starting with 50d813c00eb4ee20e7e4a0770f94362bd89a3e9a431dc0d899c42e55cc8f993e not found: ID does not exist" containerID="50d813c00eb4ee20e7e4a0770f94362bd89a3e9a431dc0d899c42e55cc8f993e"
Feb 17 15:23:56.866426 master-0 kubenswrapper[26425]: I0217 15:23:56.866357 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50d813c00eb4ee20e7e4a0770f94362bd89a3e9a431dc0d899c42e55cc8f993e"} err="failed to get container status \"50d813c00eb4ee20e7e4a0770f94362bd89a3e9a431dc0d899c42e55cc8f993e\": rpc error: code = NotFound desc = could not find container \"50d813c00eb4ee20e7e4a0770f94362bd89a3e9a431dc0d899c42e55cc8f993e\": container with ID starting with 50d813c00eb4ee20e7e4a0770f94362bd89a3e9a431dc0d899c42e55cc8f993e not found: ID does not exist"
Feb 17 15:23:56.866634 master-0 kubenswrapper[26425]: I0217 15:23:56.866609 26425 scope.go:117] "RemoveContainer" containerID="e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690"
Feb 17 15:23:56.867246 master-0 kubenswrapper[26425]: I0217 15:23:56.867190 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690"} err="failed to get container status \"e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690\": rpc error: code = NotFound desc = could not find container \"e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690\": container with ID starting with e60b9af6d044290e2e17466ed96a7b0446f918fbcd458aba5cd6128266f78690 not found: ID does not exist"
Feb 17 15:23:56.867246 master-0 kubenswrapper[26425]: I0217 15:23:56.867247 26425 scope.go:117] "RemoveContainer" containerID="6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099"
Feb 17 15:23:56.867849 master-0 kubenswrapper[26425]: I0217 15:23:56.867759 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099"} err="failed to get container status \"6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099\": rpc error: code = NotFound desc = could not find container \"6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099\": container with ID starting with 6d6f6efe5446b1ad9f59416c6288254af00beb71db12ff92866670ff0b7d9099 not found: ID does not exist"
Feb 17 15:23:56.868096 master-0 kubenswrapper[26425]: I0217 15:23:56.868066 26425 scope.go:117] "RemoveContainer" containerID="0b8262975cf51c409ae05462f6db811ce0d8908ad2a83500403ab60076ef6470"
Feb 17 15:23:56.868719 master-0 kubenswrapper[26425]: E0217 15:23:56.868635 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b8262975cf51c409ae05462f6db811ce0d8908ad2a83500403ab60076ef6470\": container with ID starting with 0b8262975cf51c409ae05462f6db811ce0d8908ad2a83500403ab60076ef6470 not found: ID does not exist" containerID="0b8262975cf51c409ae05462f6db811ce0d8908ad2a83500403ab60076ef6470"
Feb 17 15:23:56.868896 master-0 kubenswrapper[26425]: I0217 15:23:56.868701 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8262975cf51c409ae05462f6db811ce0d8908ad2a83500403ab60076ef6470"} err="failed to get container status \"0b8262975cf51c409ae05462f6db811ce0d8908ad2a83500403ab60076ef6470\": rpc error: code = NotFound desc = could not find container \"0b8262975cf51c409ae05462f6db811ce0d8908ad2a83500403ab60076ef6470\": container with ID starting with 0b8262975cf51c409ae05462f6db811ce0d8908ad2a83500403ab60076ef6470 not found: ID does not exist"
Feb 17 15:23:56.868896 master-0 kubenswrapper[26425]: I0217 15:23:56.868742 26425 scope.go:117] "RemoveContainer" containerID="397fbf5ccf990e80c088873d4e4e76e21d50aac3d21cada9a0e4b497c3afd20e"
Feb 17 15:23:56.869389 master-0 kubenswrapper[26425]: E0217 15:23:56.869283 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"397fbf5ccf990e80c088873d4e4e76e21d50aac3d21cada9a0e4b497c3afd20e\": container with ID starting with 397fbf5ccf990e80c088873d4e4e76e21d50aac3d21cada9a0e4b497c3afd20e not found: ID does not exist" containerID="397fbf5ccf990e80c088873d4e4e76e21d50aac3d21cada9a0e4b497c3afd20e"
Feb 17 15:23:56.869574 master-0 kubenswrapper[26425]: I0217 15:23:56.869372 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"397fbf5ccf990e80c088873d4e4e76e21d50aac3d21cada9a0e4b497c3afd20e"} err="failed to get container status \"397fbf5ccf990e80c088873d4e4e76e21d50aac3d21cada9a0e4b497c3afd20e\": rpc error: code = NotFound desc = could not find container \"397fbf5ccf990e80c088873d4e4e76e21d50aac3d21cada9a0e4b497c3afd20e\": container with ID starting with 397fbf5ccf990e80c088873d4e4e76e21d50aac3d21cada9a0e4b497c3afd20e not found: ID does not exist"
Feb 17 15:23:56.869574 master-0 kubenswrapper[26425]: I0217 15:23:56.869432 26425 scope.go:117] "RemoveContainer" containerID="e6c4e604cd376c77d1ad67bda0d96a444c6b00840760cb0d36d61ad455656dd0"
Feb 17 15:23:56.870113 master-0 kubenswrapper[26425]: E0217 15:23:56.870032 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6c4e604cd376c77d1ad67bda0d96a444c6b00840760cb0d36d61ad455656dd0\": container with ID starting with e6c4e604cd376c77d1ad67bda0d96a444c6b00840760cb0d36d61ad455656dd0 not found: ID does not exist" containerID="e6c4e604cd376c77d1ad67bda0d96a444c6b00840760cb0d36d61ad455656dd0"
Feb 17 15:23:56.870344 master-0 kubenswrapper[26425]: I0217 15:23:56.870288 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6c4e604cd376c77d1ad67bda0d96a444c6b00840760cb0d36d61ad455656dd0"} err="failed to get container status \"e6c4e604cd376c77d1ad67bda0d96a444c6b00840760cb0d36d61ad455656dd0\": rpc error: code = NotFound desc = could not find container \"e6c4e604cd376c77d1ad67bda0d96a444c6b00840760cb0d36d61ad455656dd0\": container with ID starting with e6c4e604cd376c77d1ad67bda0d96a444c6b00840760cb0d36d61ad455656dd0 not found: ID does not exist"
Feb 17 15:23:56.870548 master-0 kubenswrapper[26425]: I0217 15:23:56.870515 26425 scope.go:117] "RemoveContainer" containerID="61b2318958d23ebdf6e3bca6a8a2b1ccba3a4aa509b4a359e7fb8a050a5801c3"
Feb 17 15:23:56.872773 master-0 kubenswrapper[26425]: E0217 15:23:56.872725 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61b2318958d23ebdf6e3bca6a8a2b1ccba3a4aa509b4a359e7fb8a050a5801c3\": container with ID starting with 61b2318958d23ebdf6e3bca6a8a2b1ccba3a4aa509b4a359e7fb8a050a5801c3 not found: ID does not exist" containerID="61b2318958d23ebdf6e3bca6a8a2b1ccba3a4aa509b4a359e7fb8a050a5801c3"
Feb 17 15:23:56.872881 master-0 kubenswrapper[26425]: I0217 15:23:56.872776 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61b2318958d23ebdf6e3bca6a8a2b1ccba3a4aa509b4a359e7fb8a050a5801c3"} err="failed to get container status \"61b2318958d23ebdf6e3bca6a8a2b1ccba3a4aa509b4a359e7fb8a050a5801c3\": rpc error: code = NotFound desc = could not find container \"61b2318958d23ebdf6e3bca6a8a2b1ccba3a4aa509b4a359e7fb8a050a5801c3\": container with ID starting with 61b2318958d23ebdf6e3bca6a8a2b1ccba3a4aa509b4a359e7fb8a050a5801c3 not found: ID does not exist"
Feb 17 15:23:56.872881 master-0 kubenswrapper[26425]: I0217 15:23:56.872812 26425 scope.go:117] "RemoveContainer" containerID="e5a73638e40c519ad84123382ac658619b9dc2d362942e0bd81784b6f5c9f036"
Feb 17 15:23:56.873290 master-0 kubenswrapper[26425]: E0217 15:23:56.873239 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5a73638e40c519ad84123382ac658619b9dc2d362942e0bd81784b6f5c9f036\": container with ID starting with e5a73638e40c519ad84123382ac658619b9dc2d362942e0bd81784b6f5c9f036 not found: ID does not exist" containerID="e5a73638e40c519ad84123382ac658619b9dc2d362942e0bd81784b6f5c9f036"
Feb 17 15:23:56.873290 master-0 kubenswrapper[26425]: I0217 15:23:56.873279 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5a73638e40c519ad84123382ac658619b9dc2d362942e0bd81784b6f5c9f036"} err="failed to get container status \"e5a73638e40c519ad84123382ac658619b9dc2d362942e0bd81784b6f5c9f036\": rpc error: code = NotFound desc = could not find container \"e5a73638e40c519ad84123382ac658619b9dc2d362942e0bd81784b6f5c9f036\": container with ID starting with e5a73638e40c519ad84123382ac658619b9dc2d362942e0bd81784b6f5c9f036 not found: ID does not exist"
Feb 17 15:23:56.873482 master-0 kubenswrapper[26425]: I0217 15:23:56.873302 26425 scope.go:117] "RemoveContainer" containerID="532e13d86043cf03e79537b7223ceabdbcdf6100bfe944f35eb6876ce0a808a2"
Feb 17 15:23:56.873611 master-0 kubenswrapper[26425]: E0217 15:23:56.873576 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find
container \"532e13d86043cf03e79537b7223ceabdbcdf6100bfe944f35eb6876ce0a808a2\": container with ID starting with 532e13d86043cf03e79537b7223ceabdbcdf6100bfe944f35eb6876ce0a808a2 not found: ID does not exist" containerID="532e13d86043cf03e79537b7223ceabdbcdf6100bfe944f35eb6876ce0a808a2" Feb 17 15:23:56.873720 master-0 kubenswrapper[26425]: I0217 15:23:56.873614 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"532e13d86043cf03e79537b7223ceabdbcdf6100bfe944f35eb6876ce0a808a2"} err="failed to get container status \"532e13d86043cf03e79537b7223ceabdbcdf6100bfe944f35eb6876ce0a808a2\": rpc error: code = NotFound desc = could not find container \"532e13d86043cf03e79537b7223ceabdbcdf6100bfe944f35eb6876ce0a808a2\": container with ID starting with 532e13d86043cf03e79537b7223ceabdbcdf6100bfe944f35eb6876ce0a808a2 not found: ID does not exist" Feb 17 15:23:56.873720 master-0 kubenswrapper[26425]: I0217 15:23:56.873638 26425 scope.go:117] "RemoveContainer" containerID="1cf423e31a88736056f1999dcd941a944e9de281f289a68cb4692796b704d37a" Feb 17 15:23:56.874179 master-0 kubenswrapper[26425]: E0217 15:23:56.874141 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf423e31a88736056f1999dcd941a944e9de281f289a68cb4692796b704d37a\": container with ID starting with 1cf423e31a88736056f1999dcd941a944e9de281f289a68cb4692796b704d37a not found: ID does not exist" containerID="1cf423e31a88736056f1999dcd941a944e9de281f289a68cb4692796b704d37a" Feb 17 15:23:56.874292 master-0 kubenswrapper[26425]: I0217 15:23:56.874186 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf423e31a88736056f1999dcd941a944e9de281f289a68cb4692796b704d37a"} err="failed to get container status \"1cf423e31a88736056f1999dcd941a944e9de281f289a68cb4692796b704d37a\": rpc error: code = NotFound desc = could not find container 
\"1cf423e31a88736056f1999dcd941a944e9de281f289a68cb4692796b704d37a\": container with ID starting with 1cf423e31a88736056f1999dcd941a944e9de281f289a68cb4692796b704d37a not found: ID does not exist" Feb 17 15:23:56.874292 master-0 kubenswrapper[26425]: I0217 15:23:56.874217 26425 scope.go:117] "RemoveContainer" containerID="dfe6ffb450b0904261ab46cf367ace40b648e6342b7e1df240b49e249ecafeaa" Feb 17 15:23:56.874796 master-0 kubenswrapper[26425]: E0217 15:23:56.874761 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfe6ffb450b0904261ab46cf367ace40b648e6342b7e1df240b49e249ecafeaa\": container with ID starting with dfe6ffb450b0904261ab46cf367ace40b648e6342b7e1df240b49e249ecafeaa not found: ID does not exist" containerID="dfe6ffb450b0904261ab46cf367ace40b648e6342b7e1df240b49e249ecafeaa" Feb 17 15:23:56.874912 master-0 kubenswrapper[26425]: I0217 15:23:56.874802 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe6ffb450b0904261ab46cf367ace40b648e6342b7e1df240b49e249ecafeaa"} err="failed to get container status \"dfe6ffb450b0904261ab46cf367ace40b648e6342b7e1df240b49e249ecafeaa\": rpc error: code = NotFound desc = could not find container \"dfe6ffb450b0904261ab46cf367ace40b648e6342b7e1df240b49e249ecafeaa\": container with ID starting with dfe6ffb450b0904261ab46cf367ace40b648e6342b7e1df240b49e249ecafeaa not found: ID does not exist" Feb 17 15:23:56.874912 master-0 kubenswrapper[26425]: I0217 15:23:56.874827 26425 scope.go:117] "RemoveContainer" containerID="8c3de091b26b63488ddbcb0fd31c122edf5d7a587d35c169e265f4e9d06987b5" Feb 17 15:23:56.875272 master-0 kubenswrapper[26425]: E0217 15:23:56.875189 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3de091b26b63488ddbcb0fd31c122edf5d7a587d35c169e265f4e9d06987b5\": container with ID starting with 
8c3de091b26b63488ddbcb0fd31c122edf5d7a587d35c169e265f4e9d06987b5 not found: ID does not exist" containerID="8c3de091b26b63488ddbcb0fd31c122edf5d7a587d35c169e265f4e9d06987b5" Feb 17 15:23:56.875371 master-0 kubenswrapper[26425]: I0217 15:23:56.875281 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3de091b26b63488ddbcb0fd31c122edf5d7a587d35c169e265f4e9d06987b5"} err="failed to get container status \"8c3de091b26b63488ddbcb0fd31c122edf5d7a587d35c169e265f4e9d06987b5\": rpc error: code = NotFound desc = could not find container \"8c3de091b26b63488ddbcb0fd31c122edf5d7a587d35c169e265f4e9d06987b5\": container with ID starting with 8c3de091b26b63488ddbcb0fd31c122edf5d7a587d35c169e265f4e9d06987b5 not found: ID does not exist" Feb 17 15:23:57.160933 master-0 kubenswrapper[26425]: I0217 15:23:57.160861 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-wcpf8_2b167b7b-2280-4c82-ac78-71c57aebe503/kube-scheduler-operator-container/3.log" Feb 17 15:23:57.161645 master-0 kubenswrapper[26425]: I0217 15:23:57.161038 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8" event={"ID":"2b167b7b-2280-4c82-ac78-71c57aebe503","Type":"ContainerStarted","Data":"4c66299a6969acc16a9d9fc57151f10b7974bcb9a9d059a07307b38d44618a71"} Feb 17 15:23:57.163882 master-0 kubenswrapper[26425]: I0217 15:23:57.163843 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-9fpgj_801742a6-3735-4883-9676-e852dc4173d2/csi-snapshot-controller-operator/2.log" Feb 17 15:23:57.167047 master-0 kubenswrapper[26425]: I0217 15:23:57.166586 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/3.log" Feb 17 15:23:57.172509 master-0 kubenswrapper[26425]: I0217 15:23:57.171049 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/4.log" Feb 17 15:23:57.180052 master-0 kubenswrapper[26425]: I0217 15:23:57.174427 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-pjm6n_f2546ffc-8d0a-4010-a3bd-9e69b6dbea40/etcd-operator/4.log" Feb 17 15:23:57.180052 master-0 kubenswrapper[26425]: I0217 15:23:57.174657 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n" event={"ID":"f2546ffc-8d0a-4010-a3bd-9e69b6dbea40","Type":"ContainerStarted","Data":"2e48b19e4a81c705b7e77362f8ab339c9e49e25150ae4e8a5003eb4b688da226"} Feb 17 15:23:57.180052 master-0 kubenswrapper[26425]: I0217 15:23:57.177403 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/4.log" Feb 17 15:23:57.180052 master-0 kubenswrapper[26425]: I0217 15:23:57.179974 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/2.log" Feb 17 15:23:57.186670 master-0 kubenswrapper[26425]: I0217 15:23:57.186623 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-676cd8b9b5-bfm5s_b0f95c87-6a4a-44f2-b6d4-18f167ea430f/service-ca-controller/2.log" Feb 17 15:23:57.189850 master-0 kubenswrapper[26425]: I0217 15:23:57.189804 26425 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-tckph_0c58265d-32fb-4cf0-97d8-6c9a5d37fad9/kube-storage-version-migrator-operator/4.log" Feb 17 15:23:57.192688 master-0 kubenswrapper[26425]: I0217 15:23:57.192649 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-fcnqs_61d90bf3-02df-48c8-b2ec-09a1653b0800/openshift-config-operator/3.log" Feb 17 15:23:57.194394 master-0 kubenswrapper[26425]: I0217 15:23:57.194355 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Feb 17 15:23:57.194459 master-0 kubenswrapper[26425]: I0217 15:23:57.194410 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Feb 17 15:23:57.197088 master-0 kubenswrapper[26425]: I0217 15:23:57.197060 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/5.log" Feb 17 15:23:57.197176 master-0 kubenswrapper[26425]: I0217 15:23:57.197146 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerStarted","Data":"5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5"} Feb 17 15:23:57.197506 master-0 
kubenswrapper[26425]: I0217 15:23:57.197450 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" Feb 17 15:23:57.202687 master-0 kubenswrapper[26425]: I0217 15:23:57.202640 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/7.log" Feb 17 15:23:57.203254 master-0 kubenswrapper[26425]: I0217 15:23:57.203225 26425 scope.go:117] "RemoveContainer" containerID="09f6d5652a91a659b206d9c9a0df8a6f56cc7bbaad4726c94fe735f863803c9f" Feb 17 15:23:57.203440 master-0 kubenswrapper[26425]: E0217 15:23:57.203412 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-q4766_openshift-cluster-storage-operator(129dba1e-73df-4ea4-96c0-3eba78d568ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" podUID="129dba1e-73df-4ea4-96c0-3eba78d568ba" Feb 17 15:23:57.205161 master-0 kubenswrapper[26425]: I0217 15:23:57.205128 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-xvzq9_553d4535-9985-47e2-83ee-8fcfb6035e7b/kube-controller-manager-operator/4.log" Feb 17 15:23:57.207516 master-0 kubenswrapper[26425]: I0217 15:23:57.207493 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/4.log" Feb 17 15:23:57.209417 master-0 kubenswrapper[26425]: I0217 15:23:57.209375 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/4.log" Feb 17 15:23:57.211084 master-0 kubenswrapper[26425]: I0217 15:23:57.211046 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/0.log" Feb 17 15:23:57.211789 master-0 kubenswrapper[26425]: I0217 15:23:57.211749 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log" Feb 17 15:23:57.213004 master-0 kubenswrapper[26425]: I0217 15:23:57.212965 26425 scope.go:117] "RemoveContainer" containerID="9e006fd864abfe5f5a71ef2226e6c0a92dd2ca3012b138b3ee0116ddfdb035e0" Feb 17 15:23:57.213466 master-0 kubenswrapper[26425]: E0217 15:23:57.213390 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(27fd92ef556705625a2e4f1011322252)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" Feb 17 15:23:57.214494 master-0 kubenswrapper[26425]: I0217 15:23:57.214416 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-p5mdv_e259b5a1-837b-4cde-85f7-cd5781af08bd/kube-apiserver-operator/4.log" Feb 17 15:23:57.214781 master-0 kubenswrapper[26425]: I0217 15:23:57.214685 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv" 
event={"ID":"e259b5a1-837b-4cde-85f7-cd5781af08bd","Type":"ContainerStarted","Data":"1e8ed5c6ca95af9d91908600a33f9c3b68d63aa447e608a612d92b29b6ee2ac6"} Feb 17 15:23:58.176118 master-0 kubenswrapper[26425]: I0217 15:23:58.176061 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Feb 17 15:23:58.177133 master-0 kubenswrapper[26425]: I0217 15:23:58.177071 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Feb 17 15:23:58.197597 master-0 kubenswrapper[26425]: I0217 15:23:58.197517 26425 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:23:58.197864 master-0 kubenswrapper[26425]: I0217 15:23:58.197825 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:23:58.402906 master-0 kubenswrapper[26425]: I0217 15:23:58.402802 26425 patch_prober.go:28] 
interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Feb 17 15:23:58.402906 master-0 kubenswrapper[26425]: I0217 15:23:58.402889 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Feb 17 15:23:58.467526 master-0 kubenswrapper[26425]: I0217 15:23:58.467269 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:23:58.467860 master-0 kubenswrapper[26425]: E0217 15:23:58.467519 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:23:58.467860 master-0 kubenswrapper[26425]: E0217 15:23:58.467567 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:23:58.467860 master-0 kubenswrapper[26425]: E0217 15:23:58.467653 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. 
No retries permitted until 2026-02-17 15:26:00.467626819 +0000 UTC m=+622.359350667 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:23:59.224848 master-0 kubenswrapper[26425]: I0217 15:23:59.224745 26425 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": context deadline exceeded" start-of-body= Feb 17 15:23:59.224848 master-0 kubenswrapper[26425]: I0217 15:23:59.224846 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": context deadline exceeded" Feb 17 15:24:00.164497 master-0 kubenswrapper[26425]: E0217 15:24:00.164374 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:23:50Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:23:50Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:23:50Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:23:50Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"master-0\": Patch 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:24:00.225996 master-0 kubenswrapper[26425]: I0217 15:24:00.225896 26425 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:24:00.225996 master-0 kubenswrapper[26425]: I0217 15:24:00.225991 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:24:01.176501 master-0 kubenswrapper[26425]: I0217 15:24:01.176403 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Feb 17 15:24:01.176811 master-0 kubenswrapper[26425]: I0217 15:24:01.176577 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Feb 17 15:24:01.403338 master-0 kubenswrapper[26425]: I0217 15:24:01.403246 26425 patch_prober.go:28] 
interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Feb 17 15:24:01.403338 master-0 kubenswrapper[26425]: I0217 15:24:01.403322 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Feb 17 15:24:01.687085 master-0 kubenswrapper[26425]: I0217 15:24:01.686964 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.41:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:24:01.687413 master-0 kubenswrapper[26425]: I0217 15:24:01.687079 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.41:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:24:03.248559 master-0 kubenswrapper[26425]: I0217 15:24:03.248417 26425 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" start-of-body= Feb 17 15:24:03.248559 master-0 kubenswrapper[26425]: 
I0217 15:24:03.248537 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" Feb 17 15:24:03.249626 master-0 kubenswrapper[26425]: I0217 15:24:03.248608 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:24:03.249626 master-0 kubenswrapper[26425]: I0217 15:24:03.249271 26425 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"3dc490922f0075ca3c75faa53bceaced69cacacf6eec849a200da98a82628a1f"} pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Feb 17 15:24:03.249626 master-0 kubenswrapper[26425]: I0217 15:24:03.249332 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" containerID="cri-o://3dc490922f0075ca3c75faa53bceaced69cacacf6eec849a200da98a82628a1f" gracePeriod=30 Feb 17 15:24:04.175766 master-0 kubenswrapper[26425]: I0217 15:24:04.175524 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Feb 17 15:24:04.175766 master-0 kubenswrapper[26425]: I0217 15:24:04.175667 26425 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Feb 17 15:24:04.280330 master-0 kubenswrapper[26425]: I0217 15:24:04.280207 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/5.log" Feb 17 15:24:04.281691 master-0 kubenswrapper[26425]: I0217 15:24:04.280922 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/4.log" Feb 17 15:24:04.281691 master-0 kubenswrapper[26425]: I0217 15:24:04.280985 26425 generic.go:334] "Generic (PLEG): container finished" podID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerID="3dc490922f0075ca3c75faa53bceaced69cacacf6eec849a200da98a82628a1f" exitCode=255 Feb 17 15:24:04.281691 master-0 kubenswrapper[26425]: I0217 15:24:04.281026 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerDied","Data":"3dc490922f0075ca3c75faa53bceaced69cacacf6eec849a200da98a82628a1f"} Feb 17 15:24:04.281691 master-0 kubenswrapper[26425]: I0217 15:24:04.281078 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerStarted","Data":"22539d2581158c802b8841cfdcf177e262bdfa4c577e4e31ddc8ccb2193f1a9b"} Feb 17 15:24:04.281691 master-0 kubenswrapper[26425]: I0217 15:24:04.281109 26425 scope.go:117] "RemoveContainer" 
containerID="5a16d98391b5a8c270bf73a32b3c23f39afc9a4008644e0c6c54edd2ead6b65e"
Feb 17 15:24:04.402269 master-0 kubenswrapper[26425]: I0217 15:24:04.402179 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:24:04.402269 master-0 kubenswrapper[26425]: I0217 15:24:04.402248 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:24:04.409277 master-0 kubenswrapper[26425]: I0217 15:24:04.409219 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:24:04.410212 master-0 kubenswrapper[26425]: I0217 15:24:04.410154 26425 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"843c0766067ae62a5438b56b1dc0dad8c3a9cf03062c4b3a0754c4c08fcb6a21"} pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Feb 17 15:24:04.410325 master-0 kubenswrapper[26425]: I0217 15:24:04.410226 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" containerID="cri-o://843c0766067ae62a5438b56b1dc0dad8c3a9cf03062c4b3a0754c4c08fcb6a21" gracePeriod=30
Feb 17 15:24:04.410325 master-0 kubenswrapper[26425]: I0217 15:24:04.410248 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:24:04.410819 master-0 kubenswrapper[26425]: I0217 15:24:04.410319 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:24:05.292971 master-0 kubenswrapper[26425]: I0217 15:24:05.292843 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-fcnqs_61d90bf3-02df-48c8-b2ec-09a1653b0800/openshift-config-operator/4.log"
Feb 17 15:24:05.293977 master-0 kubenswrapper[26425]: I0217 15:24:05.293935 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-fcnqs_61d90bf3-02df-48c8-b2ec-09a1653b0800/openshift-config-operator/3.log"
Feb 17 15:24:05.294611 master-0 kubenswrapper[26425]: I0217 15:24:05.294561 26425 generic.go:334] "Generic (PLEG): container finished" podID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerID="843c0766067ae62a5438b56b1dc0dad8c3a9cf03062c4b3a0754c4c08fcb6a21" exitCode=255
Feb 17 15:24:05.294669 master-0 kubenswrapper[26425]: I0217 15:24:05.294617 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerDied","Data":"843c0766067ae62a5438b56b1dc0dad8c3a9cf03062c4b3a0754c4c08fcb6a21"}
Feb 17 15:24:05.294710 master-0 kubenswrapper[26425]: I0217 15:24:05.294678 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerStarted","Data":"1bab69104b790b35ac526ac9fe685337d8081ae7c98281de4ab5f43c49949c0f"}
Feb 17 15:24:05.294750 master-0 kubenswrapper[26425]: I0217 15:24:05.294710 26425 scope.go:117] "RemoveContainer" containerID="107a5a083d9624ea5d741fb13e3ff30f66dfa53967ad5245600160a1d329de8e"
Feb 17 15:24:05.294994 master-0 kubenswrapper[26425]: I0217 15:24:05.294952 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:24:05.299846 master-0 kubenswrapper[26425]: I0217 15:24:05.298954 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/5.log"
Feb 17 15:24:06.311820 master-0 kubenswrapper[26425]: I0217 15:24:06.311743 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-fcnqs_61d90bf3-02df-48c8-b2ec-09a1653b0800/openshift-config-operator/4.log"
Feb 17 15:24:07.176105 master-0 kubenswrapper[26425]: I0217 15:24:07.176002 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:24:07.176105 master-0 kubenswrapper[26425]: I0217 15:24:07.176093 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:24:07.402496 master-0 kubenswrapper[26425]: I0217 15:24:07.402347 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:24:07.402496 master-0 kubenswrapper[26425]: I0217 15:24:07.402430 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:24:08.406152 master-0 kubenswrapper[26425]: I0217 15:24:08.406057 26425 scope.go:117] "RemoveContainer" containerID="09f6d5652a91a659b206d9c9a0df8a6f56cc7bbaad4726c94fe735f863803c9f"
Feb 17 15:24:08.407262 master-0 kubenswrapper[26425]: E0217 15:24:08.406428 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-q4766_openshift-cluster-storage-operator(129dba1e-73df-4ea4-96c0-3eba78d568ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" podUID="129dba1e-73df-4ea4-96c0-3eba78d568ba"
Feb 17 15:24:09.934367 master-0 kubenswrapper[26425]: I0217 15:24:09.934241 26425 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:24:09.935136 master-0 kubenswrapper[26425]: I0217 15:24:09.934362 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:24:10.005167 master-0 kubenswrapper[26425]: E0217 15:24:10.005030 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 17 15:24:10.165314 master-0 kubenswrapper[26425]: E0217 15:24:10.165206 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:24:10.176059 master-0 kubenswrapper[26425]: I0217 15:24:10.176002 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:24:10.176201 master-0 kubenswrapper[26425]: I0217 15:24:10.176073 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:24:10.395656 master-0 kubenswrapper[26425]: I0217 15:24:10.395530 26425 scope.go:117] "RemoveContainer" containerID="9e006fd864abfe5f5a71ef2226e6c0a92dd2ca3012b138b3ee0116ddfdb035e0"
Feb 17 15:24:10.396161 master-0 kubenswrapper[26425]: E0217 15:24:10.396089 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(27fd92ef556705625a2e4f1011322252)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252"
Feb 17 15:24:10.402561 master-0 kubenswrapper[26425]: I0217 15:24:10.402505 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:24:10.402754 master-0 kubenswrapper[26425]: I0217 15:24:10.402601 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:24:11.687724 master-0 kubenswrapper[26425]: I0217 15:24:11.687642 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.41:8443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:24:11.688621 master-0 kubenswrapper[26425]: I0217 15:24:11.687741 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.41:8443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:24:13.175798 master-0 kubenswrapper[26425]: I0217 15:24:13.175711 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:24:13.176672 master-0 kubenswrapper[26425]: I0217 15:24:13.175815 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:24:13.403052 master-0 kubenswrapper[26425]: I0217 15:24:13.402969 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:24:13.403424 master-0 kubenswrapper[26425]: I0217 15:24:13.403049 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:24:13.403424 master-0 kubenswrapper[26425]: I0217 15:24:13.403126 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs"
Feb 17 15:24:13.404152 master-0 kubenswrapper[26425]: I0217 15:24:13.404067 26425 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-fcnqs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Feb 17 15:24:13.404304 master-0 kubenswrapper[26425]: I0217 15:24:13.404083 26425 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"1bab69104b790b35ac526ac9fe685337d8081ae7c98281de4ab5f43c49949c0f"} pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Feb 17 15:24:13.404377 master-0 kubenswrapper[26425]: I0217 15:24:13.404293 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" containerID="cri-o://1bab69104b790b35ac526ac9fe685337d8081ae7c98281de4ab5f43c49949c0f" gracePeriod=30
Feb 17 15:24:13.404449 master-0 kubenswrapper[26425]: I0217 15:24:13.404157 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Feb 17 15:24:14.040546 master-0 kubenswrapper[26425]: E0217 15:24:14.039307 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-fcnqs_openshift-config-operator(61d90bf3-02df-48c8-b2ec-09a1653b0800)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800"
Feb 17 15:24:14.383289 master-0 kubenswrapper[26425]: I0217 15:24:14.383191 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-fcnqs_61d90bf3-02df-48c8-b2ec-09a1653b0800/openshift-config-operator/5.log"
Feb 17 15:24:14.384211 master-0 kubenswrapper[26425]: I0217 15:24:14.383855 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-fcnqs_61d90bf3-02df-48c8-b2ec-09a1653b0800/openshift-config-operator/4.log"
Feb 17 15:24:14.384602 master-0 kubenswrapper[26425]: I0217 15:24:14.384523 26425 generic.go:334] "Generic (PLEG): container finished" podID="61d90bf3-02df-48c8-b2ec-09a1653b0800" containerID="1bab69104b790b35ac526ac9fe685337d8081ae7c98281de4ab5f43c49949c0f" exitCode=255
Feb 17 15:24:14.384602 master-0 kubenswrapper[26425]: I0217 15:24:14.384585 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerDied","Data":"1bab69104b790b35ac526ac9fe685337d8081ae7c98281de4ab5f43c49949c0f"}
Feb 17 15:24:14.384789 master-0 kubenswrapper[26425]: I0217 15:24:14.384625 26425 scope.go:117] "RemoveContainer" containerID="843c0766067ae62a5438b56b1dc0dad8c3a9cf03062c4b3a0754c4c08fcb6a21"
Feb 17 15:24:14.385321 master-0 kubenswrapper[26425]: I0217 15:24:14.385258 26425 scope.go:117] "RemoveContainer" containerID="1bab69104b790b35ac526ac9fe685337d8081ae7c98281de4ab5f43c49949c0f"
Feb 17 15:24:14.385723 master-0 kubenswrapper[26425]: E0217 15:24:14.385635 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-fcnqs_openshift-config-operator(61d90bf3-02df-48c8-b2ec-09a1653b0800)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800"
Feb 17 15:24:15.397159 master-0 kubenswrapper[26425]: I0217 15:24:15.397047 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-fcnqs_61d90bf3-02df-48c8-b2ec-09a1653b0800/openshift-config-operator/5.log"
Feb 17 15:24:16.303283 master-0 kubenswrapper[26425]: I0217 15:24:16.303211 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: I0217 15:24:16.310252 26425 patch_prober.go:28] interesting pod/apiserver-865765995-c58rq container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: [+]log ok
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: [+]etcd excluded: ok
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: [+]etcd-readiness excluded: ok
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: [-]informer-sync failed: reason withheld
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: [+]poststarthook/max-in-flight-filter ok
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: [+]shutdown ok
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: readyz check failed
Feb 17 15:24:16.310365 master-0 kubenswrapper[26425]: I0217 15:24:16.310351 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq" podUID="124ba199-b79a-4e5c-8512-cc0ae50f73c8" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:24:16.411091 master-0 kubenswrapper[26425]: I0217 15:24:16.411033 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/5.log"
Feb 17 15:24:16.411895 master-0 kubenswrapper[26425]: I0217 15:24:16.411603 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/4.log"
Feb 17 15:24:16.411895 master-0 kubenswrapper[26425]: I0217 15:24:16.411631 26425 generic.go:334] "Generic (PLEG): container finished" podID="65d9f008-7777-48fe-85fe-9d54a7bbcea9" containerID="97c7d1e0883b3fdcedaa0802bb44e77ee85b44b0655f418a7b30b8f804cf346a" exitCode=255
Feb 17 15:24:16.411895 master-0 kubenswrapper[26425]: I0217 15:24:16.411673 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" event={"ID":"65d9f008-7777-48fe-85fe-9d54a7bbcea9","Type":"ContainerDied","Data":"97c7d1e0883b3fdcedaa0802bb44e77ee85b44b0655f418a7b30b8f804cf346a"}
Feb 17 15:24:16.411895 master-0 kubenswrapper[26425]: I0217 15:24:16.411701 26425 scope.go:117] "RemoveContainer" containerID="cd927d8c4044c2b3e7bb267f90872033be717a1ee13eee2ba57f7b0c0267ae94"
Feb 17 15:24:16.412290 master-0 kubenswrapper[26425]: I0217 15:24:16.412235 26425 scope.go:117] "RemoveContainer" containerID="97c7d1e0883b3fdcedaa0802bb44e77ee85b44b0655f418a7b30b8f804cf346a"
Feb 17 15:24:16.412603 master-0 kubenswrapper[26425]: E0217 15:24:16.412556 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-operator pod=service-ca-operator-5dc4688546-sg75p_openshift-service-ca-operator(65d9f008-7777-48fe-85fe-9d54a7bbcea9)\"" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" podUID="65d9f008-7777-48fe-85fe-9d54a7bbcea9"
Feb 17 15:24:16.414441 master-0 kubenswrapper[26425]: I0217 15:24:16.414381 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-qbmw5_ad81b5bd-2f97-4e7e-a12b-746998fa59f2/cluster-storage-operator/1.log"
Feb 17 15:24:16.415207 master-0 kubenswrapper[26425]: I0217 15:24:16.415117 26425 generic.go:334] "Generic (PLEG): container finished" podID="ad81b5bd-2f97-4e7e-a12b-746998fa59f2" containerID="ebeee1ed8df2ced7072050f55f36637ce13a597413eb26643d6054d220ad114e" exitCode=255
Feb 17 15:24:16.415325 master-0 kubenswrapper[26425]: I0217 15:24:16.415261 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5" event={"ID":"ad81b5bd-2f97-4e7e-a12b-746998fa59f2","Type":"ContainerDied","Data":"ebeee1ed8df2ced7072050f55f36637ce13a597413eb26643d6054d220ad114e"}
Feb 17 15:24:16.416167 master-0 kubenswrapper[26425]: I0217 15:24:16.416116 26425 scope.go:117] "RemoveContainer" containerID="ebeee1ed8df2ced7072050f55f36637ce13a597413eb26643d6054d220ad114e"
Feb 17 15:24:16.416638 master-0 kubenswrapper[26425]: E0217 15:24:16.416585 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-storage-operator pod=cluster-storage-operator-75b869db96-qbmw5_openshift-cluster-storage-operator(ad81b5bd-2f97-4e7e-a12b-746998fa59f2)\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5" podUID="ad81b5bd-2f97-4e7e-a12b-746998fa59f2"
Feb 17 15:24:16.419826 master-0 kubenswrapper[26425]: I0217 15:24:16.419772 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/5.log"
Feb 17 15:24:16.421064 master-0 kubenswrapper[26425]: I0217 15:24:16.420993 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/4.log"
Feb 17 15:24:16.421180 master-0 kubenswrapper[26425]: I0217 15:24:16.421071 26425 generic.go:334] "Generic (PLEG): container finished" podID="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" containerID="dbfe48540d94bc09fa7669965647ccc7762fcc46eaf37642f2147996cadba420" exitCode=255
Feb 17 15:24:16.421180 master-0 kubenswrapper[26425]: I0217 15:24:16.421167 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" event={"ID":"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda","Type":"ContainerDied","Data":"dbfe48540d94bc09fa7669965647ccc7762fcc46eaf37642f2147996cadba420"}
Feb 17 15:24:16.421923 master-0 kubenswrapper[26425]: I0217 15:24:16.421866 26425 scope.go:117] "RemoveContainer" containerID="dbfe48540d94bc09fa7669965647ccc7762fcc46eaf37642f2147996cadba420"
Feb 17 15:24:16.422314 master-0 kubenswrapper[26425]: E0217 15:24:16.422252 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-5f5f84757d-dsfkk_openshift-controller-manager-operator(c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" podUID="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda"
Feb 17 15:24:16.426635 master-0 kubenswrapper[26425]: I0217 15:24:16.426583 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-mzk89_6c734c89-515e-4ff0-82d1-831ddaf0b99e/cluster-olm-operator/3.log"
Feb 17 15:24:16.428181 master-0 kubenswrapper[26425]: I0217 15:24:16.428124 26425 generic.go:334] "Generic (PLEG): container finished" podID="6c734c89-515e-4ff0-82d1-831ddaf0b99e" containerID="791f9a484e234a522a9f297e07558fcfa77e1f430f413d3a61b2ecdb9365bba9" exitCode=255
Feb 17 15:24:16.428322 master-0 kubenswrapper[26425]: I0217 15:24:16.428191 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" event={"ID":"6c734c89-515e-4ff0-82d1-831ddaf0b99e","Type":"ContainerDied","Data":"791f9a484e234a522a9f297e07558fcfa77e1f430f413d3a61b2ecdb9365bba9"}
Feb 17 15:24:16.428725 master-0 kubenswrapper[26425]: I0217 15:24:16.428684 26425 scope.go:117] "RemoveContainer" containerID="791f9a484e234a522a9f297e07558fcfa77e1f430f413d3a61b2ecdb9365bba9"
Feb 17 15:24:16.428962 master-0 kubenswrapper[26425]: E0217 15:24:16.428919 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-olm-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-olm-operator pod=cluster-olm-operator-55b69c6c48-mzk89_openshift-cluster-olm-operator(6c734c89-515e-4ff0-82d1-831ddaf0b99e)\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" podUID="6c734c89-515e-4ff0-82d1-831ddaf0b99e"
Feb 17 15:24:16.430665 master-0 kubenswrapper[26425]: I0217 15:24:16.430617 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-l24cg_4fd2c79d-1e10-4f09-8a33-c66598abc99a/network-operator/3.log"
Feb 17 15:24:16.431176 master-0 kubenswrapper[26425]: I0217 15:24:16.431126 26425 generic.go:334] "Generic (PLEG): container finished" podID="4fd2c79d-1e10-4f09-8a33-c66598abc99a" containerID="9b9529f533442e60085af4a06f581e2d4277aa76e9723aebd9a3b8d15dff4b94" exitCode=255
Feb 17 15:24:16.431321 master-0 kubenswrapper[26425]: I0217 15:24:16.431208 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" event={"ID":"4fd2c79d-1e10-4f09-8a33-c66598abc99a","Type":"ContainerDied","Data":"9b9529f533442e60085af4a06f581e2d4277aa76e9723aebd9a3b8d15dff4b94"}
Feb 17 15:24:16.431845 master-0 kubenswrapper[26425]: I0217 15:24:16.431798 26425 scope.go:117] "RemoveContainer" containerID="9b9529f533442e60085af4a06f581e2d4277aa76e9723aebd9a3b8d15dff4b94"
Feb 17 15:24:16.432130 master-0 kubenswrapper[26425]: E0217 15:24:16.432082 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=network-operator pod=network-operator-6fcf4c966-l24cg_openshift-network-operator(4fd2c79d-1e10-4f09-8a33-c66598abc99a)\"" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" podUID="4fd2c79d-1e10-4f09-8a33-c66598abc99a"
Feb 17 15:24:16.434345 master-0 kubenswrapper[26425]: I0217 15:24:16.434307 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/4.log"
Feb 17 15:24:16.435204 master-0 kubenswrapper[26425]: I0217 15:24:16.435153 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/3.log"
Feb 17 15:24:16.435352 master-0 kubenswrapper[26425]: I0217 15:24:16.435213 26425 generic.go:334] "Generic (PLEG): container finished" podID="af61bda0-c7b4-489d-a671-eaa5299942fe" containerID="222f8b32a244117742dff2e2e86d105ffe016267bda9f4735e54e891abb8c398" exitCode=255
Feb 17 15:24:16.435352 master-0 kubenswrapper[26425]: I0217 15:24:16.435248 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" event={"ID":"af61bda0-c7b4-489d-a671-eaa5299942fe","Type":"ContainerDied","Data":"222f8b32a244117742dff2e2e86d105ffe016267bda9f4735e54e891abb8c398"}
Feb 17 15:24:16.435765 master-0 kubenswrapper[26425]: I0217 15:24:16.435721 26425 scope.go:117] "RemoveContainer" containerID="222f8b32a244117742dff2e2e86d105ffe016267bda9f4735e54e891abb8c398"
Feb 17 15:24:16.436118 master-0 kubenswrapper[26425]: E0217 15:24:16.436068 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-operator pod=openshift-apiserver-operator-6d4655d9cf-5f5g9_openshift-apiserver-operator(af61bda0-c7b4-489d-a671-eaa5299942fe)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" podUID="af61bda0-c7b4-489d-a671-eaa5299942fe"
Feb 17 15:24:16.464505 master-0 kubenswrapper[26425]: I0217 15:24:16.464431 26425 scope.go:117] "RemoveContainer" containerID="1ac9a237c052e7fcf84aea4376a51f8bc274e44722f869b5fc32cf99dd2e4eac"
Feb 17 15:24:16.509202 master-0 kubenswrapper[26425]: I0217 15:24:16.509158 26425 scope.go:117] "RemoveContainer" containerID="7556c38c0ce7c0f1754a084197e4432145eeb49bf645ec1bee8c1dc9c0d4a268"
Feb 17 15:24:16.548241 master-0 kubenswrapper[26425]: I0217 15:24:16.548204 26425 scope.go:117] "RemoveContainer" containerID="590e8fe24ffb416ddbf90918b458930e7fec94c62687bb9e8c21a6053d7a588b"
Feb 17 15:24:16.585756 master-0 kubenswrapper[26425]: I0217 15:24:16.585710 26425 scope.go:117] "RemoveContainer" containerID="3d42744bc55ffdd0ef5a58be1827ed2cd005681379705cfa9b05d7d0639649ee"
Feb 17 15:24:16.622443 master-0 kubenswrapper[26425]: I0217 15:24:16.622403 26425 scope.go:117] "RemoveContainer" containerID="2d1c2b7b658a0650d74a0397ff5fc31a239dc4240eb43135e54d5e15f20a2159"
Feb 17 15:24:17.368787 master-0 kubenswrapper[26425]: I0217 15:24:17.368724 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-865765995-c58rq"
Feb 17 15:24:17.451005 master-0 kubenswrapper[26425]: I0217 15:24:17.450943 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-mzk89_6c734c89-515e-4ff0-82d1-831ddaf0b99e/cluster-olm-operator/3.log"
Feb 17 15:24:17.456620 master-0 kubenswrapper[26425]: I0217 15:24:17.456555 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-l24cg_4fd2c79d-1e10-4f09-8a33-c66598abc99a/network-operator/3.log"
Feb 17 15:24:17.461439 master-0 kubenswrapper[26425]: I0217 15:24:17.461380 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/4.log"
Feb 17 15:24:17.465398 master-0 kubenswrapper[26425]: I0217 15:24:17.465341 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/5.log"
Feb 17 15:24:17.468715 master-0 kubenswrapper[26425]: I0217 15:24:17.468655 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-qbmw5_ad81b5bd-2f97-4e7e-a12b-746998fa59f2/cluster-storage-operator/1.log"
Feb 17 15:24:17.472770 master-0 kubenswrapper[26425]: I0217 15:24:17.472702 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/5.log"
Feb 17 15:24:19.494305 master-0 kubenswrapper[26425]: I0217 15:24:19.494190 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/3.log"
Feb 17 15:24:19.495428 master-0 kubenswrapper[26425]: I0217 15:24:19.494908 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/2.log"
Feb 17 15:24:19.495724 master-0 kubenswrapper[26425]: I0217 15:24:19.495649 26425 generic.go:334] "Generic (PLEG): container finished" podID="7307f70e-ee5b-4f81-8155-718a02c9efe7" containerID="f78bccb9dbf10a63db28803749c39a2049c40f0571f92dbd73399bd4685d807e" exitCode=1
Feb 17 15:24:19.495858 master-0 kubenswrapper[26425]: I0217 15:24:19.495707 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" event={"ID":"7307f70e-ee5b-4f81-8155-718a02c9efe7","Type":"ContainerDied","Data":"f78bccb9dbf10a63db28803749c39a2049c40f0571f92dbd73399bd4685d807e"}
Feb 17 15:24:19.496617 master-0 kubenswrapper[26425]: I0217 15:24:19.496524 26425 scope.go:117] "RemoveContainer" containerID="cfe1921aeffedf72afcc3d47606c3faa1e4d7dfc111ed225203d93fe2e7c6ebc"
Feb 17 15:24:19.496922 master-0 kubenswrapper[26425]: I0217 15:24:19.496717 26425 scope.go:117] "RemoveContainer" containerID="f78bccb9dbf10a63db28803749c39a2049c40f0571f92dbd73399bd4685d807e"
Feb 17 15:24:19.497264 master-0 kubenswrapper[26425]: E0217 15:24:19.497207 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-8qkdw_openshift-machine-api(7307f70e-ee5b-4f81-8155-718a02c9efe7)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" podUID="7307f70e-ee5b-4f81-8155-718a02c9efe7"
Feb 17 15:24:19.933674 master-0 kubenswrapper[26425]: I0217 15:24:19.933565 26425 patch_prober.go:28] interesting pod/route-controller-manager-6978b88779-vp5tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 15:24:19.933674 master-0 kubenswrapper[26425]: I0217 15:24:19.933660 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" probeResult="failure" 
output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:24:20.081722 master-0 kubenswrapper[26425]: E0217 15:24:20.081531 26425 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Feb 17 15:24:20.081722 master-0 kubenswrapper[26425]: &Event{ObjectMeta:{kube-controller-manager-master-0.18951197d72a4b55 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:27fd92ef556705625a2e4f1011322252,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused Feb 17 15:24:20.081722 master-0 kubenswrapper[26425]: body: Feb 17 15:24:20.081722 master-0 kubenswrapper[26425]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:16:12.287765333 +0000 UTC m=+34.179489191,LastTimestamp:2026-02-17 15:16:12.287765333 +0000 UTC m=+34.179489191,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Feb 17 15:24:20.081722 master-0 kubenswrapper[26425]: > Feb 17 15:24:20.395500 master-0 kubenswrapper[26425]: I0217 15:24:20.395417 26425 scope.go:117] "RemoveContainer" containerID="09f6d5652a91a659b206d9c9a0df8a6f56cc7bbaad4726c94fe735f863803c9f" Feb 17 15:24:20.395797 master-0 kubenswrapper[26425]: E0217 15:24:20.395765 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller 
pod=csi-snapshot-controller-74b6595c6d-q4766_openshift-cluster-storage-operator(129dba1e-73df-4ea4-96c0-3eba78d568ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" podUID="129dba1e-73df-4ea4-96c0-3eba78d568ba" Feb 17 15:24:20.509098 master-0 kubenswrapper[26425]: I0217 15:24:20.509048 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/3.log" Feb 17 15:24:25.394973 master-0 kubenswrapper[26425]: I0217 15:24:25.394880 26425 scope.go:117] "RemoveContainer" containerID="9e006fd864abfe5f5a71ef2226e6c0a92dd2ca3012b138b3ee0116ddfdb035e0" Feb 17 15:24:25.395903 master-0 kubenswrapper[26425]: E0217 15:24:25.395402 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(27fd92ef556705625a2e4f1011322252)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" Feb 17 15:24:26.690389 master-0 kubenswrapper[26425]: E0217 15:24:26.690303 26425 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3db03cef_d297_4bf7_8e52_dd0b18882d07.slice/crio-5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3db03cef_d297_4bf7_8e52_dd0b18882d07.slice/crio-conmon-5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5.scope\": RecentStats: unable to find data in memory cache]" Feb 17 15:24:27.006812 master-0 kubenswrapper[26425]: E0217 15:24:27.006646 26425 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Feb 17 15:24:27.395047 master-0 kubenswrapper[26425]: I0217 15:24:27.394997 26425 scope.go:117] "RemoveContainer" containerID="97c7d1e0883b3fdcedaa0802bb44e77ee85b44b0655f418a7b30b8f804cf346a" Feb 17 15:24:27.395387 master-0 kubenswrapper[26425]: E0217 15:24:27.395342 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-operator pod=service-ca-operator-5dc4688546-sg75p_openshift-service-ca-operator(65d9f008-7777-48fe-85fe-9d54a7bbcea9)\"" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" podUID="65d9f008-7777-48fe-85fe-9d54a7bbcea9" Feb 17 15:24:27.395387 master-0 kubenswrapper[26425]: I0217 15:24:27.395369 26425 scope.go:117] "RemoveContainer" containerID="dbfe48540d94bc09fa7669965647ccc7762fcc46eaf37642f2147996cadba420" Feb 17 15:24:27.395668 master-0 kubenswrapper[26425]: I0217 15:24:27.395613 26425 scope.go:117] "RemoveContainer" containerID="1bab69104b790b35ac526ac9fe685337d8081ae7c98281de4ab5f43c49949c0f" Feb 17 15:24:27.395891 master-0 kubenswrapper[26425]: E0217 15:24:27.395716 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-5f5f84757d-dsfkk_openshift-controller-manager-operator(c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" podUID="c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda" Feb 17 15:24:27.395973 master-0 kubenswrapper[26425]: E0217 
15:24:27.395925 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-fcnqs_openshift-config-operator(61d90bf3-02df-48c8-b2ec-09a1653b0800)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" Feb 17 15:24:27.396086 master-0 kubenswrapper[26425]: I0217 15:24:27.396034 26425 scope.go:117] "RemoveContainer" containerID="9b9529f533442e60085af4a06f581e2d4277aa76e9723aebd9a3b8d15dff4b94" Feb 17 15:24:27.578111 master-0 kubenswrapper[26425]: I0217 15:24:27.578047 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/6.log" Feb 17 15:24:27.579025 master-0 kubenswrapper[26425]: I0217 15:24:27.578886 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/5.log" Feb 17 15:24:27.579025 master-0 kubenswrapper[26425]: I0217 15:24:27.578973 26425 generic.go:334] "Generic (PLEG): container finished" podID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerID="5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5" exitCode=255 Feb 17 15:24:27.579241 master-0 kubenswrapper[26425]: I0217 15:24:27.579047 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerDied","Data":"5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5"} Feb 17 15:24:27.579241 master-0 kubenswrapper[26425]: I0217 15:24:27.579147 26425 scope.go:117] "RemoveContainer" 
containerID="48660aeb121e3afca86e76e0585a7448d6608d882760614af031560341b50acb" Feb 17 15:24:27.580063 master-0 kubenswrapper[26425]: I0217 15:24:27.580018 26425 scope.go:117] "RemoveContainer" containerID="5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5" Feb 17 15:24:27.580482 master-0 kubenswrapper[26425]: E0217 15:24:27.580371 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=route-controller-manager pod=route-controller-manager-6978b88779-vp5tv_openshift-route-controller-manager(3db03cef-d297-4bf7-8e52-dd0b18882d07)\"" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" Feb 17 15:24:28.402492 master-0 kubenswrapper[26425]: I0217 15:24:28.402391 26425 scope.go:117] "RemoveContainer" containerID="791f9a484e234a522a9f297e07558fcfa77e1f430f413d3a61b2ecdb9365bba9" Feb 17 15:24:28.404196 master-0 kubenswrapper[26425]: I0217 15:24:28.403501 26425 scope.go:117] "RemoveContainer" containerID="ebeee1ed8df2ced7072050f55f36637ce13a597413eb26643d6054d220ad114e" Feb 17 15:24:28.590001 master-0 kubenswrapper[26425]: I0217 15:24:28.589942 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-l24cg_4fd2c79d-1e10-4f09-8a33-c66598abc99a/network-operator/3.log" Feb 17 15:24:28.590161 master-0 kubenswrapper[26425]: I0217 15:24:28.590047 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-l24cg" event={"ID":"4fd2c79d-1e10-4f09-8a33-c66598abc99a","Type":"ContainerStarted","Data":"66d51fb763499b155e690ff964d6803b5735dc63cfc49c9e961af798af9e6303"} Feb 17 15:24:28.594657 master-0 kubenswrapper[26425]: I0217 15:24:28.594618 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/6.log" Feb 17 15:24:29.605574 master-0 kubenswrapper[26425]: I0217 15:24:29.605512 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-mzk89_6c734c89-515e-4ff0-82d1-831ddaf0b99e/cluster-olm-operator/3.log" Feb 17 15:24:29.606688 master-0 kubenswrapper[26425]: I0217 15:24:29.606635 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89" event={"ID":"6c734c89-515e-4ff0-82d1-831ddaf0b99e","Type":"ContainerStarted","Data":"2912fe58e43806b55041920d990fbcc1b5ca091ae72f71959b2f943f742cd1d2"} Feb 17 15:24:29.610719 master-0 kubenswrapper[26425]: I0217 15:24:29.610650 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-qbmw5_ad81b5bd-2f97-4e7e-a12b-746998fa59f2/cluster-storage-operator/1.log" Feb 17 15:24:29.610865 master-0 kubenswrapper[26425]: I0217 15:24:29.610755 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5" event={"ID":"ad81b5bd-2f97-4e7e-a12b-746998fa59f2","Type":"ContainerStarted","Data":"92257649e37b9336669875a832f514e1c0610107ece9b1fc2aac54df1e10d740"} Feb 17 15:24:30.680500 master-0 kubenswrapper[26425]: I0217 15:24:30.680386 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" Feb 17 15:24:30.681574 master-0 kubenswrapper[26425]: I0217 15:24:30.681334 26425 scope.go:117] "RemoveContainer" containerID="5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5" Feb 17 15:24:30.682166 master-0 kubenswrapper[26425]: E0217 15:24:30.681790 26425 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=route-controller-manager pod=route-controller-manager-6978b88779-vp5tv_openshift-route-controller-manager(3db03cef-d297-4bf7-8e52-dd0b18882d07)\"" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" Feb 17 15:24:31.395446 master-0 kubenswrapper[26425]: I0217 15:24:31.395352 26425 scope.go:117] "RemoveContainer" containerID="222f8b32a244117742dff2e2e86d105ffe016267bda9f4735e54e891abb8c398" Feb 17 15:24:31.395836 master-0 kubenswrapper[26425]: E0217 15:24:31.395773 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-operator pod=openshift-apiserver-operator-6d4655d9cf-5f5g9_openshift-apiserver-operator(af61bda0-c7b4-489d-a671-eaa5299942fe)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" podUID="af61bda0-c7b4-489d-a671-eaa5299942fe" Feb 17 15:24:34.246924 master-0 kubenswrapper[26425]: I0217 15:24:34.246804 26425 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:24:34.246924 master-0 kubenswrapper[26425]: I0217 15:24:34.246906 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:24:34.395384 master-0 kubenswrapper[26425]: I0217 15:24:34.395319 26425 scope.go:117] "RemoveContainer" containerID="09f6d5652a91a659b206d9c9a0df8a6f56cc7bbaad4726c94fe735f863803c9f" Feb 17 15:24:34.395667 master-0 kubenswrapper[26425]: I0217 15:24:34.395406 26425 scope.go:117] "RemoveContainer" containerID="f78bccb9dbf10a63db28803749c39a2049c40f0571f92dbd73399bd4685d807e" Feb 17 15:24:34.395872 master-0 kubenswrapper[26425]: E0217 15:24:34.395825 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-8qkdw_openshift-machine-api(7307f70e-ee5b-4f81-8155-718a02c9efe7)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" podUID="7307f70e-ee5b-4f81-8155-718a02c9efe7" Feb 17 15:24:34.396007 master-0 kubenswrapper[26425]: E0217 15:24:34.395967 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-q4766_openshift-cluster-storage-operator(129dba1e-73df-4ea4-96c0-3eba78d568ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" podUID="129dba1e-73df-4ea4-96c0-3eba78d568ba" Feb 17 15:24:37.395493 master-0 kubenswrapper[26425]: I0217 15:24:37.395379 26425 scope.go:117] "RemoveContainer" containerID="9e006fd864abfe5f5a71ef2226e6c0a92dd2ca3012b138b3ee0116ddfdb035e0" Feb 17 15:24:38.395490 master-0 kubenswrapper[26425]: I0217 15:24:38.395390 26425 scope.go:117] "RemoveContainer" containerID="97c7d1e0883b3fdcedaa0802bb44e77ee85b44b0655f418a7b30b8f804cf346a" Feb 17 15:24:38.693145 master-0 kubenswrapper[26425]: I0217 15:24:38.692946 26425 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/4.log" Feb 17 15:24:38.695742 master-0 kubenswrapper[26425]: I0217 15:24:38.695679 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/0.log" Feb 17 15:24:38.697039 master-0 kubenswrapper[26425]: I0217 15:24:38.696981 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log" Feb 17 15:24:38.697251 master-0 kubenswrapper[26425]: I0217 15:24:38.697196 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"efbfaa97348e69f0a49f3b2b302caecfbc9e14afd8c93921c11c9974de1b8c57"} Feb 17 15:24:38.700823 master-0 kubenswrapper[26425]: I0217 15:24:38.700731 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-sg75p_65d9f008-7777-48fe-85fe-9d54a7bbcea9/service-ca-operator/5.log" Feb 17 15:24:38.700823 master-0 kubenswrapper[26425]: I0217 15:24:38.700822 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p" event={"ID":"65d9f008-7777-48fe-85fe-9d54a7bbcea9","Type":"ContainerStarted","Data":"5150c2b884f061e2fc51bdd89741c37ad383d3de15bccdf987ebba78eba0573f"} Feb 17 15:24:40.395512 master-0 kubenswrapper[26425]: I0217 15:24:40.395421 26425 scope.go:117] "RemoveContainer" containerID="1bab69104b790b35ac526ac9fe685337d8081ae7c98281de4ab5f43c49949c0f" Feb 17 15:24:40.396347 master-0 kubenswrapper[26425]: E0217 15:24:40.395845 
26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-fcnqs_openshift-config-operator(61d90bf3-02df-48c8-b2ec-09a1653b0800)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" Feb 17 15:24:41.395430 master-0 kubenswrapper[26425]: I0217 15:24:41.395316 26425 scope.go:117] "RemoveContainer" containerID="5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5" Feb 17 15:24:41.395819 master-0 kubenswrapper[26425]: E0217 15:24:41.395763 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=route-controller-manager pod=route-controller-manager-6978b88779-vp5tv_openshift-route-controller-manager(3db03cef-d297-4bf7-8e52-dd0b18882d07)\"" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" Feb 17 15:24:42.396480 master-0 kubenswrapper[26425]: I0217 15:24:42.396374 26425 scope.go:117] "RemoveContainer" containerID="dbfe48540d94bc09fa7669965647ccc7762fcc46eaf37642f2147996cadba420" Feb 17 15:24:42.734783 master-0 kubenswrapper[26425]: I0217 15:24:42.734560 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-dsfkk_c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda/openshift-controller-manager-operator/5.log" Feb 17 15:24:42.734783 master-0 kubenswrapper[26425]: I0217 15:24:42.734631 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk" 
event={"ID":"c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda","Type":"ContainerStarted","Data":"7326a7ddf889855e2d71a49bc47d1a3ca7e5418f8e2f2bbb0ff724edafea33ba"} Feb 17 15:24:44.008616 master-0 kubenswrapper[26425]: E0217 15:24:44.008507 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 17 15:24:44.247690 master-0 kubenswrapper[26425]: I0217 15:24:44.247566 26425 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:24:44.247690 master-0 kubenswrapper[26425]: I0217 15:24:44.247672 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:24:44.489547 master-0 kubenswrapper[26425]: I0217 15:24:44.489425 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:24:45.395614 master-0 kubenswrapper[26425]: I0217 15:24:45.395518 26425 scope.go:117] "RemoveContainer" containerID="f78bccb9dbf10a63db28803749c39a2049c40f0571f92dbd73399bd4685d807e" Feb 17 15:24:45.396420 master-0 kubenswrapper[26425]: E0217 15:24:45.396000 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-8qkdw_openshift-machine-api(7307f70e-ee5b-4f81-8155-718a02c9efe7)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" podUID="7307f70e-ee5b-4f81-8155-718a02c9efe7" Feb 17 15:24:45.788123 master-0 kubenswrapper[26425]: I0217 15:24:45.788018 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:24:46.395527 master-0 kubenswrapper[26425]: I0217 15:24:46.395423 26425 scope.go:117] "RemoveContainer" containerID="222f8b32a244117742dff2e2e86d105ffe016267bda9f4735e54e891abb8c398" Feb 17 15:24:46.772179 master-0 kubenswrapper[26425]: I0217 15:24:46.771985 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-5f5g9_af61bda0-c7b4-489d-a671-eaa5299942fe/openshift-apiserver-operator/4.log" Feb 17 15:24:46.772179 master-0 kubenswrapper[26425]: I0217 15:24:46.772108 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9" event={"ID":"af61bda0-c7b4-489d-a671-eaa5299942fe","Type":"ContainerStarted","Data":"237f3188016fee6fdc4fc0449b4e13ebe0f7287cd57f88532988c94139793ad9"} Feb 17 15:24:48.788304 master-0 kubenswrapper[26425]: I0217 15:24:48.787993 26425 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:24:48.788304 master-0 kubenswrapper[26425]: I0217 15:24:48.788155 26425 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:24:49.395828 master-0 kubenswrapper[26425]: I0217 15:24:49.395753 26425 scope.go:117] "RemoveContainer" containerID="09f6d5652a91a659b206d9c9a0df8a6f56cc7bbaad4726c94fe735f863803c9f" Feb 17 15:24:49.396161 master-0 kubenswrapper[26425]: E0217 15:24:49.396104 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-q4766_openshift-cluster-storage-operator(129dba1e-73df-4ea4-96c0-3eba78d568ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" podUID="129dba1e-73df-4ea4-96c0-3eba78d568ba" Feb 17 15:24:53.396446 master-0 kubenswrapper[26425]: I0217 15:24:53.396031 26425 scope.go:117] "RemoveContainer" containerID="1bab69104b790b35ac526ac9fe685337d8081ae7c98281de4ab5f43c49949c0f" Feb 17 15:24:53.396446 master-0 kubenswrapper[26425]: E0217 15:24:53.396387 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-fcnqs_openshift-config-operator(61d90bf3-02df-48c8-b2ec-09a1653b0800)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" podUID="61d90bf3-02df-48c8-b2ec-09a1653b0800" Feb 17 15:24:54.247306 master-0 kubenswrapper[26425]: I0217 15:24:54.247200 26425 patch_prober.go:28] interesting pod/authentication-operator-755d954778-jrdqm container/authentication-operator 
namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:24:54.247306 master-0 kubenswrapper[26425]: I0217 15:24:54.247300 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:24:54.247963 master-0 kubenswrapper[26425]: I0217 15:24:54.247362 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" Feb 17 15:24:54.248601 master-0 kubenswrapper[26425]: I0217 15:24:54.248067 26425 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"22539d2581158c802b8841cfdcf177e262bdfa4c577e4e31ddc8ccb2193f1a9b"} pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Feb 17 15:24:54.248601 master-0 kubenswrapper[26425]: I0217 15:24:54.248130 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" podUID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerName="authentication-operator" containerID="cri-o://22539d2581158c802b8841cfdcf177e262bdfa4c577e4e31ddc8ccb2193f1a9b" gracePeriod=30 Feb 17 15:24:54.851257 master-0 kubenswrapper[26425]: I0217 15:24:54.851131 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/6.log" Feb 17 15:24:54.852876 master-0 kubenswrapper[26425]: I0217 15:24:54.852848 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/5.log" Feb 17 15:24:54.853108 master-0 kubenswrapper[26425]: I0217 15:24:54.853073 26425 generic.go:334] "Generic (PLEG): container finished" podID="e9b3f722-fb34-4ff5-b28b-fc24f43d85ae" containerID="22539d2581158c802b8841cfdcf177e262bdfa4c577e4e31ddc8ccb2193f1a9b" exitCode=255 Feb 17 15:24:54.853252 master-0 kubenswrapper[26425]: I0217 15:24:54.853137 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerDied","Data":"22539d2581158c802b8841cfdcf177e262bdfa4c577e4e31ddc8ccb2193f1a9b"} Feb 17 15:24:54.853428 master-0 kubenswrapper[26425]: I0217 15:24:54.853398 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-jrdqm" event={"ID":"e9b3f722-fb34-4ff5-b28b-fc24f43d85ae","Type":"ContainerStarted","Data":"d77d6525b47bcce1c0870fd0a740f2b6143f8736a7c46ef99bc2425b077f3425"} Feb 17 15:24:54.853617 master-0 kubenswrapper[26425]: I0217 15:24:54.853498 26425 scope.go:117] "RemoveContainer" containerID="3dc490922f0075ca3c75faa53bceaced69cacacf6eec849a200da98a82628a1f" Feb 17 15:24:55.395967 master-0 kubenswrapper[26425]: I0217 15:24:55.395875 26425 scope.go:117] "RemoveContainer" containerID="5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5" Feb 17 15:24:55.396664 master-0 kubenswrapper[26425]: E0217 15:24:55.396623 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=route-controller-manager pod=route-controller-manager-6978b88779-vp5tv_openshift-route-controller-manager(3db03cef-d297-4bf7-8e52-dd0b18882d07)\"" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" Feb 17 15:24:55.865819 master-0 kubenswrapper[26425]: I0217 15:24:55.865734 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-jrdqm_e9b3f722-fb34-4ff5-b28b-fc24f43d85ae/authentication-operator/6.log" Feb 17 15:24:56.954734 master-0 kubenswrapper[26425]: I0217 15:24:56.954678 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:24:56.963590 master-0 kubenswrapper[26425]: I0217 15:24:56.963521 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:25:00.395631 master-0 kubenswrapper[26425]: I0217 15:25:00.395543 26425 scope.go:117] "RemoveContainer" containerID="f78bccb9dbf10a63db28803749c39a2049c40f0571f92dbd73399bd4685d807e" Feb 17 15:25:00.907917 master-0 kubenswrapper[26425]: I0217 15:25:00.907817 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-8qkdw_7307f70e-ee5b-4f81-8155-718a02c9efe7/cluster-baremetal-operator/3.log" Feb 17 15:25:00.908600 master-0 kubenswrapper[26425]: I0217 15:25:00.908504 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw" event={"ID":"7307f70e-ee5b-4f81-8155-718a02c9efe7","Type":"ContainerStarted","Data":"bce67cd52b7327713795367e7bbbac72cbcc5eabaf18d4c6696e608a1504ae15"} Feb 17 15:25:03.396133 master-0 kubenswrapper[26425]: I0217 
15:25:03.396054 26425 scope.go:117] "RemoveContainer" containerID="09f6d5652a91a659b206d9c9a0df8a6f56cc7bbaad4726c94fe735f863803c9f" Feb 17 15:25:03.397090 master-0 kubenswrapper[26425]: E0217 15:25:03.396553 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-q4766_openshift-cluster-storage-operator(129dba1e-73df-4ea4-96c0-3eba78d568ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" podUID="129dba1e-73df-4ea4-96c0-3eba78d568ba" Feb 17 15:25:07.582310 master-0 kubenswrapper[26425]: E0217 15:25:07.582180 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-aaauri1gstf68: secret "metrics-server-aaauri1gstf68" not found Feb 17 15:25:07.582310 master-0 kubenswrapper[26425]: E0217 15:25:07.582316 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:25:08.082286207 +0000 UTC m=+569.974010065 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : secret "metrics-server-aaauri1gstf68" not found Feb 17 15:25:08.092991 master-0 kubenswrapper[26425]: E0217 15:25:08.092881 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-aaauri1gstf68: secret "metrics-server-aaauri1gstf68" not found Feb 17 15:25:08.092991 master-0 kubenswrapper[26425]: E0217 15:25:08.092985 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:25:09.092964626 +0000 UTC m=+570.984688454 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : secret "metrics-server-aaauri1gstf68" not found Feb 17 15:25:08.401320 master-0 kubenswrapper[26425]: I0217 15:25:08.401193 26425 scope.go:117] "RemoveContainer" containerID="1bab69104b790b35ac526ac9fe685337d8081ae7c98281de4ab5f43c49949c0f" Feb 17 15:25:08.983850 master-0 kubenswrapper[26425]: I0217 15:25:08.983758 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-fcnqs_61d90bf3-02df-48c8-b2ec-09a1653b0800/openshift-config-operator/5.log" Feb 17 15:25:08.984714 master-0 kubenswrapper[26425]: I0217 15:25:08.984555 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" 
event={"ID":"61d90bf3-02df-48c8-b2ec-09a1653b0800","Type":"ContainerStarted","Data":"f647914197c44c85df45ea9adb11eac6085a89027f3bf33259ba377c10dc06b0"} Feb 17 15:25:08.984945 master-0 kubenswrapper[26425]: I0217 15:25:08.984887 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:25:09.109428 master-0 kubenswrapper[26425]: E0217 15:25:09.109331 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-aaauri1gstf68: secret "metrics-server-aaauri1gstf68" not found Feb 17 15:25:09.109734 master-0 kubenswrapper[26425]: E0217 15:25:09.109447 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:25:11.109424191 +0000 UTC m=+573.001148049 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : secret "metrics-server-aaauri1gstf68" not found Feb 17 15:25:10.395344 master-0 kubenswrapper[26425]: I0217 15:25:10.395270 26425 scope.go:117] "RemoveContainer" containerID="5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5" Feb 17 15:25:11.005563 master-0 kubenswrapper[26425]: I0217 15:25:11.005513 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/6.log" Feb 17 15:25:11.005790 master-0 kubenswrapper[26425]: I0217 15:25:11.005577 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" 
event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerStarted","Data":"6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e"} Feb 17 15:25:11.006050 master-0 kubenswrapper[26425]: I0217 15:25:11.006005 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" Feb 17 15:25:11.140872 master-0 kubenswrapper[26425]: E0217 15:25:11.140794 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-aaauri1gstf68: secret "metrics-server-aaauri1gstf68" not found Feb 17 15:25:11.140872 master-0 kubenswrapper[26425]: E0217 15:25:11.140879 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:25:15.140858795 +0000 UTC m=+577.032582613 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : secret "metrics-server-aaauri1gstf68" not found Feb 17 15:25:11.740634 master-0 kubenswrapper[26425]: I0217 15:25:11.740575 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" Feb 17 15:25:13.180039 master-0 kubenswrapper[26425]: I0217 15:25:13.179964 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs" Feb 17 15:25:14.250537 master-0 kubenswrapper[26425]: I0217 15:25:14.250485 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-f94977f65-sgf5z"] Feb 17 15:25:14.251075 master-0 kubenswrapper[26425]: I0217 
15:25:14.250686 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" podUID="7c393109-8c98-4a73-be1a-608038e5d094" containerName="metrics-server" containerID="cri-o://f7aa6f291b153a21a0df697b856ff7c2ab858d591159344f0d74c325321910e3" gracePeriod=170 Feb 17 15:25:15.203706 master-0 kubenswrapper[26425]: E0217 15:25:15.203660 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-aaauri1gstf68: secret "metrics-server-aaauri1gstf68" not found Feb 17 15:25:15.204019 master-0 kubenswrapper[26425]: E0217 15:25:15.204002 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:25:23.203981259 +0000 UTC m=+585.095705087 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : secret "metrics-server-aaauri1gstf68" not found Feb 17 15:25:15.799001 master-0 kubenswrapper[26425]: I0217 15:25:15.798945 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"] Feb 17 15:25:15.799776 master-0 kubenswrapper[26425]: I0217 15:25:15.799284 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" podUID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerName="kube-rbac-proxy" containerID="cri-o://17ed6cea7264bf0a4aee500a4d88ade7ea2777ab27aa21f615eafa009fe91ae7" gracePeriod=30 Feb 17 15:25:15.800019 master-0 kubenswrapper[26425]: I0217 15:25:15.799942 26425 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" podUID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerName="reload" containerID="cri-o://814f18394f5d77f7d7fe55ef10f2d92ca387fc05357af1309dc48dc0fb7b66a7" gracePeriod=30 Feb 17 15:25:15.801014 master-0 kubenswrapper[26425]: I0217 15:25:15.800376 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" podUID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerName="telemeter-client" containerID="cri-o://d8f5d8e5601a1e0de83d6a922182ed26b2fc744ebae08cdcc7739ae26257bd02" gracePeriod=30 Feb 17 15:25:16.063395 master-0 kubenswrapper[26425]: I0217 15:25:16.063302 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7fbdcd9689-spqtt_8379aee6-f810-4e5f-b209-8f6cb5f87df0/telemeter-client/0.log" Feb 17 15:25:16.063580 master-0 kubenswrapper[26425]: I0217 15:25:16.063423 26425 generic.go:334] "Generic (PLEG): container finished" podID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerID="17ed6cea7264bf0a4aee500a4d88ade7ea2777ab27aa21f615eafa009fe91ae7" exitCode=0 Feb 17 15:25:16.063580 master-0 kubenswrapper[26425]: I0217 15:25:16.063454 26425 generic.go:334] "Generic (PLEG): container finished" podID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerID="814f18394f5d77f7d7fe55ef10f2d92ca387fc05357af1309dc48dc0fb7b66a7" exitCode=0 Feb 17 15:25:16.063580 master-0 kubenswrapper[26425]: I0217 15:25:16.063508 26425 generic.go:334] "Generic (PLEG): container finished" podID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerID="d8f5d8e5601a1e0de83d6a922182ed26b2fc744ebae08cdcc7739ae26257bd02" exitCode=2 Feb 17 15:25:16.063580 master-0 kubenswrapper[26425]: I0217 15:25:16.063527 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" 
event={"ID":"8379aee6-f810-4e5f-b209-8f6cb5f87df0","Type":"ContainerDied","Data":"17ed6cea7264bf0a4aee500a4d88ade7ea2777ab27aa21f615eafa009fe91ae7"} Feb 17 15:25:16.063749 master-0 kubenswrapper[26425]: I0217 15:25:16.063623 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" event={"ID":"8379aee6-f810-4e5f-b209-8f6cb5f87df0","Type":"ContainerDied","Data":"814f18394f5d77f7d7fe55ef10f2d92ca387fc05357af1309dc48dc0fb7b66a7"} Feb 17 15:25:16.063749 master-0 kubenswrapper[26425]: I0217 15:25:16.063663 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" event={"ID":"8379aee6-f810-4e5f-b209-8f6cb5f87df0","Type":"ContainerDied","Data":"d8f5d8e5601a1e0de83d6a922182ed26b2fc744ebae08cdcc7739ae26257bd02"} Feb 17 15:25:16.397355 master-0 kubenswrapper[26425]: I0217 15:25:16.397271 26425 scope.go:117] "RemoveContainer" containerID="09f6d5652a91a659b206d9c9a0df8a6f56cc7bbaad4726c94fe735f863803c9f" Feb 17 15:25:16.418085 master-0 kubenswrapper[26425]: I0217 15:25:16.418016 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7fbdcd9689-spqtt_8379aee6-f810-4e5f-b209-8f6cb5f87df0/telemeter-client/0.log" Feb 17 15:25:16.418399 master-0 kubenswrapper[26425]: I0217 15:25:16.418146 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:25:16.525038 master-0 kubenswrapper[26425]: I0217 15:25:16.524926 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj92w\" (UniqueName: \"kubernetes.io/projected/8379aee6-f810-4e5f-b209-8f6cb5f87df0-kube-api-access-sj92w\") pod \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " Feb 17 15:25:16.525038 master-0 kubenswrapper[26425]: I0217 15:25:16.525000 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle\") pod \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " Feb 17 15:25:16.525430 master-0 kubenswrapper[26425]: I0217 15:25:16.525071 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls\") pod \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " Feb 17 15:25:16.525430 master-0 kubenswrapper[26425]: I0217 15:25:16.525146 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle\") pod \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " Feb 17 15:25:16.526133 master-0 kubenswrapper[26425]: I0217 15:25:16.525997 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client\") pod \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\" (UID: 
\"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " Feb 17 15:25:16.526133 master-0 kubenswrapper[26425]: I0217 15:25:16.526090 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle" (OuterVolumeSpecName: "telemeter-trusted-ca-bundle") pod "8379aee6-f810-4e5f-b209-8f6cb5f87df0" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0"). InnerVolumeSpecName "telemeter-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:25:16.526133 master-0 kubenswrapper[26425]: I0217 15:25:16.526108 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle" (OuterVolumeSpecName: "serving-certs-ca-bundle") pod "8379aee6-f810-4e5f-b209-8f6cb5f87df0" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0"). InnerVolumeSpecName "serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:25:16.526438 master-0 kubenswrapper[26425]: I0217 15:25:16.526294 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-metrics-client-ca\") pod \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " Feb 17 15:25:16.526651 master-0 kubenswrapper[26425]: I0217 15:25:16.526537 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls\") pod \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " Feb 17 15:25:16.526758 master-0 kubenswrapper[26425]: I0217 15:25:16.526715 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config\") pod \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\" (UID: \"8379aee6-f810-4e5f-b209-8f6cb5f87df0\") " Feb 17 15:25:16.527409 master-0 kubenswrapper[26425]: I0217 15:25:16.527357 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "8379aee6-f810-4e5f-b209-8f6cb5f87df0" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:25:16.528109 master-0 kubenswrapper[26425]: I0217 15:25:16.528064 26425 reconciler_common.go:293] "Volume detached for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:25:16.528109 master-0 kubenswrapper[26425]: I0217 15:25:16.528106 26425 reconciler_common.go:293] "Volume detached for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:25:16.528393 master-0 kubenswrapper[26425]: I0217 15:25:16.528130 26425 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8379aee6-f810-4e5f-b209-8f6cb5f87df0-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 17 15:25:16.529779 master-0 kubenswrapper[26425]: I0217 15:25:16.529713 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls" (OuterVolumeSpecName: "telemeter-client-tls") pod "8379aee6-f810-4e5f-b209-8f6cb5f87df0" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0"). InnerVolumeSpecName "telemeter-client-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:25:16.530696 master-0 kubenswrapper[26425]: I0217 15:25:16.530646 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client" (OuterVolumeSpecName: "secret-telemeter-client") pod "8379aee6-f810-4e5f-b209-8f6cb5f87df0" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0"). InnerVolumeSpecName "secret-telemeter-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:25:16.534706 master-0 kubenswrapper[26425]: I0217 15:25:16.534618 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config" (OuterVolumeSpecName: "secret-telemeter-client-kube-rbac-proxy-config") pod "8379aee6-f810-4e5f-b209-8f6cb5f87df0" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0"). InnerVolumeSpecName "secret-telemeter-client-kube-rbac-proxy-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:25:16.534867 master-0 kubenswrapper[26425]: I0217 15:25:16.534790 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls" (OuterVolumeSpecName: "federate-client-tls") pod "8379aee6-f810-4e5f-b209-8f6cb5f87df0" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0"). InnerVolumeSpecName "federate-client-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:25:16.535888 master-0 kubenswrapper[26425]: I0217 15:25:16.535797 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8379aee6-f810-4e5f-b209-8f6cb5f87df0-kube-api-access-sj92w" (OuterVolumeSpecName: "kube-api-access-sj92w") pod "8379aee6-f810-4e5f-b209-8f6cb5f87df0" (UID: "8379aee6-f810-4e5f-b209-8f6cb5f87df0"). InnerVolumeSpecName "kube-api-access-sj92w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:25:16.629875 master-0 kubenswrapper[26425]: I0217 15:25:16.629802 26425 reconciler_common.go:293] "Volume detached for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-telemeter-client-tls\") on node \"master-0\" DevicePath \"\"" Feb 17 15:25:16.629875 master-0 kubenswrapper[26425]: I0217 15:25:16.629853 26425 reconciler_common.go:293] "Volume detached for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client\") on node \"master-0\" DevicePath \"\"" Feb 17 15:25:16.629875 master-0 kubenswrapper[26425]: I0217 15:25:16.629869 26425 reconciler_common.go:293] "Volume detached for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-federate-client-tls\") on node \"master-0\" DevicePath \"\"" Feb 17 15:25:16.629875 master-0 kubenswrapper[26425]: I0217 15:25:16.629885 26425 reconciler_common.go:293] "Volume detached for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8379aee6-f810-4e5f-b209-8f6cb5f87df0-secret-telemeter-client-kube-rbac-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:25:16.630316 master-0 kubenswrapper[26425]: I0217 15:25:16.629903 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj92w\" (UniqueName: \"kubernetes.io/projected/8379aee6-f810-4e5f-b209-8f6cb5f87df0-kube-api-access-sj92w\") on node \"master-0\" DevicePath \"\"" Feb 17 15:25:17.076663 master-0 kubenswrapper[26425]: I0217 15:25:17.076576 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7fbdcd9689-spqtt_8379aee6-f810-4e5f-b209-8f6cb5f87df0/telemeter-client/0.log" Feb 17 15:25:17.077701 master-0 kubenswrapper[26425]: I0217 15:25:17.076774 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" event={"ID":"8379aee6-f810-4e5f-b209-8f6cb5f87df0","Type":"ContainerDied","Data":"c73742e20a24cd489609b6484bb7dd86a6b3725d2919288b5ca15357b170f83e"} Feb 17 15:25:17.077701 master-0 kubenswrapper[26425]: I0217 15:25:17.076907 26425 scope.go:117] "RemoveContainer" containerID="17ed6cea7264bf0a4aee500a4d88ade7ea2777ab27aa21f615eafa009fe91ae7" Feb 17 15:25:17.077701 master-0 kubenswrapper[26425]: I0217 15:25:17.077282 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-7fbdcd9689-spqtt" Feb 17 15:25:17.080949 master-0 kubenswrapper[26425]: I0217 15:25:17.080870 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-q4766_129dba1e-73df-4ea4-96c0-3eba78d568ba/snapshot-controller/7.log" Feb 17 15:25:17.080949 master-0 kubenswrapper[26425]: I0217 15:25:17.080933 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766" event={"ID":"129dba1e-73df-4ea4-96c0-3eba78d568ba","Type":"ContainerStarted","Data":"42db95544fd40cfdc967ab0aba66afd67efbd97fcb1cd50e3fa4d97ee78465f0"} Feb 17 15:25:17.100292 master-0 kubenswrapper[26425]: I0217 15:25:17.100219 26425 scope.go:117] "RemoveContainer" containerID="814f18394f5d77f7d7fe55ef10f2d92ca387fc05357af1309dc48dc0fb7b66a7" Feb 17 15:25:17.126512 master-0 kubenswrapper[26425]: I0217 15:25:17.126435 26425 scope.go:117] "RemoveContainer" containerID="d8f5d8e5601a1e0de83d6a922182ed26b2fc744ebae08cdcc7739ae26257bd02" Feb 17 15:25:17.164005 master-0 kubenswrapper[26425]: I0217 15:25:17.158851 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"] Feb 17 15:25:17.168513 master-0 kubenswrapper[26425]: I0217 15:25:17.168431 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-monitoring/telemeter-client-7fbdcd9689-spqtt"] Feb 17 15:25:18.407846 master-0 kubenswrapper[26425]: I0217 15:25:18.407758 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" path="/var/lib/kubelet/pods/8379aee6-f810-4e5f-b209-8f6cb5f87df0/volumes" Feb 17 15:25:23.239971 master-0 kubenswrapper[26425]: E0217 15:25:23.239881 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-aaauri1gstf68: secret "metrics-server-aaauri1gstf68" not found Feb 17 15:25:23.241388 master-0 kubenswrapper[26425]: E0217 15:25:23.240029 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:25:39.239999163 +0000 UTC m=+601.131723021 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : secret "metrics-server-aaauri1gstf68" not found Feb 17 15:25:39.295731 master-0 kubenswrapper[26425]: E0217 15:25:39.294449 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-aaauri1gstf68: secret "metrics-server-aaauri1gstf68" not found Feb 17 15:25:39.296817 master-0 kubenswrapper[26425]: E0217 15:25:39.295750 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:26:11.295691542 +0000 UTC m=+633.187415370 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : secret "metrics-server-aaauri1gstf68" not found Feb 17 15:26:00.545610 master-0 kubenswrapper[26425]: I0217 15:26:00.545495 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:26:00.546932 master-0 kubenswrapper[26425]: E0217 15:26:00.545695 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:26:00.546932 master-0 kubenswrapper[26425]: E0217 15:26:00.545713 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:26:00.546932 master-0 kubenswrapper[26425]: E0217 15:26:00.545761 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:28:02.545745521 +0000 UTC m=+744.437469339 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:26:11.328293 master-0 kubenswrapper[26425]: E0217 15:26:11.328214 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-aaauri1gstf68: secret "metrics-server-aaauri1gstf68" not found Feb 17 15:26:11.329773 master-0 kubenswrapper[26425]: E0217 15:26:11.328341 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:27:15.32831204 +0000 UTC m=+697.220035898 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : secret "metrics-server-aaauri1gstf68" not found Feb 17 15:27:15.716694 master-0 kubenswrapper[26425]: E0217 15:27:15.389776 26425 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-aaauri1gstf68: secret "metrics-server-aaauri1gstf68" not found Feb 17 15:27:15.716694 master-0 kubenswrapper[26425]: E0217 15:27:15.389971 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle podName:7c393109-8c98-4a73-be1a-608038e5d094 nodeName:}" failed. No retries permitted until 2026-02-17 15:29:17.389936013 +0000 UTC m=+819.281659861 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle") pod "metrics-server-f94977f65-sgf5z" (UID: "7c393109-8c98-4a73-be1a-608038e5d094") : secret "metrics-server-aaauri1gstf68" not found
Feb 17 15:27:28.241162 master-0 kubenswrapper[26425]: I0217 15:27:28.241068 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"]
Feb 17 15:27:28.242213 master-0 kubenswrapper[26425]: E0217 15:27:28.241877 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e43034-56d0-4fb2-8886-deb00b625686" containerName="installer"
Feb 17 15:27:28.242213 master-0 kubenswrapper[26425]: I0217 15:27:28.241903 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e43034-56d0-4fb2-8886-deb00b625686" containerName="installer"
Feb 17 15:27:28.242213 master-0 kubenswrapper[26425]: E0217 15:27:28.241963 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerName="kube-rbac-proxy"
Feb 17 15:27:28.242213 master-0 kubenswrapper[26425]: I0217 15:27:28.241979 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerName="kube-rbac-proxy"
Feb 17 15:27:28.242213 master-0 kubenswrapper[26425]: E0217 15:27:28.242013 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerName="reload"
Feb 17 15:27:28.242213 master-0 kubenswrapper[26425]: I0217 15:27:28.242064 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerName="reload"
Feb 17 15:27:28.242213 master-0 kubenswrapper[26425]: E0217 15:27:28.242095 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerName="telemeter-client"
Feb 17 15:27:28.242213 master-0 kubenswrapper[26425]: I0217 15:27:28.242108 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerName="telemeter-client"
Feb 17 15:27:28.242213 master-0 kubenswrapper[26425]: E0217 15:27:28.242187 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b6a099-f52a-428a-af09-d1842ce66891" containerName="installer"
Feb 17 15:27:28.243014 master-0 kubenswrapper[26425]: I0217 15:27:28.242201 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b6a099-f52a-428a-af09-d1842ce66891" containerName="installer"
Feb 17 15:27:28.243014 master-0 kubenswrapper[26425]: I0217 15:27:28.242697 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerName="reload"
Feb 17 15:27:28.243014 master-0 kubenswrapper[26425]: I0217 15:27:28.242778 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerName="kube-rbac-proxy"
Feb 17 15:27:28.243014 master-0 kubenswrapper[26425]: I0217 15:27:28.242870 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b6a099-f52a-428a-af09-d1842ce66891" containerName="installer"
Feb 17 15:27:28.243014 master-0 kubenswrapper[26425]: I0217 15:27:28.242924 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="8379aee6-f810-4e5f-b209-8f6cb5f87df0" containerName="telemeter-client"
Feb 17 15:27:28.243014 master-0 kubenswrapper[26425]: I0217 15:27:28.242946 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e43034-56d0-4fb2-8886-deb00b625686" containerName="installer"
Feb 17 15:27:28.248639 master-0 kubenswrapper[26425]: I0217 15:27:28.244301 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.248639 master-0 kubenswrapper[26425]: I0217 15:27:28.247649 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-85c85bc675-62rqj"]
Feb 17 15:27:28.262679 master-0 kubenswrapper[26425]: I0217 15:27:28.262086 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-flbia8i8i4eih"
Feb 17 15:27:28.298718 master-0 kubenswrapper[26425]: I0217 15:27:28.286433 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 17 15:27:28.298718 master-0 kubenswrapper[26425]: I0217 15:27:28.288883 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.301701 master-0 kubenswrapper[26425]: I0217 15:27:28.301618 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-tkkqz"
Feb 17 15:27:28.302247 master-0 kubenswrapper[26425]: I0217 15:27:28.302184 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Feb 17 15:27:28.302400 master-0 kubenswrapper[26425]: I0217 15:27:28.302248 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Feb 17 15:27:28.303580 master-0 kubenswrapper[26425]: I0217 15:27:28.303450 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-eu11557dmf9qt"
Feb 17 15:27:28.303747 master-0 kubenswrapper[26425]: I0217 15:27:28.303552 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Feb 17 15:27:28.304115 master-0 kubenswrapper[26425]: I0217 15:27:28.304046 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 17 15:27:28.306051 master-0 kubenswrapper[26425]: I0217 15:27:28.305974 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Feb 17 15:27:28.307319 master-0 kubenswrapper[26425]: I0217 15:27:28.307234 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 17 15:27:28.307580 master-0 kubenswrapper[26425]: I0217 15:27:28.307523 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.311327 master-0 kubenswrapper[26425]: I0217 15:27:28.311256 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 17 15:27:28.311616 master-0 kubenswrapper[26425]: I0217 15:27:28.311373 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 17 15:27:28.311616 master-0 kubenswrapper[26425]: I0217 15:27:28.311427 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 17 15:27:28.311616 master-0 kubenswrapper[26425]: I0217 15:27:28.311262 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-2tsl8"
Feb 17 15:27:28.311944 master-0 kubenswrapper[26425]: I0217 15:27:28.311782 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-7d1hat1ob2dke"
Feb 17 15:27:28.311944 master-0 kubenswrapper[26425]: I0217 15:27:28.311806 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Feb 17 15:27:28.311944 master-0 kubenswrapper[26425]: I0217 15:27:28.311848 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 17 15:27:28.312399 master-0 kubenswrapper[26425]: I0217 15:27:28.311936 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"]
Feb 17 15:27:28.312399 master-0 kubenswrapper[26425]: I0217 15:27:28.312039 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.312838 master-0 kubenswrapper[26425]: I0217 15:27:28.312579 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 17 15:27:28.314992 master-0 kubenswrapper[26425]: I0217 15:27:28.314936 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 17 15:27:28.315291 master-0 kubenswrapper[26425]: I0217 15:27:28.315242 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 17 15:27:28.315606 master-0 kubenswrapper[26425]: I0217 15:27:28.315564 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 17 15:27:28.315848 master-0 kubenswrapper[26425]: I0217 15:27:28.315776 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-pv4xc"
Feb 17 15:27:28.315848 master-0 kubenswrapper[26425]: I0217 15:27:28.315822 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Feb 17 15:27:28.317014 master-0 kubenswrapper[26425]: I0217 15:27:28.316140 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 17 15:27:28.317014 master-0 kubenswrapper[26425]: I0217 15:27:28.316542 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 17 15:27:28.317014 master-0 kubenswrapper[26425]: I0217 15:27:28.316577 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 17 15:27:28.318519 master-0 kubenswrapper[26425]: I0217 15:27:28.318390 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 17 15:27:28.318927 master-0 kubenswrapper[26425]: I0217 15:27:28.318874 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 17 15:27:28.322920 master-0 kubenswrapper[26425]: I0217 15:27:28.322883 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Feb 17 15:27:28.323174 master-0 kubenswrapper[26425]: I0217 15:27:28.323140 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Feb 17 15:27:28.323174 master-0 kubenswrapper[26425]: I0217 15:27:28.323153 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Feb 17 15:27:28.324040 master-0 kubenswrapper[26425]: I0217 15:27:28.323966 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Feb 17 15:27:28.329677 master-0 kubenswrapper[26425]: I0217 15:27:28.327411 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-85c85bc675-62rqj"]
Feb 17 15:27:28.337653 master-0 kubenswrapper[26425]: I0217 15:27:28.336358 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 17 15:27:28.356063 master-0 kubenswrapper[26425]: I0217 15:27:28.355990 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 17 15:27:28.394519 master-0 kubenswrapper[26425]: I0217 15:27:28.394424 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-secret-metrics-client-certs\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.394795 master-0 kubenswrapper[26425]: I0217 15:27:28.394528 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-secret-metrics-server-tls\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.394795 master-0 kubenswrapper[26425]: I0217 15:27:28.394572 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-audit-log\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.394795 master-0 kubenswrapper[26425]: I0217 15:27:28.394606 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-client-ca-bundle\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.394795 master-0 kubenswrapper[26425]: I0217 15:27:28.394678 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-grpc-tls\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.394795 master-0 kubenswrapper[26425]: I0217 15:27:28.394707 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2x79\" (UniqueName: \"kubernetes.io/projected/9d69a8cd-b0f9-4651-93a3-3226643fc380-kube-api-access-m2x79\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.394967 master-0 kubenswrapper[26425]: I0217 15:27:28.394847 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-tls\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.394967 master-0 kubenswrapper[26425]: I0217 15:27:28.394903 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.395153 master-0 kubenswrapper[26425]: I0217 15:27:28.395050 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kr66\" (UniqueName: \"kubernetes.io/projected/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-kube-api-access-8kr66\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.395221 master-0 kubenswrapper[26425]: I0217 15:27:28.395199 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.395273 master-0 kubenswrapper[26425]: I0217 15:27:28.395249 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.395419 master-0 kubenswrapper[26425]: I0217 15:27:28.395381 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d69a8cd-b0f9-4651-93a3-3226643fc380-metrics-client-ca\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.395530 master-0 kubenswrapper[26425]: I0217 15:27:28.395493 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.395567 master-0 kubenswrapper[26425]: I0217 15:27:28.395546 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-metrics-server-audit-profiles\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.395636 master-0 kubenswrapper[26425]: I0217 15:27:28.395609 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.496636 master-0 kubenswrapper[26425]: I0217 15:27:28.496471 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-grpc-tls\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.496636 master-0 kubenswrapper[26425]: I0217 15:27:28.496518 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2x79\" (UniqueName: \"kubernetes.io/projected/9d69a8cd-b0f9-4651-93a3-3226643fc380-kube-api-access-m2x79\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.496636 master-0 kubenswrapper[26425]: I0217 15:27:28.496545 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1115aa66-7b5c-4863-aa91-b28baff7e922-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.496636 master-0 kubenswrapper[26425]: I0217 15:27:28.496560 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks8hc\" (UniqueName: \"kubernetes.io/projected/1115aa66-7b5c-4863-aa91-b28baff7e922-kube-api-access-ks8hc\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.496636 master-0 kubenswrapper[26425]: I0217 15:27:28.496582 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.497182 master-0 kubenswrapper[26425]: I0217 15:27:28.496922 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.497182 master-0 kubenswrapper[26425]: I0217 15:27:28.496941 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-config-volume\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.497182 master-0 kubenswrapper[26425]: I0217 15:27:28.496958 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.497182 master-0 kubenswrapper[26425]: I0217 15:27:28.497185 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.497435 master-0 kubenswrapper[26425]: I0217 15:27:28.497204 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.497435 master-0 kubenswrapper[26425]: I0217 15:27:28.497225 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.497435 master-0 kubenswrapper[26425]: I0217 15:27:28.497289 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-tls\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.497435 master-0 kubenswrapper[26425]: I0217 15:27:28.497370 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.497435 master-0 kubenswrapper[26425]: I0217 15:27:28.497412 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kr66\" (UniqueName: \"kubernetes.io/projected/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-kube-api-access-8kr66\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.497818 master-0 kubenswrapper[26425]: I0217 15:27:28.497488 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.497818 master-0 kubenswrapper[26425]: I0217 15:27:28.497542 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.497818 master-0 kubenswrapper[26425]: I0217 15:27:28.497592 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.497818 master-0 kubenswrapper[26425]: I0217 15:27:28.497589 26425 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 17 15:27:28.497818 master-0 kubenswrapper[26425]: I0217 15:27:28.497643 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.497818 master-0 kubenswrapper[26425]: I0217 15:27:28.497680 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.498316 master-0 kubenswrapper[26425]: I0217 15:27:28.498263 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.498396 master-0 kubenswrapper[26425]: I0217 15:27:28.498341 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-web-config\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.498736 master-0 kubenswrapper[26425]: I0217 15:27:28.498396 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d69a8cd-b0f9-4651-93a3-3226643fc380-metrics-client-ca\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.498736 master-0 kubenswrapper[26425]: I0217 15:27:28.498420 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.498736 master-0 kubenswrapper[26425]: I0217 15:27:28.498482 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1115aa66-7b5c-4863-aa91-b28baff7e922-config-out\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.498736 master-0 kubenswrapper[26425]: I0217 15:27:28.498507 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-web-config\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.498736 master-0 kubenswrapper[26425]: I0217 15:27:28.498686 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-metrics-server-audit-profiles\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.499209 master-0 kubenswrapper[26425]: I0217 15:27:28.498780 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-config\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.499209 master-0 kubenswrapper[26425]: I0217 15:27:28.499012 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vsgn\" (UniqueName: \"kubernetes.io/projected/7284bcca-864c-40df-b7dc-9aecf470697a-kube-api-access-8vsgn\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.499209 master-0 kubenswrapper[26425]: I0217 15:27:28.499051 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.499209 master-0 kubenswrapper[26425]: I0217 15:27:28.499176 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.499669 master-0 kubenswrapper[26425]: I0217 15:27:28.499230 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7284bcca-864c-40df-b7dc-9aecf470697a-config-out\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.499669 master-0 kubenswrapper[26425]: I0217 15:27:28.499323 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.499669 master-0 kubenswrapper[26425]: I0217 15:27:28.499368 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.499669 master-0 kubenswrapper[26425]: I0217 15:27:28.499391 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.499669 master-0 kubenswrapper[26425]: I0217 15:27:28.499414 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.499669 master-0 kubenswrapper[26425]: I0217 15:27:28.499450 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:28.499669 master-0 kubenswrapper[26425]: I0217 15:27:28.499598 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-secret-metrics-server-tls\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.499669 master-0 kubenswrapper[26425]: I0217 15:27:28.499634 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-secret-metrics-client-certs\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.499669 master-0 kubenswrapper[26425]: I0217 15:27:28.499660 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-audit-log\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.499669 master-0 kubenswrapper[26425]: I0217 15:27:28.499692 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-client-ca-bundle\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.500853 master-0 kubenswrapper[26425]: I0217 15:27:28.499813 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7284bcca-864c-40df-b7dc-9aecf470697a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.500853 master-0 kubenswrapper[26425]: I0217 15:27:28.499850 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.500853 master-0 kubenswrapper[26425]: I0217 15:27:28.500095 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.500853 master-0 kubenswrapper[26425]: I0217 15:27:28.500168 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.500853 master-0 kubenswrapper[26425]: I0217 15:27:28.500233 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.500853 master-0 kubenswrapper[26425]: I0217 15:27:28.500445 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-audit-log\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.501390 master-0 kubenswrapper[26425]: I0217 15:27:28.501109 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"
Feb 17 15:27:28.502368 master-0 kubenswrapper[26425]: I0217 15:27:28.502297 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d69a8cd-b0f9-4651-93a3-3226643fc380-metrics-client-ca\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" Feb 17 15:27:28.502929 master-0 kubenswrapper[26425]: I0217 15:27:28.502863 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" Feb 17 15:27:28.503103 master-0 kubenswrapper[26425]: I0217 15:27:28.502934 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-metrics-server-audit-profiles\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq" Feb 17 15:27:28.504748 master-0 kubenswrapper[26425]: I0217 15:27:28.504690 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-grpc-tls\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" Feb 17 15:27:28.506806 master-0 kubenswrapper[26425]: I0217 15:27:28.506770 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-secret-metrics-server-tls\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " 
pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq" Feb 17 15:27:28.507206 master-0 kubenswrapper[26425]: I0217 15:27:28.507159 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" Feb 17 15:27:28.508048 master-0 kubenswrapper[26425]: I0217 15:27:28.507991 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-client-ca-bundle\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq" Feb 17 15:27:28.508184 master-0 kubenswrapper[26425]: I0217 15:27:28.508159 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" Feb 17 15:27:28.508791 master-0 kubenswrapper[26425]: I0217 15:27:28.508720 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-secret-metrics-client-certs\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq" Feb 17 15:27:28.514099 master-0 kubenswrapper[26425]: I0217 15:27:28.514038 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2x79\" 
(UniqueName: \"kubernetes.io/projected/9d69a8cd-b0f9-4651-93a3-3226643fc380-kube-api-access-m2x79\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" Feb 17 15:27:28.515730 master-0 kubenswrapper[26425]: I0217 15:27:28.515665 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" Feb 17 15:27:28.521068 master-0 kubenswrapper[26425]: I0217 15:27:28.521021 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kr66\" (UniqueName: \"kubernetes.io/projected/ff162838-4c18-4f41-b8fe-d9f0c55b1d2a-kube-api-access-8kr66\") pod \"metrics-server-75c4d5b7f-t6zcq\" (UID: \"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a\") " pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq" Feb 17 15:27:28.534350 master-0 kubenswrapper[26425]: I0217 15:27:28.534198 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/9d69a8cd-b0f9-4651-93a3-3226643fc380-secret-thanos-querier-tls\") pod \"thanos-querier-85c85bc675-62rqj\" (UID: \"9d69a8cd-b0f9-4651-93a3-3226643fc380\") " pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" Feb 17 15:27:28.602323 master-0 kubenswrapper[26425]: I0217 15:27:28.602254 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks8hc\" (UniqueName: \"kubernetes.io/projected/1115aa66-7b5c-4863-aa91-b28baff7e922-kube-api-access-ks8hc\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 
15:27:28.602579 master-0 kubenswrapper[26425]: I0217 15:27:28.602329 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1115aa66-7b5c-4863-aa91-b28baff7e922-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.602662 master-0 kubenswrapper[26425]: I0217 15:27:28.602602 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.602731 master-0 kubenswrapper[26425]: I0217 15:27:28.602686 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq" Feb 17 15:27:28.602783 master-0 kubenswrapper[26425]: I0217 15:27:28.602729 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.602783 master-0 kubenswrapper[26425]: I0217 15:27:28.602773 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-config-volume\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.602866 master-0 kubenswrapper[26425]: I0217 15:27:28.602810 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.602866 master-0 kubenswrapper[26425]: I0217 15:27:28.602849 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.603579 master-0 kubenswrapper[26425]: E0217 15:27:28.603008 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:27:29.102983541 +0000 UTC m=+710.994707389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : configmap references non-existent config key: ca-bundle.crt Feb 17 15:27:28.603579 master-0 kubenswrapper[26425]: E0217 15:27:28.603268 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:29.103247537 +0000 UTC m=+710.994971355 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : configmap references non-existent config key: ca-bundle.crt Feb 17 15:27:28.603579 master-0 kubenswrapper[26425]: I0217 15:27:28.603320 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.603579 master-0 kubenswrapper[26425]: I0217 15:27:28.603371 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.603579 master-0 kubenswrapper[26425]: I0217 15:27:28.603496 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.603579 master-0 kubenswrapper[26425]: I0217 15:27:28.603531 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.604048 
master-0 kubenswrapper[26425]: I0217 15:27:28.603740 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.604048 master-0 kubenswrapper[26425]: I0217 15:27:28.603824 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.604048 master-0 kubenswrapper[26425]: I0217 15:27:28.603879 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-web-config\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.604294 master-0 kubenswrapper[26425]: E0217 15:27:28.604089 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 17 15:27:28.604294 master-0 kubenswrapper[26425]: E0217 15:27:28.604225 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:29.104193881 +0000 UTC m=+710.995917739 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 17 15:27:28.604294 master-0 kubenswrapper[26425]: I0217 15:27:28.604229 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.605207 master-0 kubenswrapper[26425]: I0217 15:27:28.604292 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1115aa66-7b5c-4863-aa91-b28baff7e922-config-out\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.605207 master-0 kubenswrapper[26425]: I0217 15:27:28.604349 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-config\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.605207 master-0 kubenswrapper[26425]: I0217 15:27:28.604383 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-web-config\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.605207 master-0 kubenswrapper[26425]: I0217 15:27:28.604488 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8vsgn\" (UniqueName: \"kubernetes.io/projected/7284bcca-864c-40df-b7dc-9aecf470697a-kube-api-access-8vsgn\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.605207 master-0 kubenswrapper[26425]: I0217 15:27:28.604559 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.605207 master-0 kubenswrapper[26425]: I0217 15:27:28.604610 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.605207 master-0 kubenswrapper[26425]: I0217 15:27:28.604678 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7284bcca-864c-40df-b7dc-9aecf470697a-config-out\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.605207 master-0 kubenswrapper[26425]: I0217 15:27:28.604730 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.605207 master-0 kubenswrapper[26425]: I0217 15:27:28.604787 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.605207 master-0 kubenswrapper[26425]: I0217 15:27:28.604800 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.605207 master-0 kubenswrapper[26425]: I0217 15:27:28.604845 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.605207 master-0 kubenswrapper[26425]: I0217 15:27:28.604901 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.605207 master-0 kubenswrapper[26425]: I0217 15:27:28.605133 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7284bcca-864c-40df-b7dc-9aecf470697a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.606740 master-0 kubenswrapper[26425]: I0217 
15:27:28.605255 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.606740 master-0 kubenswrapper[26425]: I0217 15:27:28.605372 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.606740 master-0 kubenswrapper[26425]: I0217 15:27:28.605528 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.606740 master-0 kubenswrapper[26425]: I0217 15:27:28.605550 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.606740 master-0 kubenswrapper[26425]: I0217 15:27:28.605745 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: 
\"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.607744 master-0 kubenswrapper[26425]: I0217 15:27:28.607692 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1115aa66-7b5c-4863-aa91-b28baff7e922-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.607876 master-0 kubenswrapper[26425]: I0217 15:27:28.605613 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.610139 master-0 kubenswrapper[26425]: I0217 15:27:28.610090 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-web-config\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.610293 master-0 kubenswrapper[26425]: I0217 15:27:28.610226 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.610840 master-0 kubenswrapper[26425]: I0217 15:27:28.610764 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-web-config\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.610840 master-0 kubenswrapper[26425]: I0217 15:27:28.610816 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.611134 master-0 kubenswrapper[26425]: E0217 15:27:28.611078 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Feb 17 15:27:28.611209 master-0 kubenswrapper[26425]: E0217 15:27:28.611191 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:29.11115716 +0000 UTC m=+711.002881008 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-tls" not found Feb 17 15:27:28.611286 master-0 kubenswrapper[26425]: E0217 15:27:28.611264 26425 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Feb 17 15:27:28.611346 master-0 kubenswrapper[26425]: I0217 15:27:28.611265 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.611346 master-0 kubenswrapper[26425]: E0217 15:27:28.611328 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:27:29.111306974 +0000 UTC m=+711.003030842 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : secret "alertmanager-main-tls" not found Feb 17 15:27:28.612132 master-0 kubenswrapper[26425]: I0217 15:27:28.612071 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.614392 master-0 kubenswrapper[26425]: I0217 15:27:28.614345 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1115aa66-7b5c-4863-aa91-b28baff7e922-config-out\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:27:28.614835 master-0 kubenswrapper[26425]: I0217 15:27:28.614778 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-config\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.615528 master-0 kubenswrapper[26425]: I0217 15:27:28.615469 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7284bcca-864c-40df-b7dc-9aecf470697a-config-out\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:27:28.618213 master-0 kubenswrapper[26425]: I0217 15:27:28.618154 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-config-volume\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.620812 master-0 kubenswrapper[26425]: I0217 15:27:28.620740 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.621558 master-0 kubenswrapper[26425]: I0217 15:27:28.621502 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.621743 master-0 kubenswrapper[26425]: I0217 15:27:28.621685 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.624218 master-0 kubenswrapper[26425]: I0217 15:27:28.624163 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.624627 master-0 kubenswrapper[26425]: I0217 15:27:28.624545 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.625784 master-0 kubenswrapper[26425]: I0217 15:27:28.625714 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7284bcca-864c-40df-b7dc-9aecf470697a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.627239 master-0 kubenswrapper[26425]: I0217 15:27:28.627181 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.636064 master-0 kubenswrapper[26425]: I0217 15:27:28.635996 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.638934 master-0 kubenswrapper[26425]: I0217 15:27:28.638878 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks8hc\" (UniqueName: \"kubernetes.io/projected/1115aa66-7b5c-4863-aa91-b28baff7e922-kube-api-access-ks8hc\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:28.642889 master-0 kubenswrapper[26425]: I0217 15:27:28.642832 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vsgn\" (UniqueName: \"kubernetes.io/projected/7284bcca-864c-40df-b7dc-9aecf470697a-kube-api-access-8vsgn\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:28.645393 master-0 kubenswrapper[26425]: I0217 15:27:28.645348 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:29.034535 master-0 kubenswrapper[26425]: I0217 15:27:29.034268 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-85c85bc675-62rqj"]
Feb 17 15:27:29.043596 master-0 kubenswrapper[26425]: W0217 15:27:29.043530 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d69a8cd_b0f9_4651_93a3_3226643fc380.slice/crio-aea87709212e048d48f0755ea696e188f6807f01772e8dba56a2cafd1070c8de WatchSource:0}: Error finding container aea87709212e048d48f0755ea696e188f6807f01772e8dba56a2cafd1070c8de: Status 404 returned error can't find the container with id aea87709212e048d48f0755ea696e188f6807f01772e8dba56a2cafd1070c8de
Feb 17 15:27:29.047880 master-0 kubenswrapper[26425]: I0217 15:27:29.047856 26425 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 15:27:29.113348 master-0 kubenswrapper[26425]: I0217 15:27:29.113244 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:29.113641 master-0 kubenswrapper[26425]: I0217 15:27:29.113393 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:29.113641 master-0 kubenswrapper[26425]: E0217 15:27:29.113538 26425 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Feb 17 15:27:29.113641 master-0 kubenswrapper[26425]: E0217 15:27:29.113575 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found
Feb 17 15:27:29.113641 master-0 kubenswrapper[26425]: E0217 15:27:29.113627 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:27:30.113604617 +0000 UTC m=+712.005328445 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : secret "alertmanager-main-tls" not found
Feb 17 15:27:29.113641 master-0 kubenswrapper[26425]: I0217 15:27:29.113608 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:29.114084 master-0 kubenswrapper[26425]: E0217 15:27:29.113673 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:30.113639098 +0000 UTC m=+712.005363016 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-tls" not found
Feb 17 15:27:29.114084 master-0 kubenswrapper[26425]: I0217 15:27:29.113736 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:29.114084 master-0 kubenswrapper[26425]: E0217 15:27:29.113936 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:30.113894754 +0000 UTC m=+712.005618612 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:27:29.114084 master-0 kubenswrapper[26425]: I0217 15:27:29.114025 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:29.114373 master-0 kubenswrapper[26425]: E0217 15:27:29.114115 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found
Feb 17 15:27:29.114373 master-0 kubenswrapper[26425]: E0217 15:27:29.114113 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:27:30.114064158 +0000 UTC m=+712.005788026 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:27:29.114373 master-0 kubenswrapper[26425]: E0217 15:27:29.114202 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:30.114180821 +0000 UTC m=+712.005904769 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-thanos-sidecar-tls" not found
Feb 17 15:27:29.174511 master-0 kubenswrapper[26425]: I0217 15:27:29.174291 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-75c4d5b7f-t6zcq"]
Feb 17 15:27:29.184240 master-0 kubenswrapper[26425]: W0217 15:27:29.184167 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff162838_4c18_4f41_b8fe_d9f0c55b1d2a.slice/crio-b5aca0daca4ce832d7fd7743318ab94bd7d3a7abbb9c8ceac81798fa3cd4afd5 WatchSource:0}: Error finding container b5aca0daca4ce832d7fd7743318ab94bd7d3a7abbb9c8ceac81798fa3cd4afd5: Status 404 returned error can't find the container with id b5aca0daca4ce832d7fd7743318ab94bd7d3a7abbb9c8ceac81798fa3cd4afd5
Feb 17 15:27:29.272517 master-0 kubenswrapper[26425]: I0217 15:27:29.272406 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" event={"ID":"9d69a8cd-b0f9-4651-93a3-3226643fc380","Type":"ContainerStarted","Data":"aea87709212e048d48f0755ea696e188f6807f01772e8dba56a2cafd1070c8de"}
Feb 17 15:27:29.274091 master-0 kubenswrapper[26425]: I0217 15:27:29.274014 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq" event={"ID":"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a","Type":"ContainerStarted","Data":"b5aca0daca4ce832d7fd7743318ab94bd7d3a7abbb9c8ceac81798fa3cd4afd5"}
Feb 17 15:27:30.129008 master-0 kubenswrapper[26425]: I0217 15:27:30.128927 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:30.129008 master-0 kubenswrapper[26425]: I0217 15:27:30.128983 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:30.129494 master-0 kubenswrapper[26425]: I0217 15:27:30.129230 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:30.129494 master-0 kubenswrapper[26425]: E0217 15:27:30.129265 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:32.129234931 +0000 UTC m=+714.020958789 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:27:30.129494 master-0 kubenswrapper[26425]: E0217 15:27:30.129337 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:27:32.129327323 +0000 UTC m=+714.021051141 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:27:30.129494 master-0 kubenswrapper[26425]: I0217 15:27:30.129389 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:30.129494 master-0 kubenswrapper[26425]: I0217 15:27:30.129439 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:30.129864 master-0 kubenswrapper[26425]: E0217 15:27:30.129553 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found
Feb 17 15:27:30.129864 master-0 kubenswrapper[26425]: E0217 15:27:30.129582 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:32.129573139 +0000 UTC m=+714.021296957 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-tls" not found
Feb 17 15:27:30.129864 master-0 kubenswrapper[26425]: E0217 15:27:30.129634 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found
Feb 17 15:27:30.129864 master-0 kubenswrapper[26425]: E0217 15:27:30.129670 26425 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Feb 17 15:27:30.129864 master-0 kubenswrapper[26425]: E0217 15:27:30.129734 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:27:32.129720353 +0000 UTC m=+714.021444171 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : secret "alertmanager-main-tls" not found
Feb 17 15:27:30.129864 master-0 kubenswrapper[26425]: E0217 15:27:30.129749 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:32.129743073 +0000 UTC m=+714.021466891 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-thanos-sidecar-tls" not found
Feb 17 15:27:30.287531 master-0 kubenswrapper[26425]: I0217 15:27:30.287380 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq" event={"ID":"ff162838-4c18-4f41-b8fe-d9f0c55b1d2a","Type":"ContainerStarted","Data":"cbe53c6b91d7d2146bda87fbb92e15932336ca8b1500cca8b9faee6a0e40b04f"}
Feb 17 15:27:30.321040 master-0 kubenswrapper[26425]: I0217 15:27:30.320921 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq" podStartSLOduration=136.320894919 podStartE2EDuration="2m16.320894919s" podCreationTimestamp="2026-02-17 15:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:27:30.318362277 +0000 UTC m=+712.210086105" watchObservedRunningTime="2026-02-17 15:27:30.320894919 +0000 UTC m=+712.212618767"
Feb 17 15:27:32.165482 master-0 kubenswrapper[26425]: I0217 15:27:32.162933 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:32.165482 master-0 kubenswrapper[26425]: I0217 15:27:32.163032 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:32.165482 master-0 kubenswrapper[26425]: I0217 15:27:32.163063 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:32.165482 master-0 kubenswrapper[26425]: I0217 15:27:32.163111 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:32.165482 master-0 kubenswrapper[26425]: I0217 15:27:32.163137 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:32.165482 master-0 kubenswrapper[26425]: E0217 15:27:32.163349 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:27:36.16332462 +0000 UTC m=+718.055048438 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:27:32.165482 master-0 kubenswrapper[26425]: E0217 15:27:32.163937 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found
Feb 17 15:27:32.165482 master-0 kubenswrapper[26425]: E0217 15:27:32.163978 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:36.163970555 +0000 UTC m=+718.055694363 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-thanos-sidecar-tls" not found
Feb 17 15:27:32.165482 master-0 kubenswrapper[26425]: E0217 15:27:32.164020 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found
Feb 17 15:27:32.165482 master-0 kubenswrapper[26425]: E0217 15:27:32.164040 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:36.164033407 +0000 UTC m=+718.055757225 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-tls" not found
Feb 17 15:27:32.165482 master-0 kubenswrapper[26425]: E0217 15:27:32.164081 26425 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Feb 17 15:27:32.165482 master-0 kubenswrapper[26425]: E0217 15:27:32.164102 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:27:36.164096688 +0000 UTC m=+718.055820506 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : secret "alertmanager-main-tls" not found
Feb 17 15:27:32.165482 master-0 kubenswrapper[26425]: E0217 15:27:32.164145 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:36.164137179 +0000 UTC m=+718.055860997 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:27:32.305581 master-0 kubenswrapper[26425]: I0217 15:27:32.305441 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" event={"ID":"9d69a8cd-b0f9-4651-93a3-3226643fc380","Type":"ContainerStarted","Data":"75dc8bfddb6a4d115bb0b33f4e5d15eb045df25da464e70c70a343ddf656488f"}
Feb 17 15:27:32.305581 master-0 kubenswrapper[26425]: I0217 15:27:32.305529 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" event={"ID":"9d69a8cd-b0f9-4651-93a3-3226643fc380","Type":"ContainerStarted","Data":"e69552667e175ce40681707aef1a157f1fdb4a704169f653452f59026602e2ff"}
Feb 17 15:27:32.305581 master-0 kubenswrapper[26425]: I0217 15:27:32.305563 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" event={"ID":"9d69a8cd-b0f9-4651-93a3-3226643fc380","Type":"ContainerStarted","Data":"d6108062a0aa156d3889bb3e26e6d97cf6346d789de75f464809982c7c1514d6"}
Feb 17 15:27:34.326996 master-0 kubenswrapper[26425]: I0217 15:27:34.326911 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" event={"ID":"9d69a8cd-b0f9-4651-93a3-3226643fc380","Type":"ContainerStarted","Data":"9c26feea1271a2f712d292dda3916ad8f08ecd5eaec79bb9a4a0e4b11aaf2c4b"}
Feb 17 15:27:34.326996 master-0 kubenswrapper[26425]: I0217 15:27:34.326991 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" event={"ID":"9d69a8cd-b0f9-4651-93a3-3226643fc380","Type":"ContainerStarted","Data":"ce9647e5b763163c47cfbfa42fb469d4597155b389240da5db7e33099dde2344"}
Feb 17 15:27:34.327727 master-0 kubenswrapper[26425]: I0217 15:27:34.327012 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" event={"ID":"9d69a8cd-b0f9-4651-93a3-3226643fc380","Type":"ContainerStarted","Data":"83783bbf9d921ded25a277d3b095a85a95a0545e5aa8d869906326cc98f9945f"}
Feb 17 15:27:34.327727 master-0 kubenswrapper[26425]: I0217 15:27:34.327200 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:34.403860 master-0 kubenswrapper[26425]: I0217 15:27:34.403722 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj" podStartSLOduration=136.318957022 podStartE2EDuration="2m20.403692031s" podCreationTimestamp="2026-02-17 15:25:14 +0000 UTC" firstStartedPulling="2026-02-17 15:27:29.047804455 +0000 UTC m=+710.939528283" lastFinishedPulling="2026-02-17 15:27:33.132539474 +0000 UTC m=+715.024263292" observedRunningTime="2026-02-17 15:27:34.402115893 +0000 UTC m=+716.293839791" watchObservedRunningTime="2026-02-17 15:27:34.403692031 +0000 UTC m=+716.295415879"
Feb 17 15:27:36.241519 master-0 kubenswrapper[26425]: I0217 15:27:36.241418 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:36.241519 master-0 kubenswrapper[26425]: I0217 15:27:36.241513 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:36.242685 master-0 kubenswrapper[26425]: I0217 15:27:36.241565 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:36.242685 master-0 kubenswrapper[26425]: I0217 15:27:36.241612 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:36.242685 master-0 kubenswrapper[26425]: E0217 15:27:36.241802 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:44.241766906 +0000 UTC m=+726.133490754 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:27:36.242685 master-0 kubenswrapper[26425]: E0217 15:27:36.241871 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found
Feb 17 15:27:36.242685 master-0 kubenswrapper[26425]: E0217 15:27:36.241928 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:44.241912819 +0000 UTC m=+726.133636637 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-thanos-sidecar-tls" not found
Feb 17 15:27:36.242685 master-0 kubenswrapper[26425]: E0217 15:27:36.241956 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found
Feb 17 15:27:36.242685 master-0 kubenswrapper[26425]: E0217 15:27:36.241992 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:27:44.241981701 +0000 UTC m=+726.133705529 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-tls" not found
Feb 17 15:27:36.242685 master-0 kubenswrapper[26425]: E0217 15:27:36.242071 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:27:44.242062143 +0000 UTC m=+726.133785961 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:27:36.242685 master-0 kubenswrapper[26425]: I0217 15:27:36.242101 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:36.242685 master-0 kubenswrapper[26425]: E0217 15:27:36.242198 26425 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Feb 17 15:27:36.242685 master-0 kubenswrapper[26425]: E0217 15:27:36.242228 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:27:44.242218867 +0000 UTC m=+726.133942695 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : secret "alertmanager-main-tls" not found
Feb 17 15:27:38.655998 master-0 kubenswrapper[26425]: I0217 15:27:38.655876 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-85c85bc675-62rqj"
Feb 17 15:27:44.280171 master-0 kubenswrapper[26425]: I0217 15:27:44.280033 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:44.281361 master-0 kubenswrapper[26425]: E0217 15:27:44.280212 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found
Feb 17 15:27:44.281361 master-0 kubenswrapper[26425]: I0217 15:27:44.280296 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:44.281361 master-0 kubenswrapper[26425]: E0217 15:27:44.280501 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:28:00.28047088 +0000 UTC m=+742.172194708 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-tls" not found
Feb 17 15:27:44.281361 master-0 kubenswrapper[26425]: E0217 15:27:44.280517 26425 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Feb 17 15:27:44.281361 master-0 kubenswrapper[26425]: E0217 15:27:44.280642 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:28:00.280610503 +0000 UTC m=+742.172334361 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : secret "alertmanager-main-tls" not found
Feb 17 15:27:44.281361 master-0 kubenswrapper[26425]: E0217 15:27:44.280981 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:28:00.280949261 +0000 UTC m=+742.172673139 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:27:44.281361 master-0 kubenswrapper[26425]: I0217 15:27:44.280580 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:44.281361 master-0 kubenswrapper[26425]: I0217 15:27:44.281221 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:27:44.281361 master-0 kubenswrapper[26425]: I0217 15:27:44.281297 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:27:44.282600 master-0 kubenswrapper[26425]: E0217 15:27:44.281407 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found
Feb 17 15:27:44.282600 master-0 kubenswrapper[26425]: E0217 15:27:44.281481 26425 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:28:00.281446114 +0000 UTC m=+742.173170032 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 17 15:27:44.282600 master-0 kubenswrapper[26425]: E0217 15:27:44.281633 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:28:00.281604957 +0000 UTC m=+742.173328815 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : configmap references non-existent config key: ca-bundle.crt Feb 17 15:27:45.296866 master-0 kubenswrapper[26425]: I0217 15:27:45.296794 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:27:45.405704 master-0 kubenswrapper[26425]: I0217 15:27:45.405597 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7c393109-8c98-4a73-be1a-608038e5d094-audit-log\") pod \"7c393109-8c98-4a73-be1a-608038e5d094\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " Feb 17 15:27:45.405704 master-0 kubenswrapper[26425]: I0217 15:27:45.405668 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle\") pod \"7c393109-8c98-4a73-be1a-608038e5d094\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " Feb 17 15:27:45.405704 master-0 kubenswrapper[26425]: I0217 15:27:45.405710 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles\") pod \"7c393109-8c98-4a73-be1a-608038e5d094\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " Feb 17 15:27:45.406228 master-0 kubenswrapper[26425]: I0217 15:27:45.405741 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs\") pod \"7c393109-8c98-4a73-be1a-608038e5d094\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " Feb 17 15:27:45.406228 master-0 kubenswrapper[26425]: I0217 15:27:45.405779 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls\") pod \"7c393109-8c98-4a73-be1a-608038e5d094\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " Feb 17 
15:27:45.406228 master-0 kubenswrapper[26425]: I0217 15:27:45.405831 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f54vt\" (UniqueName: \"kubernetes.io/projected/7c393109-8c98-4a73-be1a-608038e5d094-kube-api-access-f54vt\") pod \"7c393109-8c98-4a73-be1a-608038e5d094\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " Feb 17 15:27:45.406228 master-0 kubenswrapper[26425]: I0217 15:27:45.405865 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle\") pod \"7c393109-8c98-4a73-be1a-608038e5d094\" (UID: \"7c393109-8c98-4a73-be1a-608038e5d094\") " Feb 17 15:27:45.407230 master-0 kubenswrapper[26425]: I0217 15:27:45.407157 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "7c393109-8c98-4a73-be1a-608038e5d094" (UID: "7c393109-8c98-4a73-be1a-608038e5d094"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:27:45.407384 master-0 kubenswrapper[26425]: I0217 15:27:45.407286 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c393109-8c98-4a73-be1a-608038e5d094-audit-log" (OuterVolumeSpecName: "audit-log") pod "7c393109-8c98-4a73-be1a-608038e5d094" (UID: "7c393109-8c98-4a73-be1a-608038e5d094"). InnerVolumeSpecName "audit-log". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:27:45.408022 master-0 kubenswrapper[26425]: I0217 15:27:45.407952 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "7c393109-8c98-4a73-be1a-608038e5d094" (UID: "7c393109-8c98-4a73-be1a-608038e5d094"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:27:45.408691 master-0 kubenswrapper[26425]: I0217 15:27:45.408627 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "7c393109-8c98-4a73-be1a-608038e5d094" (UID: "7c393109-8c98-4a73-be1a-608038e5d094"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:27:45.409671 master-0 kubenswrapper[26425]: I0217 15:27:45.409589 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "7c393109-8c98-4a73-be1a-608038e5d094" (UID: "7c393109-8c98-4a73-be1a-608038e5d094"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:27:45.410050 master-0 kubenswrapper[26425]: I0217 15:27:45.409978 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "7c393109-8c98-4a73-be1a-608038e5d094" (UID: "7c393109-8c98-4a73-be1a-608038e5d094"). InnerVolumeSpecName "secret-metrics-client-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:27:45.411424 master-0 kubenswrapper[26425]: I0217 15:27:45.411382 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c393109-8c98-4a73-be1a-608038e5d094-kube-api-access-f54vt" (OuterVolumeSpecName: "kube-api-access-f54vt") pod "7c393109-8c98-4a73-be1a-608038e5d094" (UID: "7c393109-8c98-4a73-be1a-608038e5d094"). InnerVolumeSpecName "kube-api-access-f54vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:27:45.426873 master-0 kubenswrapper[26425]: I0217 15:27:45.426807 26425 generic.go:334] "Generic (PLEG): container finished" podID="7c393109-8c98-4a73-be1a-608038e5d094" containerID="f7aa6f291b153a21a0df697b856ff7c2ab858d591159344f0d74c325321910e3" exitCode=0 Feb 17 15:27:45.426873 master-0 kubenswrapper[26425]: I0217 15:27:45.426848 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" event={"ID":"7c393109-8c98-4a73-be1a-608038e5d094","Type":"ContainerDied","Data":"f7aa6f291b153a21a0df697b856ff7c2ab858d591159344f0d74c325321910e3"} Feb 17 15:27:45.426873 master-0 kubenswrapper[26425]: I0217 15:27:45.426875 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" event={"ID":"7c393109-8c98-4a73-be1a-608038e5d094","Type":"ContainerDied","Data":"80a35c92c437f32b29f410d19a1ce0763e9f007a6c4df0b00fdf0704012a2c09"} Feb 17 15:27:45.426873 master-0 kubenswrapper[26425]: I0217 15:27:45.426893 26425 scope.go:117] "RemoveContainer" containerID="f7aa6f291b153a21a0df697b856ff7c2ab858d591159344f0d74c325321910e3" Feb 17 15:27:45.427365 master-0 kubenswrapper[26425]: I0217 15:27:45.426905 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-f94977f65-sgf5z" Feb 17 15:27:45.482667 master-0 kubenswrapper[26425]: I0217 15:27:45.482579 26425 scope.go:117] "RemoveContainer" containerID="f7aa6f291b153a21a0df697b856ff7c2ab858d591159344f0d74c325321910e3" Feb 17 15:27:45.483089 master-0 kubenswrapper[26425]: E0217 15:27:45.483047 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7aa6f291b153a21a0df697b856ff7c2ab858d591159344f0d74c325321910e3\": container with ID starting with f7aa6f291b153a21a0df697b856ff7c2ab858d591159344f0d74c325321910e3 not found: ID does not exist" containerID="f7aa6f291b153a21a0df697b856ff7c2ab858d591159344f0d74c325321910e3" Feb 17 15:27:45.483169 master-0 kubenswrapper[26425]: I0217 15:27:45.483097 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7aa6f291b153a21a0df697b856ff7c2ab858d591159344f0d74c325321910e3"} err="failed to get container status \"f7aa6f291b153a21a0df697b856ff7c2ab858d591159344f0d74c325321910e3\": rpc error: code = NotFound desc = could not find container \"f7aa6f291b153a21a0df697b856ff7c2ab858d591159344f0d74c325321910e3\": container with ID starting with f7aa6f291b153a21a0df697b856ff7c2ab858d591159344f0d74c325321910e3 not found: ID does not exist" Feb 17 15:27:45.505890 master-0 kubenswrapper[26425]: I0217 15:27:45.505737 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-f94977f65-sgf5z"] Feb 17 15:27:45.508289 master-0 kubenswrapper[26425]: I0217 15:27:45.508228 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f54vt\" (UniqueName: \"kubernetes.io/projected/7c393109-8c98-4a73-be1a-608038e5d094-kube-api-access-f54vt\") on node \"master-0\" DevicePath \"\"" Feb 17 15:27:45.508289 master-0 kubenswrapper[26425]: I0217 15:27:45.508275 26425 reconciler_common.go:293] "Volume detached for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:27:45.508289 master-0 kubenswrapper[26425]: I0217 15:27:45.508286 26425 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7c393109-8c98-4a73-be1a-608038e5d094-audit-log\") on node \"master-0\" DevicePath \"\"" Feb 17 15:27:45.508483 master-0 kubenswrapper[26425]: I0217 15:27:45.508299 26425 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:27:45.508483 master-0 kubenswrapper[26425]: I0217 15:27:45.508311 26425 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7c393109-8c98-4a73-be1a-608038e5d094-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Feb 17 15:27:45.508483 master-0 kubenswrapper[26425]: I0217 15:27:45.508320 26425 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:27:45.508483 master-0 kubenswrapper[26425]: I0217 15:27:45.508331 26425 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7c393109-8c98-4a73-be1a-608038e5d094-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Feb 17 15:27:45.509872 master-0 kubenswrapper[26425]: I0217 15:27:45.509834 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-f94977f65-sgf5z"] Feb 17 15:27:46.407005 master-0 kubenswrapper[26425]: I0217 15:27:46.406954 26425 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="7c393109-8c98-4a73-be1a-608038e5d094" path="/var/lib/kubelet/pods/7c393109-8c98-4a73-be1a-608038e5d094/volumes" Feb 17 15:27:48.604006 master-0 kubenswrapper[26425]: I0217 15:27:48.602919 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq" Feb 17 15:27:48.604006 master-0 kubenswrapper[26425]: I0217 15:27:48.603023 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq" Feb 17 15:28:00.358873 master-0 kubenswrapper[26425]: I0217 15:28:00.358686 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:28:00.358873 master-0 kubenswrapper[26425]: I0217 15:28:00.358835 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:28:00.358873 master-0 kubenswrapper[26425]: I0217 15:28:00.358875 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:28:00.360249 master-0 kubenswrapper[26425]: E0217 15:28:00.358910 26425 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret 
"alertmanager-main-tls" not found Feb 17 15:28:00.360249 master-0 kubenswrapper[26425]: I0217 15:28:00.358976 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:28:00.360249 master-0 kubenswrapper[26425]: E0217 15:28:00.359044 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:28:32.359011576 +0000 UTC m=+774.250735434 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : secret "alertmanager-main-tls" not found Feb 17 15:28:00.360249 master-0 kubenswrapper[26425]: E0217 15:28:00.359076 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:28:32.359062458 +0000 UTC m=+774.250786316 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : configmap references non-existent config key: ca-bundle.crt Feb 17 15:28:00.360249 master-0 kubenswrapper[26425]: I0217 15:28:00.359677 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:28:00.360249 master-0 kubenswrapper[26425]: E0217 15:28:00.359771 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 17 15:28:00.360249 master-0 kubenswrapper[26425]: E0217 15:28:00.359827 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:28:32.359794596 +0000 UTC m=+774.251518424 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : configmap references non-existent config key: ca-bundle.crt Feb 17 15:28:00.360249 master-0 kubenswrapper[26425]: E0217 15:28:00.359837 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Feb 17 15:28:00.360249 master-0 kubenswrapper[26425]: E0217 15:28:00.359868 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:28:32.359860337 +0000 UTC m=+774.251584155 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 17 15:28:00.360249 master-0 kubenswrapper[26425]: E0217 15:28:00.359987 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:28:32.359948179 +0000 UTC m=+774.251672037 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-tls" not found Feb 17 15:28:02.593992 master-0 kubenswrapper[26425]: I0217 15:28:02.593898 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:28:02.594904 master-0 kubenswrapper[26425]: E0217 15:28:02.594250 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:28:02.594904 master-0 kubenswrapper[26425]: E0217 15:28:02.594334 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:28:02.594904 master-0 kubenswrapper[26425]: E0217 15:28:02.594478 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:30:04.594416658 +0000 UTC m=+866.486140486 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:28:08.616018 master-0 kubenswrapper[26425]: I0217 15:28:08.615933 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq" Feb 17 15:28:08.649916 master-0 kubenswrapper[26425]: I0217 15:28:08.649831 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-75c4d5b7f-t6zcq" Feb 17 15:28:32.457995 master-0 kubenswrapper[26425]: I0217 15:28:32.457894 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:28:32.459602 master-0 kubenswrapper[26425]: I0217 15:28:32.458055 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:28:32.459602 master-0 kubenswrapper[26425]: I0217 15:28:32.458135 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 
15:28:32.459602 master-0 kubenswrapper[26425]: E0217 15:28:32.458155 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 17 15:28:32.459602 master-0 kubenswrapper[26425]: I0217 15:28:32.458242 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:28:32.459602 master-0 kubenswrapper[26425]: E0217 15:28:32.458247 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Feb 17 15:28:32.459602 master-0 kubenswrapper[26425]: E0217 15:28:32.458312 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:29:36.458280572 +0000 UTC m=+838.350004430 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 17 15:28:32.459602 master-0 kubenswrapper[26425]: I0217 15:28:32.458365 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:28:32.459602 master-0 kubenswrapper[26425]: E0217 15:28:32.458400 26425 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Feb 17 15:28:32.459602 master-0 kubenswrapper[26425]: E0217 15:28:32.458529 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:29:36.458498937 +0000 UTC m=+838.350222795 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : secret "alertmanager-main-tls" not found Feb 17 15:28:32.459602 master-0 kubenswrapper[26425]: E0217 15:28:32.458683 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:29:36.45864244 +0000 UTC m=+838.350366298 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:28:32.459602 master-0 kubenswrapper[26425]: E0217 15:28:32.458745 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:29:36.458726443 +0000 UTC m=+838.350450301 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-tls" not found
Feb 17 15:28:32.459602 master-0 kubenswrapper[26425]: E0217 15:28:32.458771 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:29:36.458758264 +0000 UTC m=+838.350482122 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:28:47.004225 master-0 kubenswrapper[26425]: I0217 15:28:47.004103 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/4.log"
Feb 17 15:28:47.005220 master-0 kubenswrapper[26425]: I0217 15:28:47.005077 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/1.log"
Feb 17 15:28:47.007108 master-0 kubenswrapper[26425]: I0217 15:28:47.007046 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/0.log"
Feb 17 15:28:47.007935 master-0 kubenswrapper[26425]: I0217 15:28:47.007884 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log"
Feb 17 15:28:47.008052 master-0 kubenswrapper[26425]: I0217 15:28:47.007965 26425 generic.go:334] "Generic (PLEG): container finished" podID="27fd92ef556705625a2e4f1011322252" containerID="fcc22a077c839b880ed50e8a8777440b208baa2388423438583030d85d86b3c2" exitCode=1
Feb 17 15:28:47.008150 master-0 kubenswrapper[26425]: I0217 15:28:47.008032 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerDied","Data":"fcc22a077c839b880ed50e8a8777440b208baa2388423438583030d85d86b3c2"}
Feb 17 15:28:47.008150 master-0 kubenswrapper[26425]: I0217 15:28:47.008144 26425 scope.go:117] "RemoveContainer" containerID="586cd7bd6a1810c0723f91d86622f61df00ac6288e65656c44c07b725975aa6c"
Feb 17 15:28:47.008975 master-0 kubenswrapper[26425]: I0217 15:28:47.008925 26425 scope.go:117] "RemoveContainer" containerID="fcc22a077c839b880ed50e8a8777440b208baa2388423438583030d85d86b3c2"
Feb 17 15:28:48.018707 master-0 kubenswrapper[26425]: I0217 15:28:48.018580 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/4.log"
Feb 17 15:28:48.019609 master-0 kubenswrapper[26425]: I0217 15:28:48.019571 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/1.log"
Feb 17 15:28:48.021664 master-0 kubenswrapper[26425]: I0217 15:28:48.021565 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log"
Feb 17 15:28:48.021951 master-0 kubenswrapper[26425]: I0217 15:28:48.021675 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"27fd92ef556705625a2e4f1011322252","Type":"ContainerStarted","Data":"20262a51816e5646882d8f669782a57ee58ac55b3de280aac80c9b4ad5544a09"}
Feb 17 15:29:31.383610 master-0 kubenswrapper[26425]: E0217 15:29:31.376445 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-trusted-ca-bundle secret-prometheus-k8s-thanos-sidecar-tls secret-prometheus-k8s-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-k8s-0" podUID="7284bcca-864c-40df-b7dc-9aecf470697a"
Feb 17 15:29:31.397131 master-0 kubenswrapper[26425]: E0217 15:29:31.397057 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[alertmanager-trusted-ca-bundle secret-alertmanager-main-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/alertmanager-main-0" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922"
Feb 17 15:29:31.428655 master-0 kubenswrapper[26425]: I0217 15:29:31.428573 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:29:31.428858 master-0 kubenswrapper[26425]: I0217 15:29:31.428653 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:29:32.972748 master-0 kubenswrapper[26425]: I0217 15:29:32.972666 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-knz2d"]
Feb 17 15:29:32.973637 master-0 kubenswrapper[26425]: E0217 15:29:32.973127 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c393109-8c98-4a73-be1a-608038e5d094" containerName="metrics-server"
Feb 17 15:29:32.973637 master-0 kubenswrapper[26425]: I0217 15:29:32.973152 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c393109-8c98-4a73-be1a-608038e5d094" containerName="metrics-server"
Feb 17 15:29:32.973637 master-0 kubenswrapper[26425]: I0217 15:29:32.973393 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c393109-8c98-4a73-be1a-608038e5d094" containerName="metrics-server"
Feb 17 15:29:32.974191 master-0 kubenswrapper[26425]: I0217 15:29:32.974136 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-knz2d"
Feb 17 15:29:32.977290 master-0 kubenswrapper[26425]: I0217 15:29:32.977205 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 17 15:29:32.977507 master-0 kubenswrapper[26425]: I0217 15:29:32.977256 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-v49sf"
Feb 17 15:29:33.013646 master-0 kubenswrapper[26425]: I0217 15:29:33.013551 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/17d18a53-ef0a-43ed-86ef-d9dad274311f-host\") pod \"node-ca-knz2d\" (UID: \"17d18a53-ef0a-43ed-86ef-d9dad274311f\") " pod="openshift-image-registry/node-ca-knz2d"
Feb 17 15:29:33.013646 master-0 kubenswrapper[26425]: I0217 15:29:33.013637 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmjkb\" (UniqueName: \"kubernetes.io/projected/17d18a53-ef0a-43ed-86ef-d9dad274311f-kube-api-access-jmjkb\") pod \"node-ca-knz2d\" (UID: \"17d18a53-ef0a-43ed-86ef-d9dad274311f\") " pod="openshift-image-registry/node-ca-knz2d"
Feb 17 15:29:33.014307 master-0 kubenswrapper[26425]: I0217 15:29:33.014228 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/17d18a53-ef0a-43ed-86ef-d9dad274311f-serviceca\") pod \"node-ca-knz2d\" (UID: \"17d18a53-ef0a-43ed-86ef-d9dad274311f\") " pod="openshift-image-registry/node-ca-knz2d"
Feb 17 15:29:33.115423 master-0 kubenswrapper[26425]: I0217 15:29:33.115325 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/17d18a53-ef0a-43ed-86ef-d9dad274311f-host\") pod \"node-ca-knz2d\" (UID: \"17d18a53-ef0a-43ed-86ef-d9dad274311f\") " pod="openshift-image-registry/node-ca-knz2d"
Feb 17 15:29:33.115708 master-0 kubenswrapper[26425]: I0217 15:29:33.115497 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmjkb\" (UniqueName: \"kubernetes.io/projected/17d18a53-ef0a-43ed-86ef-d9dad274311f-kube-api-access-jmjkb\") pod \"node-ca-knz2d\" (UID: \"17d18a53-ef0a-43ed-86ef-d9dad274311f\") " pod="openshift-image-registry/node-ca-knz2d"
Feb 17 15:29:33.115937 master-0 kubenswrapper[26425]: I0217 15:29:33.115869 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/17d18a53-ef0a-43ed-86ef-d9dad274311f-host\") pod \"node-ca-knz2d\" (UID: \"17d18a53-ef0a-43ed-86ef-d9dad274311f\") " pod="openshift-image-registry/node-ca-knz2d"
Feb 17 15:29:33.116433 master-0 kubenswrapper[26425]: I0217 15:29:33.116360 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/17d18a53-ef0a-43ed-86ef-d9dad274311f-serviceca\") pod \"node-ca-knz2d\" (UID: \"17d18a53-ef0a-43ed-86ef-d9dad274311f\") " pod="openshift-image-registry/node-ca-knz2d"
Feb 17 15:29:33.117674 master-0 kubenswrapper[26425]: I0217 15:29:33.117622 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/17d18a53-ef0a-43ed-86ef-d9dad274311f-serviceca\") pod \"node-ca-knz2d\" (UID: \"17d18a53-ef0a-43ed-86ef-d9dad274311f\") " pod="openshift-image-registry/node-ca-knz2d"
Feb 17 15:29:33.137231 master-0 kubenswrapper[26425]: I0217 15:29:33.137158 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmjkb\" (UniqueName: \"kubernetes.io/projected/17d18a53-ef0a-43ed-86ef-d9dad274311f-kube-api-access-jmjkb\") pod \"node-ca-knz2d\" (UID: \"17d18a53-ef0a-43ed-86ef-d9dad274311f\") " pod="openshift-image-registry/node-ca-knz2d"
Feb 17 15:29:33.314660 master-0 kubenswrapper[26425]: I0217 15:29:33.314425 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-knz2d"
Feb 17 15:29:33.450332 master-0 kubenswrapper[26425]: I0217 15:29:33.450205 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-knz2d" event={"ID":"17d18a53-ef0a-43ed-86ef-d9dad274311f","Type":"ContainerStarted","Data":"4f22609bc5c4ef4fe3cbdcf013b1b6b23ade344114698fbe21cc0992c6fbd534"}
Feb 17 15:29:36.479766 master-0 kubenswrapper[26425]: I0217 15:29:36.479661 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-knz2d" event={"ID":"17d18a53-ef0a-43ed-86ef-d9dad274311f","Type":"ContainerStarted","Data":"951420a3a386673935d852333d528f406569b7061f3dd1ed34263f3db028cb38"}
Feb 17 15:29:36.504443 master-0 kubenswrapper[26425]: I0217 15:29:36.504363 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:29:36.504670 master-0 kubenswrapper[26425]: I0217 15:29:36.504600 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:29:36.504763 master-0 kubenswrapper[26425]: I0217 15:29:36.504715 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:29:36.504950 master-0 kubenswrapper[26425]: E0217 15:29:36.504894 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found
Feb 17 15:29:36.505059 master-0 kubenswrapper[26425]: E0217 15:29:36.505021 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:31:38.504989629 +0000 UTC m=+960.396713487 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-thanos-sidecar-tls" not found
Feb 17 15:29:36.505117 master-0 kubenswrapper[26425]: E0217 15:29:36.505049 26425 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found
Feb 17 15:29:36.505183 master-0 kubenswrapper[26425]: E0217 15:29:36.505159 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:31:38.505129892 +0000 UTC m=+960.396853750 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : secret "prometheus-k8s-tls" not found
Feb 17 15:29:36.506807 master-0 kubenswrapper[26425]: I0217 15:29:36.505414 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:29:36.506807 master-0 kubenswrapper[26425]: I0217 15:29:36.505548 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:29:36.506807 master-0 kubenswrapper[26425]: E0217 15:29:36.505431 26425 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Feb 17 15:29:36.506807 master-0 kubenswrapper[26425]: E0217 15:29:36.505602 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle podName:7284bcca-864c-40df-b7dc-9aecf470697a nodeName:}" failed. No retries permitted until 2026-02-17 15:31:38.505575523 +0000 UTC m=+960.397299381 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:29:36.506807 master-0 kubenswrapper[26425]: E0217 15:29:36.505740 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:31:38.505713836 +0000 UTC m=+960.397437704 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : secret "alertmanager-main-tls" not found
Feb 17 15:29:36.506807 master-0 kubenswrapper[26425]: E0217 15:29:36.505791 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle podName:1115aa66-7b5c-4863-aa91-b28baff7e922 nodeName:}" failed. No retries permitted until 2026-02-17 15:31:38.505772168 +0000 UTC m=+960.397496106 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:29:36.506807 master-0 kubenswrapper[26425]: I0217 15:29:36.506042 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-knz2d" podStartSLOduration=2.551552776 podStartE2EDuration="4.506020224s" podCreationTimestamp="2026-02-17 15:29:32 +0000 UTC" firstStartedPulling="2026-02-17 15:29:33.357327897 +0000 UTC m=+835.249051745" lastFinishedPulling="2026-02-17 15:29:35.311795335 +0000 UTC m=+837.203519193" observedRunningTime="2026-02-17 15:29:36.502716755 +0000 UTC m=+838.394440643" watchObservedRunningTime="2026-02-17 15:29:36.506020224 +0000 UTC m=+838.397744072"
Feb 17 15:29:52.326495 master-0 kubenswrapper[26425]: I0217 15:29:52.326233 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-7777d5cc66-w62mx"]
Feb 17 15:29:52.327381 master-0 kubenswrapper[26425]: I0217 15:29:52.327182 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:52.333563 master-0 kubenswrapper[26425]: I0217 15:29:52.332979 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-8l4dg"
Feb 17 15:29:52.333563 master-0 kubenswrapper[26425]: I0217 15:29:52.333339 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 17 15:29:52.333937 master-0 kubenswrapper[26425]: I0217 15:29:52.333869 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 17 15:29:52.334291 master-0 kubenswrapper[26425]: I0217 15:29:52.334245 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 17 15:29:52.334610 master-0 kubenswrapper[26425]: I0217 15:29:52.334567 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 17 15:29:52.334855 master-0 kubenswrapper[26425]: I0217 15:29:52.334809 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 17 15:29:52.346017 master-0 kubenswrapper[26425]: I0217 15:29:52.345547 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-7777d5cc66-w62mx"]
Feb 17 15:29:52.508746 master-0 kubenswrapper[26425]: I0217 15:29:52.508649 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhfjn\" (UniqueName: \"kubernetes.io/projected/505fcdf1-f364-45e5-8583-edf94579d9b2-kube-api-access-lhfjn\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:52.509153 master-0 kubenswrapper[26425]: I0217 15:29:52.509096 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:52.509219 master-0 kubenswrapper[26425]: I0217 15:29:52.509181 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/505fcdf1-f364-45e5-8583-edf94579d9b2-serving-cert\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:52.509266 master-0 kubenswrapper[26425]: I0217 15:29:52.509224 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-config\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:52.611770 master-0 kubenswrapper[26425]: I0217 15:29:52.610627 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:52.611770 master-0 kubenswrapper[26425]: I0217 15:29:52.610708 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/505fcdf1-f364-45e5-8583-edf94579d9b2-serving-cert\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:52.611770 master-0 kubenswrapper[26425]: I0217 15:29:52.610758 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-config\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:52.611770 master-0 kubenswrapper[26425]: I0217 15:29:52.610874 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhfjn\" (UniqueName: \"kubernetes.io/projected/505fcdf1-f364-45e5-8583-edf94579d9b2-kube-api-access-lhfjn\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:52.611770 master-0 kubenswrapper[26425]: E0217 15:29:52.611119 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca podName:505fcdf1-f364-45e5-8583-edf94579d9b2 nodeName:}" failed. No retries permitted until 2026-02-17 15:29:53.111070563 +0000 UTC m=+855.002794421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca") pod "console-operator-7777d5cc66-w62mx" (UID: "505fcdf1-f364-45e5-8583-edf94579d9b2") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:29:52.612643 master-0 kubenswrapper[26425]: I0217 15:29:52.612592 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-config\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:52.622550 master-0 kubenswrapper[26425]: I0217 15:29:52.617766 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/505fcdf1-f364-45e5-8583-edf94579d9b2-serving-cert\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:52.645180 master-0 kubenswrapper[26425]: I0217 15:29:52.645106 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhfjn\" (UniqueName: \"kubernetes.io/projected/505fcdf1-f364-45e5-8583-edf94579d9b2-kube-api-access-lhfjn\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:53.118833 master-0 kubenswrapper[26425]: I0217 15:29:53.118753 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:53.119248 master-0 kubenswrapper[26425]: E0217 15:29:53.119181 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca podName:505fcdf1-f364-45e5-8583-edf94579d9b2 nodeName:}" failed. No retries permitted until 2026-02-17 15:29:54.119134612 +0000 UTC m=+856.010858540 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca") pod "console-operator-7777d5cc66-w62mx" (UID: "505fcdf1-f364-45e5-8583-edf94579d9b2") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:29:54.133346 master-0 kubenswrapper[26425]: I0217 15:29:54.133280 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:54.134211 master-0 kubenswrapper[26425]: E0217 15:29:54.133650 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca podName:505fcdf1-f364-45e5-8583-edf94579d9b2 nodeName:}" failed. No retries permitted until 2026-02-17 15:29:56.133630982 +0000 UTC m=+858.025354800 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca") pod "console-operator-7777d5cc66-w62mx" (UID: "505fcdf1-f364-45e5-8583-edf94579d9b2") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:29:56.164881 master-0 kubenswrapper[26425]: I0217 15:29:56.164822 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:29:56.165873 master-0 kubenswrapper[26425]: E0217 15:29:56.165128 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca podName:505fcdf1-f364-45e5-8583-edf94579d9b2 nodeName:}" failed. No retries permitted until 2026-02-17 15:30:00.16509225 +0000 UTC m=+862.056816098 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca") pod "console-operator-7777d5cc66-w62mx" (UID: "505fcdf1-f364-45e5-8583-edf94579d9b2") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:30:00.222926 master-0 kubenswrapper[26425]: I0217 15:30:00.222818 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"]
Feb 17 15:30:00.224065 master-0 kubenswrapper[26425]: I0217 15:30:00.223820 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"
Feb 17 15:30:00.227545 master-0 kubenswrapper[26425]: I0217 15:30:00.225615 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 17 15:30:00.227545 master-0 kubenswrapper[26425]: I0217 15:30:00.225863 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-fqc4f"
Feb 17 15:30:00.234785 master-0 kubenswrapper[26425]: I0217 15:30:00.234684 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a230c99f-570a-4822-ad0c-8f8052fc667f-secret-volume\") pod \"collect-profiles-29522370-xqzfs\" (UID: \"a230c99f-570a-4822-ad0c-8f8052fc667f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"
Feb 17 15:30:00.235001 master-0 kubenswrapper[26425]: I0217 15:30:00.234862 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:30:00.235001 master-0 kubenswrapper[26425]: I0217 15:30:00.234899 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm55l\" (UniqueName: \"kubernetes.io/projected/a230c99f-570a-4822-ad0c-8f8052fc667f-kube-api-access-wm55l\") pod \"collect-profiles-29522370-xqzfs\" (UID: \"a230c99f-570a-4822-ad0c-8f8052fc667f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"
Feb 17 15:30:00.235097 master-0 kubenswrapper[26425]: I0217 15:30:00.235015 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a230c99f-570a-4822-ad0c-8f8052fc667f-config-volume\") pod \"collect-profiles-29522370-xqzfs\" (UID: \"a230c99f-570a-4822-ad0c-8f8052fc667f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"
Feb 17 15:30:00.235097 master-0 kubenswrapper[26425]: E0217 15:30:00.235025 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca podName:505fcdf1-f364-45e5-8583-edf94579d9b2 nodeName:}" failed. No retries permitted until 2026-02-17 15:30:08.235005854 +0000 UTC m=+870.126729672 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca") pod "console-operator-7777d5cc66-w62mx" (UID: "505fcdf1-f364-45e5-8583-edf94579d9b2") : configmap references non-existent config key: ca-bundle.crt
Feb 17 15:30:00.236643 master-0 kubenswrapper[26425]: I0217 15:30:00.236576 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"]
Feb 17 15:30:00.336612 master-0 kubenswrapper[26425]: I0217 15:30:00.336512 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a230c99f-570a-4822-ad0c-8f8052fc667f-secret-volume\") pod \"collect-profiles-29522370-xqzfs\" (UID: \"a230c99f-570a-4822-ad0c-8f8052fc667f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"
Feb 17 15:30:00.337364 master-0 kubenswrapper[26425]: I0217 15:30:00.337300 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm55l\" (UniqueName: \"kubernetes.io/projected/a230c99f-570a-4822-ad0c-8f8052fc667f-kube-api-access-wm55l\") pod \"collect-profiles-29522370-xqzfs\" (UID: \"a230c99f-570a-4822-ad0c-8f8052fc667f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"
Feb 17 15:30:00.337502 master-0 kubenswrapper[26425]: I0217 15:30:00.337442 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a230c99f-570a-4822-ad0c-8f8052fc667f-config-volume\") pod \"collect-profiles-29522370-xqzfs\" (UID: \"a230c99f-570a-4822-ad0c-8f8052fc667f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"
Feb 17 15:30:00.338403 master-0 kubenswrapper[26425]: I0217 15:30:00.338350 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a230c99f-570a-4822-ad0c-8f8052fc667f-config-volume\") pod \"collect-profiles-29522370-xqzfs\" (UID: \"a230c99f-570a-4822-ad0c-8f8052fc667f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"
Feb 17 15:30:00.340017 master-0 kubenswrapper[26425]: I0217 15:30:00.339969 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a230c99f-570a-4822-ad0c-8f8052fc667f-secret-volume\") pod \"collect-profiles-29522370-xqzfs\" (UID: \"a230c99f-570a-4822-ad0c-8f8052fc667f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"
Feb 17 15:30:00.360255 master-0 kubenswrapper[26425]: I0217 15:30:00.359825 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm55l\" (UniqueName: \"kubernetes.io/projected/a230c99f-570a-4822-ad0c-8f8052fc667f-kube-api-access-wm55l\") pod \"collect-profiles-29522370-xqzfs\" (UID: \"a230c99f-570a-4822-ad0c-8f8052fc667f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"
Feb 17 15:30:00.555882 master-0 kubenswrapper[26425]: I0217 15:30:00.555725 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"
Feb 17 15:30:00.974792 master-0 kubenswrapper[26425]: I0217 15:30:00.974729 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"]
Feb 17 15:30:00.977317 master-0 kubenswrapper[26425]: W0217 15:30:00.977212 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda230c99f_570a_4822_ad0c_8f8052fc667f.slice/crio-891c4dcbd088c78f2c17120405bfb40f4d544d893a0eef0d5d5c35ee609ab3d5 WatchSource:0}: Error finding container 891c4dcbd088c78f2c17120405bfb40f4d544d893a0eef0d5d5c35ee609ab3d5: Status 404 returned error can't find the container with id 891c4dcbd088c78f2c17120405bfb40f4d544d893a0eef0d5d5c35ee609ab3d5
Feb 17 15:30:01.701633 master-0 kubenswrapper[26425]: I0217 15:30:01.701548 26425 generic.go:334] "Generic (PLEG): container finished" podID="a230c99f-570a-4822-ad0c-8f8052fc667f" containerID="531d85836ed5dab3d5cfeea1a836ccd1b3b6e1d0b13e903732c2ea5c862593f9" exitCode=0
Feb 17 15:30:01.701633 master-0 kubenswrapper[26425]: I0217 15:30:01.701606 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs" event={"ID":"a230c99f-570a-4822-ad0c-8f8052fc667f","Type":"ContainerDied","Data":"531d85836ed5dab3d5cfeea1a836ccd1b3b6e1d0b13e903732c2ea5c862593f9"}
Feb 17 15:30:01.701633 master-0 kubenswrapper[26425]: I0217 15:30:01.701649 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs" event={"ID":"a230c99f-570a-4822-ad0c-8f8052fc667f","Type":"ContainerStarted","Data":"891c4dcbd088c78f2c17120405bfb40f4d544d893a0eef0d5d5c35ee609ab3d5"}
Feb 17 15:30:03.115223 master-0 kubenswrapper[26425]: I0217 15:30:03.115141 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"
Feb 17 15:30:03.186119 master-0 kubenswrapper[26425]: I0217 15:30:03.186022 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a230c99f-570a-4822-ad0c-8f8052fc667f-secret-volume\") pod \"a230c99f-570a-4822-ad0c-8f8052fc667f\" (UID: \"a230c99f-570a-4822-ad0c-8f8052fc667f\") "
Feb 17 15:30:03.186119 master-0 kubenswrapper[26425]: I0217 15:30:03.186072 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a230c99f-570a-4822-ad0c-8f8052fc667f-config-volume\") pod \"a230c99f-570a-4822-ad0c-8f8052fc667f\" (UID: \"a230c99f-570a-4822-ad0c-8f8052fc667f\") "
Feb 17 15:30:03.186119 master-0 kubenswrapper[26425]: I0217 15:30:03.186110 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm55l\" (UniqueName: \"kubernetes.io/projected/a230c99f-570a-4822-ad0c-8f8052fc667f-kube-api-access-wm55l\") pod \"a230c99f-570a-4822-ad0c-8f8052fc667f\" (UID: \"a230c99f-570a-4822-ad0c-8f8052fc667f\") "
Feb 17 15:30:03.187630 master-0 kubenswrapper[26425]: I0217 15:30:03.187574 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a230c99f-570a-4822-ad0c-8f8052fc667f-config-volume" (OuterVolumeSpecName: "config-volume") pod "a230c99f-570a-4822-ad0c-8f8052fc667f" (UID: "a230c99f-570a-4822-ad0c-8f8052fc667f"). InnerVolumeSpecName "config-volume".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:30:03.190410 master-0 kubenswrapper[26425]: I0217 15:30:03.190330 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a230c99f-570a-4822-ad0c-8f8052fc667f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a230c99f-570a-4822-ad0c-8f8052fc667f" (UID: "a230c99f-570a-4822-ad0c-8f8052fc667f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:30:03.192406 master-0 kubenswrapper[26425]: I0217 15:30:03.192336 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a230c99f-570a-4822-ad0c-8f8052fc667f-kube-api-access-wm55l" (OuterVolumeSpecName: "kube-api-access-wm55l") pod "a230c99f-570a-4822-ad0c-8f8052fc667f" (UID: "a230c99f-570a-4822-ad0c-8f8052fc667f"). InnerVolumeSpecName "kube-api-access-wm55l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:30:03.288599 master-0 kubenswrapper[26425]: I0217 15:30:03.288528 26425 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a230c99f-570a-4822-ad0c-8f8052fc667f-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 17 15:30:03.288599 master-0 kubenswrapper[26425]: I0217 15:30:03.288578 26425 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a230c99f-570a-4822-ad0c-8f8052fc667f-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 17 15:30:03.288599 master-0 kubenswrapper[26425]: I0217 15:30:03.288590 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm55l\" (UniqueName: \"kubernetes.io/projected/a230c99f-570a-4822-ad0c-8f8052fc667f-kube-api-access-wm55l\") on node \"master-0\" DevicePath \"\"" Feb 17 15:30:03.721944 master-0 kubenswrapper[26425]: I0217 15:30:03.721893 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs" event={"ID":"a230c99f-570a-4822-ad0c-8f8052fc667f","Type":"ContainerDied","Data":"891c4dcbd088c78f2c17120405bfb40f4d544d893a0eef0d5d5c35ee609ab3d5"} Feb 17 15:30:03.721944 master-0 kubenswrapper[26425]: I0217 15:30:03.721943 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="891c4dcbd088c78f2c17120405bfb40f4d544d893a0eef0d5d5c35ee609ab3d5" Feb 17 15:30:03.722190 master-0 kubenswrapper[26425]: I0217 15:30:03.722022 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs" Feb 17 15:30:04.611225 master-0 kubenswrapper[26425]: I0217 15:30:04.611152 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:30:04.611766 master-0 kubenswrapper[26425]: E0217 15:30:04.611333 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:30:04.611766 master-0 kubenswrapper[26425]: E0217 15:30:04.611358 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:30:04.611766 master-0 kubenswrapper[26425]: E0217 15:30:04.611405 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. 
No retries permitted until 2026-02-17 15:32:06.611391173 +0000 UTC m=+988.503114991 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:30:08.270534 master-0 kubenswrapper[26425]: I0217 15:30:08.270427 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx" Feb 17 15:30:08.271695 master-0 kubenswrapper[26425]: E0217 15:30:08.270688 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca podName:505fcdf1-f364-45e5-8583-edf94579d9b2 nodeName:}" failed. No retries permitted until 2026-02-17 15:30:24.27065956 +0000 UTC m=+886.162383408 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca") pod "console-operator-7777d5cc66-w62mx" (UID: "505fcdf1-f364-45e5-8583-edf94579d9b2") : configmap references non-existent config key: ca-bundle.crt Feb 17 15:30:19.237227 master-0 kubenswrapper[26425]: I0217 15:30:19.237153 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-c5mq6"] Feb 17 15:30:19.238627 master-0 kubenswrapper[26425]: E0217 15:30:19.237428 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a230c99f-570a-4822-ad0c-8f8052fc667f" containerName="collect-profiles" Feb 17 15:30:19.238627 master-0 kubenswrapper[26425]: I0217 15:30:19.237441 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a230c99f-570a-4822-ad0c-8f8052fc667f" containerName="collect-profiles" Feb 17 15:30:19.238627 master-0 kubenswrapper[26425]: I0217 15:30:19.237599 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="a230c99f-570a-4822-ad0c-8f8052fc667f" containerName="collect-profiles" Feb 17 15:30:19.238627 master-0 kubenswrapper[26425]: I0217 15:30:19.238027 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:19.241007 master-0 kubenswrapper[26425]: I0217 15:30:19.240950 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-tsxrr" Feb 17 15:30:19.241548 master-0 kubenswrapper[26425]: I0217 15:30:19.241451 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 17 15:30:19.252666 master-0 kubenswrapper[26425]: I0217 15:30:19.252605 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1e3c2cc3-5eaa-4447-acca-07c0de21af73-ready\") pod \"cni-sysctl-allowlist-ds-c5mq6\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:19.252762 master-0 kubenswrapper[26425]: I0217 15:30:19.252746 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hphr8\" (UniqueName: \"kubernetes.io/projected/1e3c2cc3-5eaa-4447-acca-07c0de21af73-kube-api-access-hphr8\") pod \"cni-sysctl-allowlist-ds-c5mq6\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:19.253153 master-0 kubenswrapper[26425]: I0217 15:30:19.253091 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e3c2cc3-5eaa-4447-acca-07c0de21af73-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-c5mq6\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:19.253222 master-0 kubenswrapper[26425]: I0217 15:30:19.253201 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/1e3c2cc3-5eaa-4447-acca-07c0de21af73-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-c5mq6\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:19.354744 master-0 kubenswrapper[26425]: I0217 15:30:19.354688 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e3c2cc3-5eaa-4447-acca-07c0de21af73-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-c5mq6\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:19.354744 master-0 kubenswrapper[26425]: I0217 15:30:19.354757 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1e3c2cc3-5eaa-4447-acca-07c0de21af73-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-c5mq6\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:19.355021 master-0 kubenswrapper[26425]: I0217 15:30:19.354792 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1e3c2cc3-5eaa-4447-acca-07c0de21af73-ready\") pod \"cni-sysctl-allowlist-ds-c5mq6\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:19.355021 master-0 kubenswrapper[26425]: I0217 15:30:19.354898 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hphr8\" (UniqueName: \"kubernetes.io/projected/1e3c2cc3-5eaa-4447-acca-07c0de21af73-kube-api-access-hphr8\") pod \"cni-sysctl-allowlist-ds-c5mq6\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:19.355387 master-0 kubenswrapper[26425]: I0217 15:30:19.355348 26425 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e3c2cc3-5eaa-4447-acca-07c0de21af73-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-c5mq6\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:19.356078 master-0 kubenswrapper[26425]: I0217 15:30:19.356056 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1e3c2cc3-5eaa-4447-acca-07c0de21af73-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-c5mq6\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:19.357827 master-0 kubenswrapper[26425]: I0217 15:30:19.357789 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1e3c2cc3-5eaa-4447-acca-07c0de21af73-ready\") pod \"cni-sysctl-allowlist-ds-c5mq6\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:19.376142 master-0 kubenswrapper[26425]: I0217 15:30:19.376057 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hphr8\" (UniqueName: \"kubernetes.io/projected/1e3c2cc3-5eaa-4447-acca-07c0de21af73-kube-api-access-hphr8\") pod \"cni-sysctl-allowlist-ds-c5mq6\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:19.560446 master-0 kubenswrapper[26425]: I0217 15:30:19.560321 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:19.584856 master-0 kubenswrapper[26425]: W0217 15:30:19.584796 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e3c2cc3_5eaa_4447_acca_07c0de21af73.slice/crio-65492e07e25c4dc4a7aa8228e765136a20dce443812f448d9ed6a1c5c1768cf6 WatchSource:0}: Error finding container 65492e07e25c4dc4a7aa8228e765136a20dce443812f448d9ed6a1c5c1768cf6: Status 404 returned error can't find the container with id 65492e07e25c4dc4a7aa8228e765136a20dce443812f448d9ed6a1c5c1768cf6 Feb 17 15:30:19.870525 master-0 kubenswrapper[26425]: I0217 15:30:19.870417 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" event={"ID":"1e3c2cc3-5eaa-4447-acca-07c0de21af73","Type":"ContainerStarted","Data":"65492e07e25c4dc4a7aa8228e765136a20dce443812f448d9ed6a1c5c1768cf6"} Feb 17 15:30:20.386002 master-0 kubenswrapper[26425]: I0217 15:30:20.385913 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6f86647c68-r4plh"] Feb 17 15:30:20.388392 master-0 kubenswrapper[26425]: I0217 15:30:20.388330 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6f86647c68-r4plh" Feb 17 15:30:20.391169 master-0 kubenswrapper[26425]: I0217 15:30:20.391117 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 17 15:30:20.391362 master-0 kubenswrapper[26425]: I0217 15:30:20.391135 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-qjmzn" Feb 17 15:30:20.414779 master-0 kubenswrapper[26425]: I0217 15:30:20.414701 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6f86647c68-r4plh"] Feb 17 15:30:20.480344 master-0 kubenswrapper[26425]: I0217 15:30:20.475132 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/36481ec3-4dcb-4d1a-a510-b6a61e3c23a7-monitoring-plugin-cert\") pod \"monitoring-plugin-6f86647c68-r4plh\" (UID: \"36481ec3-4dcb-4d1a-a510-b6a61e3c23a7\") " pod="openshift-monitoring/monitoring-plugin-6f86647c68-r4plh" Feb 17 15:30:20.576316 master-0 kubenswrapper[26425]: I0217 15:30:20.576259 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/36481ec3-4dcb-4d1a-a510-b6a61e3c23a7-monitoring-plugin-cert\") pod \"monitoring-plugin-6f86647c68-r4plh\" (UID: \"36481ec3-4dcb-4d1a-a510-b6a61e3c23a7\") " pod="openshift-monitoring/monitoring-plugin-6f86647c68-r4plh" Feb 17 15:30:20.581016 master-0 kubenswrapper[26425]: I0217 15:30:20.580976 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/36481ec3-4dcb-4d1a-a510-b6a61e3c23a7-monitoring-plugin-cert\") pod \"monitoring-plugin-6f86647c68-r4plh\" (UID: \"36481ec3-4dcb-4d1a-a510-b6a61e3c23a7\") " pod="openshift-monitoring/monitoring-plugin-6f86647c68-r4plh" Feb 
17 15:30:20.590266 master-0 kubenswrapper[26425]: I0217 15:30:20.590201 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg"] Feb 17 15:30:20.592157 master-0 kubenswrapper[26425]: I0217 15:30:20.592129 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.593760 master-0 kubenswrapper[26425]: I0217 15:30:20.593717 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 17 15:30:20.594231 master-0 kubenswrapper[26425]: I0217 15:30:20.594192 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 17 15:30:20.594520 master-0 kubenswrapper[26425]: I0217 15:30:20.594498 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 17 15:30:20.594973 master-0 kubenswrapper[26425]: I0217 15:30:20.594918 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-44zht" Feb 17 15:30:20.594973 master-0 kubenswrapper[26425]: I0217 15:30:20.594957 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 17 15:30:20.596289 master-0 kubenswrapper[26425]: I0217 15:30:20.596232 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 17 15:30:20.602178 master-0 kubenswrapper[26425]: I0217 15:30:20.602124 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 17 15:30:20.614026 master-0 kubenswrapper[26425]: I0217 15:30:20.613907 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg"] 
Feb 17 15:30:20.677374 master-0 kubenswrapper[26425]: I0217 15:30:20.677241 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.677374 master-0 kubenswrapper[26425]: I0217 15:30:20.677288 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.677374 master-0 kubenswrapper[26425]: I0217 15:30:20.677317 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-serving-certs-ca-bundle\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.677374 master-0 kubenswrapper[26425]: I0217 15:30:20.677369 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-secret-telemeter-client\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.677707 master-0 kubenswrapper[26425]: I0217 15:30:20.677393 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4625\" (UniqueName: \"kubernetes.io/projected/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-kube-api-access-f4625\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.677707 master-0 kubenswrapper[26425]: I0217 15:30:20.677425 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-telemeter-client-tls\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.677707 master-0 kubenswrapper[26425]: I0217 15:30:20.677440 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-metrics-client-ca\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.677707 master-0 kubenswrapper[26425]: I0217 15:30:20.677556 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-federate-client-tls\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.733551 master-0 kubenswrapper[26425]: I0217 15:30:20.733505 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6f86647c68-r4plh" Feb 17 15:30:20.778507 master-0 kubenswrapper[26425]: I0217 15:30:20.778424 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.779876 master-0 kubenswrapper[26425]: I0217 15:30:20.778673 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.779876 master-0 kubenswrapper[26425]: I0217 15:30:20.778838 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-serving-certs-ca-bundle\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.779876 master-0 kubenswrapper[26425]: I0217 15:30:20.778999 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-secret-telemeter-client\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.779876 master-0 kubenswrapper[26425]: I0217 15:30:20.779099 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4625\" (UniqueName: \"kubernetes.io/projected/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-kube-api-access-f4625\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.779876 master-0 kubenswrapper[26425]: I0217 15:30:20.779189 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-telemeter-client-tls\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.779876 master-0 kubenswrapper[26425]: I0217 15:30:20.779227 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-metrics-client-ca\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.779876 master-0 kubenswrapper[26425]: I0217 15:30:20.779634 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-federate-client-tls\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.779876 master-0 kubenswrapper[26425]: I0217 15:30:20.779757 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: 
\"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.781979 master-0 kubenswrapper[26425]: I0217 15:30:20.781947 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-metrics-client-ca\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.783549 master-0 kubenswrapper[26425]: I0217 15:30:20.783479 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.784680 master-0 kubenswrapper[26425]: I0217 15:30:20.784635 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-serving-certs-ca-bundle\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.785275 master-0 kubenswrapper[26425]: I0217 15:30:20.785246 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-federate-client-tls\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.788755 master-0 kubenswrapper[26425]: I0217 15:30:20.788709 26425 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-telemeter-client-tls\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.789003 master-0 kubenswrapper[26425]: I0217 15:30:20.788963 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-secret-telemeter-client\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.801294 master-0 kubenswrapper[26425]: I0217 15:30:20.801255 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4625\" (UniqueName: \"kubernetes.io/projected/33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6-kube-api-access-f4625\") pod \"telemeter-client-7fbdcd9689-jnzwg\" (UID: \"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6\") " pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:20.888370 master-0 kubenswrapper[26425]: I0217 15:30:20.888128 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" event={"ID":"1e3c2cc3-5eaa-4447-acca-07c0de21af73","Type":"ContainerStarted","Data":"85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48"} Feb 17 15:30:20.888555 master-0 kubenswrapper[26425]: I0217 15:30:20.888380 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:20.911578 master-0 kubenswrapper[26425]: I0217 15:30:20.911511 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" podStartSLOduration=1.911496509 podStartE2EDuration="1.911496509s" podCreationTimestamp="2026-02-17 15:30:19 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:30:20.908588149 +0000 UTC m=+882.800311977" watchObservedRunningTime="2026-02-17 15:30:20.911496509 +0000 UTC m=+882.803220347" Feb 17 15:30:20.916361 master-0 kubenswrapper[26425]: I0217 15:30:20.916320 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:20.919425 master-0 kubenswrapper[26425]: I0217 15:30:20.919382 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" Feb 17 15:30:21.164437 master-0 kubenswrapper[26425]: W0217 15:30:21.164373 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36481ec3_4dcb_4d1a_a510_b6a61e3c23a7.slice/crio-e50e7e69835f4068a4a4aaf1c820e79ae2de722a358eb0b364d8625224db7af9 WatchSource:0}: Error finding container e50e7e69835f4068a4a4aaf1c820e79ae2de722a358eb0b364d8625224db7af9: Status 404 returned error can't find the container with id e50e7e69835f4068a4a4aaf1c820e79ae2de722a358eb0b364d8625224db7af9 Feb 17 15:30:21.168511 master-0 kubenswrapper[26425]: I0217 15:30:21.168407 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6f86647c68-r4plh"] Feb 17 15:30:21.228810 master-0 kubenswrapper[26425]: I0217 15:30:21.228650 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-c5mq6"] Feb 17 15:30:21.390445 master-0 kubenswrapper[26425]: I0217 15:30:21.390363 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg"] Feb 17 15:30:21.401639 master-0 kubenswrapper[26425]: W0217 15:30:21.401566 26425 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33c5c3b7_4b5a_43d9_bbd1_68bbeec585e6.slice/crio-8db1173ff4cce691d92bef45da1af9f84eb771b16c9b9c39f10062a44df9ce5a WatchSource:0}: Error finding container 8db1173ff4cce691d92bef45da1af9f84eb771b16c9b9c39f10062a44df9ce5a: Status 404 returned error can't find the container with id 8db1173ff4cce691d92bef45da1af9f84eb771b16c9b9c39f10062a44df9ce5a Feb 17 15:30:21.898835 master-0 kubenswrapper[26425]: I0217 15:30:21.898748 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6f86647c68-r4plh" event={"ID":"36481ec3-4dcb-4d1a-a510-b6a61e3c23a7","Type":"ContainerStarted","Data":"e50e7e69835f4068a4a4aaf1c820e79ae2de722a358eb0b364d8625224db7af9"} Feb 17 15:30:21.901870 master-0 kubenswrapper[26425]: I0217 15:30:21.901334 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" event={"ID":"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6","Type":"ContainerStarted","Data":"f871bf7f38fab34aa89341fedf977c1ba856aabe1e544ab6e69ffa9bb420f7aa"} Feb 17 15:30:21.901870 master-0 kubenswrapper[26425]: I0217 15:30:21.901385 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" event={"ID":"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6","Type":"ContainerStarted","Data":"859afd12561b581111a0f5511b4f46f5a276fa131c57891b89fb1f8713dd4145"} Feb 17 15:30:21.901870 master-0 kubenswrapper[26425]: I0217 15:30:21.901396 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" event={"ID":"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6","Type":"ContainerStarted","Data":"8db1173ff4cce691d92bef45da1af9f84eb771b16c9b9c39f10062a44df9ce5a"} Feb 17 15:30:22.910294 master-0 kubenswrapper[26425]: I0217 15:30:22.910250 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6f86647c68-r4plh" 
event={"ID":"36481ec3-4dcb-4d1a-a510-b6a61e3c23a7","Type":"ContainerStarted","Data":"ffd53063046a13b95306ba5b5a6fb666578f4889cb01a47900ad30c59e8d7b7e"} Feb 17 15:30:22.911360 master-0 kubenswrapper[26425]: I0217 15:30:22.911337 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6f86647c68-r4plh" Feb 17 15:30:22.916335 master-0 kubenswrapper[26425]: I0217 15:30:22.916302 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" podUID="1e3c2cc3-5eaa-4447-acca-07c0de21af73" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48" gracePeriod=30 Feb 17 15:30:22.916833 master-0 kubenswrapper[26425]: I0217 15:30:22.916547 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" event={"ID":"33c5c3b7-4b5a-43d9-bbd1-68bbeec585e6","Type":"ContainerStarted","Data":"e11621adcc8869fbdac0a9c728204c2f53c86ad0399fa5c46a4c8f608b9e7aad"} Feb 17 15:30:22.921506 master-0 kubenswrapper[26425]: I0217 15:30:22.920691 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6f86647c68-r4plh" Feb 17 15:30:22.930594 master-0 kubenswrapper[26425]: I0217 15:30:22.930438 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6f86647c68-r4plh" podStartSLOduration=1.51734926 podStartE2EDuration="2.930409719s" podCreationTimestamp="2026-02-17 15:30:20 +0000 UTC" firstStartedPulling="2026-02-17 15:30:21.167569073 +0000 UTC m=+883.059292881" lastFinishedPulling="2026-02-17 15:30:22.580629512 +0000 UTC m=+884.472353340" observedRunningTime="2026-02-17 15:30:22.928139484 +0000 UTC m=+884.819863372" watchObservedRunningTime="2026-02-17 15:30:22.930409719 +0000 UTC m=+884.822133577" Feb 17 15:30:22.964481 
master-0 kubenswrapper[26425]: I0217 15:30:22.964369 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg" podStartSLOduration=2.964352925 podStartE2EDuration="2.964352925s" podCreationTimestamp="2026-02-17 15:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:30:22.958884033 +0000 UTC m=+884.850607931" watchObservedRunningTime="2026-02-17 15:30:22.964352925 +0000 UTC m=+884.856076743" Feb 17 15:30:24.344788 master-0 kubenswrapper[26425]: I0217 15:30:24.344724 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx" Feb 17 15:30:24.345763 master-0 kubenswrapper[26425]: E0217 15:30:24.344909 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca podName:505fcdf1-f364-45e5-8583-edf94579d9b2 nodeName:}" failed. No retries permitted until 2026-02-17 15:30:56.344888433 +0000 UTC m=+918.236612251 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca") pod "console-operator-7777d5cc66-w62mx" (UID: "505fcdf1-f364-45e5-8583-edf94579d9b2") : configmap references non-existent config key: ca-bundle.crt Feb 17 15:30:29.564369 master-0 kubenswrapper[26425]: E0217 15:30:29.564244 26425 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 15:30:29.566429 master-0 kubenswrapper[26425]: E0217 15:30:29.566309 26425 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 15:30:29.568294 master-0 kubenswrapper[26425]: E0217 15:30:29.568208 26425 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 15:30:29.568405 master-0 kubenswrapper[26425]: E0217 15:30:29.568306 26425 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" podUID="1e3c2cc3-5eaa-4447-acca-07c0de21af73" containerName="kube-multus-additional-cni-plugins" Feb 17 15:30:29.636949 master-0 kubenswrapper[26425]: I0217 15:30:29.636883 26425 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-bb4ff5654-mmnxt"] Feb 17 15:30:29.639082 master-0 kubenswrapper[26425]: I0217 15:30:29.639044 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-bb4ff5654-mmnxt" Feb 17 15:30:29.652777 master-0 kubenswrapper[26425]: I0217 15:30:29.652707 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-bb4ff5654-mmnxt"] Feb 17 15:30:29.748881 master-0 kubenswrapper[26425]: I0217 15:30:29.748827 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/745344b6-0a03-48cf-902a-a8f6687d7d79-webhook-certs\") pod \"multus-admission-controller-bb4ff5654-mmnxt\" (UID: \"745344b6-0a03-48cf-902a-a8f6687d7d79\") " pod="openshift-multus/multus-admission-controller-bb4ff5654-mmnxt" Feb 17 15:30:29.749205 master-0 kubenswrapper[26425]: I0217 15:30:29.749177 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkc4j\" (UniqueName: \"kubernetes.io/projected/745344b6-0a03-48cf-902a-a8f6687d7d79-kube-api-access-hkc4j\") pod \"multus-admission-controller-bb4ff5654-mmnxt\" (UID: \"745344b6-0a03-48cf-902a-a8f6687d7d79\") " pod="openshift-multus/multus-admission-controller-bb4ff5654-mmnxt" Feb 17 15:30:29.850399 master-0 kubenswrapper[26425]: I0217 15:30:29.850275 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/745344b6-0a03-48cf-902a-a8f6687d7d79-webhook-certs\") pod \"multus-admission-controller-bb4ff5654-mmnxt\" (UID: \"745344b6-0a03-48cf-902a-a8f6687d7d79\") " pod="openshift-multus/multus-admission-controller-bb4ff5654-mmnxt" Feb 17 15:30:29.850399 master-0 kubenswrapper[26425]: I0217 15:30:29.850347 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkc4j\" (UniqueName: \"kubernetes.io/projected/745344b6-0a03-48cf-902a-a8f6687d7d79-kube-api-access-hkc4j\") pod \"multus-admission-controller-bb4ff5654-mmnxt\" (UID: \"745344b6-0a03-48cf-902a-a8f6687d7d79\") " pod="openshift-multus/multus-admission-controller-bb4ff5654-mmnxt" Feb 17 15:30:29.854029 master-0 kubenswrapper[26425]: I0217 15:30:29.854003 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/745344b6-0a03-48cf-902a-a8f6687d7d79-webhook-certs\") pod \"multus-admission-controller-bb4ff5654-mmnxt\" (UID: \"745344b6-0a03-48cf-902a-a8f6687d7d79\") " pod="openshift-multus/multus-admission-controller-bb4ff5654-mmnxt" Feb 17 15:30:29.949184 master-0 kubenswrapper[26425]: I0217 15:30:29.949147 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkc4j\" (UniqueName: \"kubernetes.io/projected/745344b6-0a03-48cf-902a-a8f6687d7d79-kube-api-access-hkc4j\") pod \"multus-admission-controller-bb4ff5654-mmnxt\" (UID: \"745344b6-0a03-48cf-902a-a8f6687d7d79\") " pod="openshift-multus/multus-admission-controller-bb4ff5654-mmnxt" Feb 17 15:30:29.966217 master-0 kubenswrapper[26425]: I0217 15:30:29.966179 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-bb4ff5654-mmnxt" Feb 17 15:30:30.439844 master-0 kubenswrapper[26425]: I0217 15:30:30.439770 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-bb4ff5654-mmnxt"] Feb 17 15:30:30.443824 master-0 kubenswrapper[26425]: W0217 15:30:30.443757 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod745344b6_0a03_48cf_902a_a8f6687d7d79.slice/crio-91f910dd2c43ef2f6c6f40d827243f3c7f3adf75cfc76b9cf3b15dda1cf5acee WatchSource:0}: Error finding container 91f910dd2c43ef2f6c6f40d827243f3c7f3adf75cfc76b9cf3b15dda1cf5acee: Status 404 returned error can't find the container with id 91f910dd2c43ef2f6c6f40d827243f3c7f3adf75cfc76b9cf3b15dda1cf5acee Feb 17 15:30:30.987615 master-0 kubenswrapper[26425]: I0217 15:30:30.987547 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-bb4ff5654-mmnxt" event={"ID":"745344b6-0a03-48cf-902a-a8f6687d7d79","Type":"ContainerStarted","Data":"518b190b5cee225d9f9ed8de84a58401b06157069cb529da66e8f62ba5794ae2"} Feb 17 15:30:30.987615 master-0 kubenswrapper[26425]: I0217 15:30:30.987616 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-bb4ff5654-mmnxt" event={"ID":"745344b6-0a03-48cf-902a-a8f6687d7d79","Type":"ContainerStarted","Data":"91f910dd2c43ef2f6c6f40d827243f3c7f3adf75cfc76b9cf3b15dda1cf5acee"} Feb 17 15:30:32.000769 master-0 kubenswrapper[26425]: I0217 15:30:32.000690 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-bb4ff5654-mmnxt" event={"ID":"745344b6-0a03-48cf-902a-a8f6687d7d79","Type":"ContainerStarted","Data":"da39d9afabe80e2340820863bfd412b77d381703e0b03b6a19b6172d0ab6280a"} Feb 17 15:30:32.038427 master-0 kubenswrapper[26425]: I0217 15:30:32.038258 26425 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-bb4ff5654-mmnxt" podStartSLOduration=3.03822246 podStartE2EDuration="3.03822246s" podCreationTimestamp="2026-02-17 15:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:30:32.03239875 +0000 UTC m=+893.924122598" watchObservedRunningTime="2026-02-17 15:30:32.03822246 +0000 UTC m=+893.929946308" Feb 17 15:30:32.087815 master-0 kubenswrapper[26425]: I0217 15:30:32.087418 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-rzbff"] Feb 17 15:30:32.087815 master-0 kubenswrapper[26425]: I0217 15:30:32.087744 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" podUID="75486ba2-6fde-456f-8846-2af67e58d585" containerName="multus-admission-controller" containerID="cri-o://c8d059fa01ecdc001c9f81953a0f611eee0abc7b2a9ab48cb6c12f655da8d5ed" gracePeriod=30 Feb 17 15:30:32.088736 master-0 kubenswrapper[26425]: I0217 15:30:32.087872 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" podUID="75486ba2-6fde-456f-8846-2af67e58d585" containerName="kube-rbac-proxy" containerID="cri-o://45dbd4ea79e43e686a9c5871ae5c59474bfc1abca00581679dc4b7c55fb07d49" gracePeriod=30 Feb 17 15:30:33.011139 master-0 kubenswrapper[26425]: I0217 15:30:33.011021 26425 generic.go:334] "Generic (PLEG): container finished" podID="75486ba2-6fde-456f-8846-2af67e58d585" containerID="45dbd4ea79e43e686a9c5871ae5c59474bfc1abca00581679dc4b7c55fb07d49" exitCode=0 Feb 17 15:30:33.011139 master-0 kubenswrapper[26425]: I0217 15:30:33.011094 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" event={"ID":"75486ba2-6fde-456f-8846-2af67e58d585","Type":"ContainerDied","Data":"45dbd4ea79e43e686a9c5871ae5c59474bfc1abca00581679dc4b7c55fb07d49"} Feb 17 15:30:39.564165 master-0 kubenswrapper[26425]: E0217 15:30:39.564063 26425 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 15:30:39.566176 master-0 kubenswrapper[26425]: E0217 15:30:39.566123 26425 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 15:30:39.568797 master-0 kubenswrapper[26425]: E0217 15:30:39.568746 26425 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 15:30:39.568944 master-0 kubenswrapper[26425]: E0217 15:30:39.568808 26425 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" podUID="1e3c2cc3-5eaa-4447-acca-07c0de21af73" containerName="kube-multus-additional-cni-plugins" Feb 17 15:30:46.837189 master-0 kubenswrapper[26425]: I0217 15:30:46.837143 26425 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn"] Feb 17 15:30:46.837972 master-0 kubenswrapper[26425]: I0217 15:30:46.837958 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn" Feb 17 15:30:46.840955 master-0 kubenswrapper[26425]: I0217 15:30:46.840925 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-nsg9z" Feb 17 15:30:46.841481 master-0 kubenswrapper[26425]: I0217 15:30:46.841441 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 15:30:46.841562 master-0 kubenswrapper[26425]: I0217 15:30:46.841450 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 15:30:46.859730 master-0 kubenswrapper[26425]: I0217 15:30:46.859678 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn"] Feb 17 15:30:46.949300 master-0 kubenswrapper[26425]: I0217 15:30:46.949231 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9efc1402-c86a-497b-b563-1cf2fa1a0b48-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-72mnn\" (UID: \"9efc1402-c86a-497b-b563-1cf2fa1a0b48\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn" Feb 17 15:30:46.949607 master-0 kubenswrapper[26425]: I0217 15:30:46.949561 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9efc1402-c86a-497b-b563-1cf2fa1a0b48-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-72mnn\" (UID: \"9efc1402-c86a-497b-b563-1cf2fa1a0b48\") " 
pod="openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn" Feb 17 15:30:47.051603 master-0 kubenswrapper[26425]: I0217 15:30:47.051528 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9efc1402-c86a-497b-b563-1cf2fa1a0b48-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-72mnn\" (UID: \"9efc1402-c86a-497b-b563-1cf2fa1a0b48\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn" Feb 17 15:30:47.051816 master-0 kubenswrapper[26425]: I0217 15:30:47.051668 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9efc1402-c86a-497b-b563-1cf2fa1a0b48-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-72mnn\" (UID: \"9efc1402-c86a-497b-b563-1cf2fa1a0b48\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn" Feb 17 15:30:47.052084 master-0 kubenswrapper[26425]: E0217 15:30:47.052035 26425 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Feb 17 15:30:47.052179 master-0 kubenswrapper[26425]: E0217 15:30:47.052147 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9efc1402-c86a-497b-b563-1cf2fa1a0b48-networking-console-plugin-cert podName:9efc1402-c86a-497b-b563-1cf2fa1a0b48 nodeName:}" failed. No retries permitted until 2026-02-17 15:30:47.552126043 +0000 UTC m=+909.443849871 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/9efc1402-c86a-497b-b563-1cf2fa1a0b48-networking-console-plugin-cert") pod "networking-console-plugin-bd6d6f87f-72mnn" (UID: "9efc1402-c86a-497b-b563-1cf2fa1a0b48") : secret "networking-console-plugin-cert" not found Feb 17 15:30:47.052678 master-0 kubenswrapper[26425]: I0217 15:30:47.052645 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9efc1402-c86a-497b-b563-1cf2fa1a0b48-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-72mnn\" (UID: \"9efc1402-c86a-497b-b563-1cf2fa1a0b48\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn" Feb 17 15:30:47.463926 master-0 kubenswrapper[26425]: I0217 15:30:47.463853 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"] Feb 17 15:30:47.464293 master-0 kubenswrapper[26425]: I0217 15:30:47.464203 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager" containerID="cri-o://36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce" gracePeriod=30 Feb 17 15:30:47.544204 master-0 kubenswrapper[26425]: I0217 15:30:47.544129 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"] Feb 17 15:30:47.544434 master-0 kubenswrapper[26425]: I0217 15:30:47.544389 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" containerID="cri-o://6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e" 
gracePeriod=30 Feb 17 15:30:47.562583 master-0 kubenswrapper[26425]: I0217 15:30:47.562528 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9efc1402-c86a-497b-b563-1cf2fa1a0b48-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-72mnn\" (UID: \"9efc1402-c86a-497b-b563-1cf2fa1a0b48\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn" Feb 17 15:30:47.567078 master-0 kubenswrapper[26425]: I0217 15:30:47.567035 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9efc1402-c86a-497b-b563-1cf2fa1a0b48-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-72mnn\" (UID: \"9efc1402-c86a-497b-b563-1cf2fa1a0b48\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn" Feb 17 15:30:47.771264 master-0 kubenswrapper[26425]: I0217 15:30:47.771131 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn" Feb 17 15:30:47.906486 master-0 kubenswrapper[26425]: I0217 15:30:47.906419 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" Feb 17 15:30:47.970981 master-0 kubenswrapper[26425]: I0217 15:30:47.970921 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/6.log" Feb 17 15:30:47.971149 master-0 kubenswrapper[26425]: I0217 15:30:47.971025 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" Feb 17 15:30:47.973992 master-0 kubenswrapper[26425]: I0217 15:30:47.973951 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-proxy-ca-bundles\") pod \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " Feb 17 15:30:47.974087 master-0 kubenswrapper[26425]: I0217 15:30:47.974057 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-client-ca\") pod \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " Feb 17 15:30:47.974229 master-0 kubenswrapper[26425]: I0217 15:30:47.974200 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spcf4\" (UniqueName: \"kubernetes.io/projected/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-kube-api-access-spcf4\") pod \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " Feb 17 15:30:47.974274 master-0 kubenswrapper[26425]: I0217 15:30:47.974247 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-serving-cert\") pod \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " Feb 17 15:30:47.974362 master-0 kubenswrapper[26425]: I0217 15:30:47.974338 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-config\") pod \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\" (UID: \"e6d0ea7a-6784-4c13-ad65-6c947dbcf136\") " Feb 17 15:30:47.974721 master-0 kubenswrapper[26425]: I0217 
15:30:47.974673 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-client-ca" (OuterVolumeSpecName: "client-ca") pod "e6d0ea7a-6784-4c13-ad65-6c947dbcf136" (UID: "e6d0ea7a-6784-4c13-ad65-6c947dbcf136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:30:47.974887 master-0 kubenswrapper[26425]: I0217 15:30:47.974832 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e6d0ea7a-6784-4c13-ad65-6c947dbcf136" (UID: "e6d0ea7a-6784-4c13-ad65-6c947dbcf136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:30:47.975610 master-0 kubenswrapper[26425]: I0217 15:30:47.975569 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-config" (OuterVolumeSpecName: "config") pod "e6d0ea7a-6784-4c13-ad65-6c947dbcf136" (UID: "e6d0ea7a-6784-4c13-ad65-6c947dbcf136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:30:47.975872 master-0 kubenswrapper[26425]: I0217 15:30:47.975824 26425 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 17 15:30:47.975924 master-0 kubenswrapper[26425]: I0217 15:30:47.975887 26425 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 17 15:30:47.977439 master-0 kubenswrapper[26425]: I0217 15:30:47.977386 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-kube-api-access-spcf4" (OuterVolumeSpecName: "kube-api-access-spcf4") pod "e6d0ea7a-6784-4c13-ad65-6c947dbcf136" (UID: "e6d0ea7a-6784-4c13-ad65-6c947dbcf136"). InnerVolumeSpecName "kube-api-access-spcf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:30:47.998035 master-0 kubenswrapper[26425]: I0217 15:30:47.996581 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e6d0ea7a-6784-4c13-ad65-6c947dbcf136" (UID: "e6d0ea7a-6784-4c13-ad65-6c947dbcf136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:30:48.076877 master-0 kubenswrapper[26425]: I0217 15:30:48.076759 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrg27\" (UniqueName: \"kubernetes.io/projected/3db03cef-d297-4bf7-8e52-dd0b18882d07-kube-api-access-xrg27\") pod \"3db03cef-d297-4bf7-8e52-dd0b18882d07\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") "
Feb 17 15:30:48.077050 master-0 kubenswrapper[26425]: I0217 15:30:48.076937 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-client-ca\") pod \"3db03cef-d297-4bf7-8e52-dd0b18882d07\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") "
Feb 17 15:30:48.077050 master-0 kubenswrapper[26425]: I0217 15:30:48.076994 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3db03cef-d297-4bf7-8e52-dd0b18882d07-serving-cert\") pod \"3db03cef-d297-4bf7-8e52-dd0b18882d07\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") "
Feb 17 15:30:48.077050 master-0 kubenswrapper[26425]: I0217 15:30:48.077035 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-config\") pod \"3db03cef-d297-4bf7-8e52-dd0b18882d07\" (UID: \"3db03cef-d297-4bf7-8e52-dd0b18882d07\") "
Feb 17 15:30:48.077659 master-0 kubenswrapper[26425]: I0217 15:30:48.077612 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spcf4\" (UniqueName: \"kubernetes.io/projected/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-kube-api-access-spcf4\") on node \"master-0\" DevicePath \"\""
Feb 17 15:30:48.077712 master-0 kubenswrapper[26425]: I0217 15:30:48.077662 26425 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 17 15:30:48.077712 master-0 kubenswrapper[26425]: I0217 15:30:48.077677 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d0ea7a-6784-4c13-ad65-6c947dbcf136-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:30:48.077777 master-0 kubenswrapper[26425]: I0217 15:30:48.077754 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-config" (OuterVolumeSpecName: "config") pod "3db03cef-d297-4bf7-8e52-dd0b18882d07" (UID: "3db03cef-d297-4bf7-8e52-dd0b18882d07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:30:48.077835 master-0 kubenswrapper[26425]: I0217 15:30:48.077791 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-client-ca" (OuterVolumeSpecName: "client-ca") pod "3db03cef-d297-4bf7-8e52-dd0b18882d07" (UID: "3db03cef-d297-4bf7-8e52-dd0b18882d07"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:30:48.080347 master-0 kubenswrapper[26425]: I0217 15:30:48.080302 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db03cef-d297-4bf7-8e52-dd0b18882d07-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3db03cef-d297-4bf7-8e52-dd0b18882d07" (UID: "3db03cef-d297-4bf7-8e52-dd0b18882d07"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:30:48.080874 master-0 kubenswrapper[26425]: I0217 15:30:48.080812 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db03cef-d297-4bf7-8e52-dd0b18882d07-kube-api-access-xrg27" (OuterVolumeSpecName: "kube-api-access-xrg27") pod "3db03cef-d297-4bf7-8e52-dd0b18882d07" (UID: "3db03cef-d297-4bf7-8e52-dd0b18882d07"). InnerVolumeSpecName "kube-api-access-xrg27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:30:48.150392 master-0 kubenswrapper[26425]: I0217 15:30:48.150327 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6978b88779-vp5tv_3db03cef-d297-4bf7-8e52-dd0b18882d07/route-controller-manager/6.log"
Feb 17 15:30:48.150392 master-0 kubenswrapper[26425]: I0217 15:30:48.150376 26425 generic.go:334] "Generic (PLEG): container finished" podID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerID="6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e" exitCode=0
Feb 17 15:30:48.150834 master-0 kubenswrapper[26425]: I0217 15:30:48.150467 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerDied","Data":"6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e"}
Feb 17 15:30:48.150834 master-0 kubenswrapper[26425]: I0217 15:30:48.150494 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"
Feb 17 15:30:48.150834 master-0 kubenswrapper[26425]: I0217 15:30:48.150527 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv" event={"ID":"3db03cef-d297-4bf7-8e52-dd0b18882d07","Type":"ContainerDied","Data":"0dd6efeec5aa4e3106337fbe40d1f21673b7458663cc20e53895ac682e535656"}
Feb 17 15:30:48.150834 master-0 kubenswrapper[26425]: I0217 15:30:48.150550 26425 scope.go:117] "RemoveContainer" containerID="6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e"
Feb 17 15:30:48.154743 master-0 kubenswrapper[26425]: I0217 15:30:48.154682 26425 generic.go:334] "Generic (PLEG): container finished" podID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerID="36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce" exitCode=0
Feb 17 15:30:48.154874 master-0 kubenswrapper[26425]: I0217 15:30:48.154781 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"
Feb 17 15:30:48.154874 master-0 kubenswrapper[26425]: I0217 15:30:48.154780 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" event={"ID":"e6d0ea7a-6784-4c13-ad65-6c947dbcf136","Type":"ContainerDied","Data":"36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce"}
Feb 17 15:30:48.155017 master-0 kubenswrapper[26425]: I0217 15:30:48.154903 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2" event={"ID":"e6d0ea7a-6784-4c13-ad65-6c947dbcf136","Type":"ContainerDied","Data":"16817c879758d5dca93902f6417f76df9adc387ff018e7fa4b42bb730dfe7417"}
Feb 17 15:30:48.206647 master-0 kubenswrapper[26425]: I0217 15:30:48.195612 26425 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 17 15:30:48.206647 master-0 kubenswrapper[26425]: I0217 15:30:48.195656 26425 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3db03cef-d297-4bf7-8e52-dd0b18882d07-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 17 15:30:48.206647 master-0 kubenswrapper[26425]: I0217 15:30:48.195669 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db03cef-d297-4bf7-8e52-dd0b18882d07-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:30:48.206647 master-0 kubenswrapper[26425]: I0217 15:30:48.195685 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrg27\" (UniqueName: \"kubernetes.io/projected/3db03cef-d297-4bf7-8e52-dd0b18882d07-kube-api-access-xrg27\") on node \"master-0\" DevicePath \"\""
Feb 17 15:30:48.209225 master-0 kubenswrapper[26425]: I0217 15:30:48.209171 26425 scope.go:117] "RemoveContainer" containerID="5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5"
Feb 17 15:30:48.218289 master-0 kubenswrapper[26425]: I0217 15:30:48.218059 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"]
Feb 17 15:30:48.234272 master-0 kubenswrapper[26425]: I0217 15:30:48.233610 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv"]
Feb 17 15:30:48.242152 master-0 kubenswrapper[26425]: I0217 15:30:48.242088 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"]
Feb 17 15:30:48.249568 master-0 kubenswrapper[26425]: I0217 15:30:48.249510 26425 scope.go:117] "RemoveContainer" containerID="6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e"
Feb 17 15:30:48.250506 master-0 kubenswrapper[26425]: E0217 15:30:48.250393 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e\": container with ID starting with 6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e not found: ID does not exist" containerID="6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e"
Feb 17 15:30:48.250574 master-0 kubenswrapper[26425]: I0217 15:30:48.250539 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e"} err="failed to get container status \"6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e\": rpc error: code = NotFound desc = could not find container \"6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e\": container with ID starting with 6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e not found: ID does not exist"
Feb 17 15:30:48.250632 master-0 kubenswrapper[26425]: I0217 15:30:48.250614 26425 scope.go:117] "RemoveContainer" containerID="5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5"
Feb 17 15:30:48.251192 master-0 kubenswrapper[26425]: E0217 15:30:48.251016 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5\": container with ID starting with 5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5 not found: ID does not exist" containerID="5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5"
Feb 17 15:30:48.251192 master-0 kubenswrapper[26425]: I0217 15:30:48.251058 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5"} err="failed to get container status \"5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5\": rpc error: code = NotFound desc = could not find container \"5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5\": container with ID starting with 5b7096a75c410058be5ec1668dc0980747d1943959904a8a5ec23739bf0d73c5 not found: ID does not exist"
Feb 17 15:30:48.251192 master-0 kubenswrapper[26425]: I0217 15:30:48.251123 26425 scope.go:117] "RemoveContainer" containerID="36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce"
Feb 17 15:30:48.252824 master-0 kubenswrapper[26425]: I0217 15:30:48.252763 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2"]
Feb 17 15:30:48.273662 master-0 kubenswrapper[26425]: I0217 15:30:48.273591 26425 scope.go:117] "RemoveContainer" containerID="fbf19d6eb89d3cc981a668b940fbc4bb8dd5e78643b56d6ce5b9a6d44a5d26d8"
Feb 17 15:30:48.280498 master-0 kubenswrapper[26425]: I0217 15:30:48.280417 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn"]
Feb 17 15:30:48.299584 master-0 kubenswrapper[26425]: I0217 15:30:48.298602 26425 scope.go:117] "RemoveContainer" containerID="36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce"
Feb 17 15:30:48.301218 master-0 kubenswrapper[26425]: E0217 15:30:48.300614 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce\": container with ID starting with 36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce not found: ID does not exist" containerID="36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce"
Feb 17 15:30:48.301218 master-0 kubenswrapper[26425]: I0217 15:30:48.300732 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce"} err="failed to get container status \"36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce\": rpc error: code = NotFound desc = could not find container \"36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce\": container with ID starting with 36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce not found: ID does not exist"
Feb 17 15:30:48.301218 master-0 kubenswrapper[26425]: I0217 15:30:48.300825 26425 scope.go:117] "RemoveContainer" containerID="fbf19d6eb89d3cc981a668b940fbc4bb8dd5e78643b56d6ce5b9a6d44a5d26d8"
Feb 17 15:30:48.301774 master-0 kubenswrapper[26425]: E0217 15:30:48.301706 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbf19d6eb89d3cc981a668b940fbc4bb8dd5e78643b56d6ce5b9a6d44a5d26d8\": container with ID starting with fbf19d6eb89d3cc981a668b940fbc4bb8dd5e78643b56d6ce5b9a6d44a5d26d8 not found: ID does not exist" containerID="fbf19d6eb89d3cc981a668b940fbc4bb8dd5e78643b56d6ce5b9a6d44a5d26d8"
Feb 17 15:30:48.301853 master-0 kubenswrapper[26425]: I0217 15:30:48.301778 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbf19d6eb89d3cc981a668b940fbc4bb8dd5e78643b56d6ce5b9a6d44a5d26d8"} err="failed to get container status \"fbf19d6eb89d3cc981a668b940fbc4bb8dd5e78643b56d6ce5b9a6d44a5d26d8\": rpc error: code = NotFound desc = could not find container \"fbf19d6eb89d3cc981a668b940fbc4bb8dd5e78643b56d6ce5b9a6d44a5d26d8\": container with ID starting with fbf19d6eb89d3cc981a668b940fbc4bb8dd5e78643b56d6ce5b9a6d44a5d26d8 not found: ID does not exist"
Feb 17 15:30:48.414263 master-0 kubenswrapper[26425]: I0217 15:30:48.414216 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" path="/var/lib/kubelet/pods/3db03cef-d297-4bf7-8e52-dd0b18882d07/volumes"
Feb 17 15:30:48.415335 master-0 kubenswrapper[26425]: I0217 15:30:48.415314 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" path="/var/lib/kubelet/pods/e6d0ea7a-6784-4c13-ad65-6c947dbcf136/volumes"
Feb 17 15:30:49.168793 master-0 kubenswrapper[26425]: I0217 15:30:49.168562 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn" event={"ID":"9efc1402-c86a-497b-b563-1cf2fa1a0b48","Type":"ContainerStarted","Data":"df9deefb4366bf57ca3e938a7aeea11044354f32e8fb565cba52bb709bf6c7a9"}
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.388083 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f6b44f49-s25nf"]
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: E0217 15:30:49.388889 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.388915 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: E0217 15:30:49.388949 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.388963 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: E0217 15:30:49.388977 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.388988 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: E0217 15:30:49.389032 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.389043 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: E0217 15:30:49.389065 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.389075 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.389315 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.389335 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.389354 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.389382 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.389412 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.389427 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.389442 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.389486 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.390062 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"]
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: E0217 15:30:49.390311 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.390326 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d0ea7a-6784-4c13-ad65-6c947dbcf136" containerName="controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: E0217 15:30:49.390352 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.390363 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: E0217 15:30:49.390381 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.390391 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.390732 master-0 kubenswrapper[26425]: I0217 15:30:49.390761 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager"
Feb 17 15:30:49.393928 master-0 kubenswrapper[26425]: I0217 15:30:49.390854 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf"
Feb 17 15:30:49.393928 master-0 kubenswrapper[26425]: I0217 15:30:49.391245 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"
Feb 17 15:30:49.413523 master-0 kubenswrapper[26425]: I0217 15:30:49.413420 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c1f298f-975b-4eb9-831e-9cb058779f76-proxy-ca-bundles\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf"
Feb 17 15:30:49.413808 master-0 kubenswrapper[26425]: I0217 15:30:49.413560 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dd4d5ac-06de-467d-b477-ce888546869d-client-ca\") pod \"route-controller-manager-68f4c9ccfc-vg949\" (UID: \"7dd4d5ac-06de-467d-b477-ce888546869d\") " pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"
Feb 17 15:30:49.413808 master-0 kubenswrapper[26425]: I0217 15:30:49.413628 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c1f298f-975b-4eb9-831e-9cb058779f76-serving-cert\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf"
Feb 17 15:30:49.414008 master-0 kubenswrapper[26425]: I0217 15:30:49.413872 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v48r\" (UniqueName: \"kubernetes.io/projected/7dd4d5ac-06de-467d-b477-ce888546869d-kube-api-access-7v48r\") pod \"route-controller-manager-68f4c9ccfc-vg949\" (UID: \"7dd4d5ac-06de-467d-b477-ce888546869d\") " pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"
Feb 17 15:30:49.414008 master-0 kubenswrapper[26425]: I0217 15:30:49.413934 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c1f298f-975b-4eb9-831e-9cb058779f76-config\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf"
Feb 17 15:30:49.414325 master-0 kubenswrapper[26425]: I0217 15:30:49.414267 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c1f298f-975b-4eb9-831e-9cb058779f76-client-ca\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf"
Feb 17 15:30:49.414509 master-0 kubenswrapper[26425]: I0217 15:30:49.414358 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dd4d5ac-06de-467d-b477-ce888546869d-serving-cert\") pod \"route-controller-manager-68f4c9ccfc-vg949\" (UID: \"7dd4d5ac-06de-467d-b477-ce888546869d\") " pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"
Feb 17 15:30:49.414509 master-0 kubenswrapper[26425]: I0217 15:30:49.414421 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zqxl\" (UniqueName: \"kubernetes.io/projected/5c1f298f-975b-4eb9-831e-9cb058779f76-kube-api-access-5zqxl\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf"
Feb 17 15:30:49.414719 master-0 kubenswrapper[26425]: I0217 15:30:49.414576 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd4d5ac-06de-467d-b477-ce888546869d-config\") pod \"route-controller-manager-68f4c9ccfc-vg949\" (UID: \"7dd4d5ac-06de-467d-b477-ce888546869d\") " pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"
Feb 17 15:30:49.430618 master-0 kubenswrapper[26425]: I0217 15:30:49.430409 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:30:49.430857 master-0 kubenswrapper[26425]: I0217 15:30:49.430814 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-8lvkh"
Feb 17 15:30:49.431180 master-0 kubenswrapper[26425]: I0217 15:30:49.431060 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:30:49.431350 master-0 kubenswrapper[26425]: I0217 15:30:49.431290 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 15:30:49.431350 master-0 kubenswrapper[26425]: I0217 15:30:49.431320 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 15:30:49.432509 master-0 kubenswrapper[26425]: I0217 15:30:49.432074 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-q9xjb"
Feb 17 15:30:49.432509 master-0 kubenswrapper[26425]: I0217 15:30:49.432109 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 17 15:30:49.432509 master-0 kubenswrapper[26425]: I0217 15:30:49.432206 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 15:30:49.432509 master-0 kubenswrapper[26425]: I0217 15:30:49.432071 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 15:30:49.432509 master-0 kubenswrapper[26425]: I0217 15:30:49.432267 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 17 15:30:49.432509 master-0 kubenswrapper[26425]: I0217 15:30:49.432358 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 15:30:49.434085 master-0 kubenswrapper[26425]: I0217 15:30:49.433894 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 15:30:49.445479 master-0 kubenswrapper[26425]: I0217 15:30:49.444690 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 15:30:49.451148 master-0 kubenswrapper[26425]: I0217 15:30:49.451074 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"]
Feb 17 15:30:49.458338 master-0 kubenswrapper[26425]: I0217 15:30:49.458275 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f6b44f49-s25nf"]
Feb 17 15:30:49.515775 master-0 kubenswrapper[26425]: I0217 15:30:49.515733 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v48r\" (UniqueName: \"kubernetes.io/projected/7dd4d5ac-06de-467d-b477-ce888546869d-kube-api-access-7v48r\") pod \"route-controller-manager-68f4c9ccfc-vg949\" (UID: \"7dd4d5ac-06de-467d-b477-ce888546869d\") " pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"
Feb 17 15:30:49.516136 master-0 kubenswrapper[26425]: I0217 15:30:49.516111 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c1f298f-975b-4eb9-831e-9cb058779f76-config\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf"
Feb 17 15:30:49.516323 master-0 kubenswrapper[26425]: I0217 15:30:49.516298 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c1f298f-975b-4eb9-831e-9cb058779f76-client-ca\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf"
Feb 17 15:30:49.516515 master-0 kubenswrapper[26425]: I0217 15:30:49.516489 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dd4d5ac-06de-467d-b477-ce888546869d-serving-cert\") pod \"route-controller-manager-68f4c9ccfc-vg949\" (UID: \"7dd4d5ac-06de-467d-b477-ce888546869d\") " pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"
Feb 17 15:30:49.516759 master-0 kubenswrapper[26425]: I0217 15:30:49.516735 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zqxl\" (UniqueName: \"kubernetes.io/projected/5c1f298f-975b-4eb9-831e-9cb058779f76-kube-api-access-5zqxl\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf"
Feb 17 15:30:49.516984 master-0 kubenswrapper[26425]: I0217 15:30:49.516960 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd4d5ac-06de-467d-b477-ce888546869d-config\") pod \"route-controller-manager-68f4c9ccfc-vg949\" (UID: \"7dd4d5ac-06de-467d-b477-ce888546869d\") " pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"
Feb 17 15:30:49.517706 master-0 kubenswrapper[26425]: I0217 15:30:49.517665 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c1f298f-975b-4eb9-831e-9cb058779f76-client-ca\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf"
Feb 17 15:30:49.517936 master-0 kubenswrapper[26425]: I0217 15:30:49.517881 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c1f298f-975b-4eb9-831e-9cb058779f76-proxy-ca-bundles\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf"
Feb 17 15:30:49.518180 master-0 kubenswrapper[26425]: I0217 15:30:49.517990 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c1f298f-975b-4eb9-831e-9cb058779f76-config\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf"
Feb 17 15:30:49.518180 master-0 kubenswrapper[26425]: I0217 15:30:49.518025 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dd4d5ac-06de-467d-b477-ce888546869d-client-ca\") pod \"route-controller-manager-68f4c9ccfc-vg949\" (UID: \"7dd4d5ac-06de-467d-b477-ce888546869d\") " pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"
Feb 17 15:30:49.518332 master-0 kubenswrapper[26425]: I0217 15:30:49.518193 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c1f298f-975b-4eb9-831e-9cb058779f76-serving-cert\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf"
Feb 17 15:30:49.518719 master-0 kubenswrapper[26425]: I0217 15:30:49.518694 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dd4d5ac-06de-467d-b477-ce888546869d-client-ca\") pod \"route-controller-manager-68f4c9ccfc-vg949\" (UID: \"7dd4d5ac-06de-467d-b477-ce888546869d\") " pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"
Feb 17 15:30:49.519156 master-0 kubenswrapper[26425]: I0217 15:30:49.519117 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd4d5ac-06de-467d-b477-ce888546869d-config\") pod \"route-controller-manager-68f4c9ccfc-vg949\" (UID: \"7dd4d5ac-06de-467d-b477-ce888546869d\") " pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"
Feb 17 15:30:49.519230 master-0 kubenswrapper[26425]: I0217 15:30:49.519207 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c1f298f-975b-4eb9-831e-9cb058779f76-proxy-ca-bundles\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf"
Feb 17 15:30:49.522123 master-0 kubenswrapper[26425]: I0217 15:30:49.522094 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dd4d5ac-06de-467d-b477-ce888546869d-serving-cert\") pod \"route-controller-manager-68f4c9ccfc-vg949\" (UID: \"7dd4d5ac-06de-467d-b477-ce888546869d\") " pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"
Feb 17 15:30:49.522824 master-0 kubenswrapper[26425]: I0217 15:30:49.522801 26425 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c1f298f-975b-4eb9-831e-9cb058779f76-serving-cert\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf" Feb 17 15:30:49.535895 master-0 kubenswrapper[26425]: I0217 15:30:49.535852 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zqxl\" (UniqueName: \"kubernetes.io/projected/5c1f298f-975b-4eb9-831e-9cb058779f76-kube-api-access-5zqxl\") pod \"controller-manager-f6b44f49-s25nf\" (UID: \"5c1f298f-975b-4eb9-831e-9cb058779f76\") " pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf" Feb 17 15:30:49.538598 master-0 kubenswrapper[26425]: I0217 15:30:49.538212 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v48r\" (UniqueName: \"kubernetes.io/projected/7dd4d5ac-06de-467d-b477-ce888546869d-kube-api-access-7v48r\") pod \"route-controller-manager-68f4c9ccfc-vg949\" (UID: \"7dd4d5ac-06de-467d-b477-ce888546869d\") " pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949" Feb 17 15:30:49.567499 master-0 kubenswrapper[26425]: E0217 15:30:49.566813 26425 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 15:30:49.569310 master-0 kubenswrapper[26425]: E0217 15:30:49.569192 26425 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48" cmd=["/bin/bash","-c","test -f 
/ready/ready"] Feb 17 15:30:49.574423 master-0 kubenswrapper[26425]: E0217 15:30:49.573770 26425 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 15:30:49.574423 master-0 kubenswrapper[26425]: E0217 15:30:49.573841 26425 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" podUID="1e3c2cc3-5eaa-4447-acca-07c0de21af73" containerName="kube-multus-additional-cni-plugins" Feb 17 15:30:49.770864 master-0 kubenswrapper[26425]: I0217 15:30:49.770664 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf" Feb 17 15:30:49.789011 master-0 kubenswrapper[26425]: I0217 15:30:49.788946 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949" Feb 17 15:30:50.183484 master-0 kubenswrapper[26425]: I0217 15:30:50.183391 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn" event={"ID":"9efc1402-c86a-497b-b563-1cf2fa1a0b48","Type":"ContainerStarted","Data":"12af1c5f7d6ec7e8e8d43ed8cfda0a7f7dff3201b86b214afe97285cbb1c2ff9"} Feb 17 15:30:50.202074 master-0 kubenswrapper[26425]: I0217 15:30:50.201996 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn" podStartSLOduration=2.6691836159999998 podStartE2EDuration="4.201981035s" podCreationTimestamp="2026-02-17 15:30:46 +0000 UTC" firstStartedPulling="2026-02-17 15:30:48.301621382 +0000 UTC m=+910.193345240" lastFinishedPulling="2026-02-17 15:30:49.834418841 +0000 UTC m=+911.726142659" observedRunningTime="2026-02-17 15:30:50.199527396 +0000 UTC m=+912.091251234" watchObservedRunningTime="2026-02-17 15:30:50.201981035 +0000 UTC m=+912.093704853" Feb 17 15:30:50.260274 master-0 kubenswrapper[26425]: I0217 15:30:50.260166 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949"] Feb 17 15:30:50.339943 master-0 kubenswrapper[26425]: I0217 15:30:50.339884 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f6b44f49-s25nf"] Feb 17 15:30:51.193597 master-0 kubenswrapper[26425]: I0217 15:30:51.193538 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949" event={"ID":"7dd4d5ac-06de-467d-b477-ce888546869d","Type":"ContainerStarted","Data":"01b31e978db973ffefd23106a54f56672f4d57ac45535b1cdcfd9605377a9621"} Feb 17 15:30:51.193597 master-0 kubenswrapper[26425]: I0217 15:30:51.193592 
26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949" event={"ID":"7dd4d5ac-06de-467d-b477-ce888546869d","Type":"ContainerStarted","Data":"8d4018063e4597f0d143bd21bec38da47d50d0ee2fe59a7c956b9a1134663dfe"} Feb 17 15:30:51.194153 master-0 kubenswrapper[26425]: I0217 15:30:51.193753 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949" Feb 17 15:30:51.195812 master-0 kubenswrapper[26425]: I0217 15:30:51.195775 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf" event={"ID":"5c1f298f-975b-4eb9-831e-9cb058779f76","Type":"ContainerStarted","Data":"89e77e0da8cfc14b67d64844a297b3ef5b7bca2a63a82b5119538883f9b92379"} Feb 17 15:30:51.195879 master-0 kubenswrapper[26425]: I0217 15:30:51.195814 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf" event={"ID":"5c1f298f-975b-4eb9-831e-9cb058779f76","Type":"ContainerStarted","Data":"39c122d91bd2bef89c17c4c98cd0ec9d9f8a3e4ef51b6d6264dc3e8e263abe79"} Feb 17 15:30:51.200083 master-0 kubenswrapper[26425]: I0217 15:30:51.200046 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949" Feb 17 15:30:51.227187 master-0 kubenswrapper[26425]: I0217 15:30:51.227106 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949" podStartSLOduration=4.227085982 podStartE2EDuration="4.227085982s" podCreationTimestamp="2026-02-17 15:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:30:51.222543633 +0000 UTC m=+913.114267521" 
watchObservedRunningTime="2026-02-17 15:30:51.227085982 +0000 UTC m=+913.118809810" Feb 17 15:30:51.273587 master-0 kubenswrapper[26425]: I0217 15:30:51.273495 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf" podStartSLOduration=4.273445386 podStartE2EDuration="4.273445386s" podCreationTimestamp="2026-02-17 15:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:30:51.262021121 +0000 UTC m=+913.153744979" watchObservedRunningTime="2026-02-17 15:30:51.273445386 +0000 UTC m=+913.165169214" Feb 17 15:30:52.204907 master-0 kubenswrapper[26425]: I0217 15:30:52.204865 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf" Feb 17 15:30:52.212043 master-0 kubenswrapper[26425]: I0217 15:30:52.211980 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f6b44f49-s25nf" Feb 17 15:30:53.053270 master-0 kubenswrapper[26425]: E0217 15:30:53.053207 26425 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6d0ea7a_6784_4c13_ad65_6c947dbcf136.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3db03cef_d297_4bf7_8e52_dd0b18882d07.slice/crio-conmon-6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3db03cef_d297_4bf7_8e52_dd0b18882d07.slice/crio-0dd6efeec5aa4e3106337fbe40d1f21673b7458663cc20e53895ac682e535656\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6d0ea7a_6784_4c13_ad65_6c947dbcf136.slice/crio-conmon-36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3db03cef_d297_4bf7_8e52_dd0b18882d07.slice/crio-6ba8200160c9903cb16021849ffd7422dc2a3d84fad4813dffacca7dc355ad4e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3db03cef_d297_4bf7_8e52_dd0b18882d07.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6d0ea7a_6784_4c13_ad65_6c947dbcf136.slice/crio-36a973dffee5f7b1da61a3ec11281aeeb6a1f9016ac9ab35f780b56e938f57ce.scope\": RecentStats: unable to find data in memory cache]" Feb 17 15:30:53.054398 master-0 kubenswrapper[26425]: E0217 15:30:53.054289 26425 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e3c2cc3_5eaa_4447_acca_07c0de21af73.slice/crio-conmon-85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e3c2cc3_5eaa_4447_acca_07c0de21af73.slice/crio-85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48.scope\": RecentStats: unable to find data in memory cache]" Feb 17 15:30:53.061849 master-0 kubenswrapper[26425]: I0217 15:30:53.061789 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-c5mq6_1e3c2cc3-5eaa-4447-acca-07c0de21af73/kube-multus-additional-cni-plugins/0.log" Feb 17 15:30:53.061849 master-0 kubenswrapper[26425]: I0217 15:30:53.061863 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:53.154987 master-0 kubenswrapper[26425]: I0217 15:30:53.154844 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-retry-1-master-0"] Feb 17 15:30:53.155271 master-0 kubenswrapper[26425]: E0217 15:30:53.155112 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" Feb 17 15:30:53.155271 master-0 kubenswrapper[26425]: I0217 15:30:53.155124 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db03cef-d297-4bf7-8e52-dd0b18882d07" containerName="route-controller-manager" Feb 17 15:30:53.155271 master-0 kubenswrapper[26425]: E0217 15:30:53.155140 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e3c2cc3-5eaa-4447-acca-07c0de21af73" containerName="kube-multus-additional-cni-plugins" Feb 17 15:30:53.155271 master-0 kubenswrapper[26425]: I0217 15:30:53.155147 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e3c2cc3-5eaa-4447-acca-07c0de21af73" containerName="kube-multus-additional-cni-plugins" Feb 17 15:30:53.155573 master-0 kubenswrapper[26425]: I0217 15:30:53.155285 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e3c2cc3-5eaa-4447-acca-07c0de21af73" containerName="kube-multus-additional-cni-plugins" Feb 17 15:30:53.155761 master-0 kubenswrapper[26425]: I0217 15:30:53.155719 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Feb 17 15:30:53.157950 master-0 kubenswrapper[26425]: I0217 15:30:53.157925 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-crrn4" Feb 17 15:30:53.158451 master-0 kubenswrapper[26425]: I0217 15:30:53.158273 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 15:30:53.167845 master-0 kubenswrapper[26425]: I0217 15:30:53.167801 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-retry-1-master-0"] Feb 17 15:30:53.178058 master-0 kubenswrapper[26425]: I0217 15:30:53.178010 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hphr8\" (UniqueName: \"kubernetes.io/projected/1e3c2cc3-5eaa-4447-acca-07c0de21af73-kube-api-access-hphr8\") pod \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " Feb 17 15:30:53.178380 master-0 kubenswrapper[26425]: I0217 15:30:53.178349 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1e3c2cc3-5eaa-4447-acca-07c0de21af73-cni-sysctl-allowlist\") pod \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " Feb 17 15:30:53.178598 master-0 kubenswrapper[26425]: I0217 15:30:53.178572 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1e3c2cc3-5eaa-4447-acca-07c0de21af73-ready\") pod \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " Feb 17 15:30:53.178898 master-0 kubenswrapper[26425]: I0217 15:30:53.178868 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e3c2cc3-5eaa-4447-acca-07c0de21af73-tuning-conf-dir\") pod \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\" (UID: \"1e3c2cc3-5eaa-4447-acca-07c0de21af73\") " Feb 17 15:30:53.179284 master-0 kubenswrapper[26425]: I0217 15:30:53.179269 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e3c2cc3-5eaa-4447-acca-07c0de21af73-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "1e3c2cc3-5eaa-4447-acca-07c0de21af73" (UID: "1e3c2cc3-5eaa-4447-acca-07c0de21af73"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:30:53.179581 master-0 kubenswrapper[26425]: I0217 15:30:53.179565 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e3c2cc3-5eaa-4447-acca-07c0de21af73-ready" (OuterVolumeSpecName: "ready") pod "1e3c2cc3-5eaa-4447-acca-07c0de21af73" (UID: "1e3c2cc3-5eaa-4447-acca-07c0de21af73"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:30:53.179671 master-0 kubenswrapper[26425]: I0217 15:30:53.179626 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e3c2cc3-5eaa-4447-acca-07c0de21af73-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "1e3c2cc3-5eaa-4447-acca-07c0de21af73" (UID: "1e3c2cc3-5eaa-4447-acca-07c0de21af73"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:30:53.184917 master-0 kubenswrapper[26425]: I0217 15:30:53.184849 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3c2cc3-5eaa-4447-acca-07c0de21af73-kube-api-access-hphr8" (OuterVolumeSpecName: "kube-api-access-hphr8") pod "1e3c2cc3-5eaa-4447-acca-07c0de21af73" (UID: "1e3c2cc3-5eaa-4447-acca-07c0de21af73"). InnerVolumeSpecName "kube-api-access-hphr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:30:53.221037 master-0 kubenswrapper[26425]: I0217 15:30:53.220987 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-c5mq6_1e3c2cc3-5eaa-4447-acca-07c0de21af73/kube-multus-additional-cni-plugins/0.log" Feb 17 15:30:53.221566 master-0 kubenswrapper[26425]: I0217 15:30:53.221039 26425 generic.go:334] "Generic (PLEG): container finished" podID="1e3c2cc3-5eaa-4447-acca-07c0de21af73" containerID="85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48" exitCode=137 Feb 17 15:30:53.221740 master-0 kubenswrapper[26425]: I0217 15:30:53.221679 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" event={"ID":"1e3c2cc3-5eaa-4447-acca-07c0de21af73","Type":"ContainerDied","Data":"85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48"} Feb 17 15:30:53.221816 master-0 kubenswrapper[26425]: I0217 15:30:53.221791 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" Feb 17 15:30:53.222116 master-0 kubenswrapper[26425]: I0217 15:30:53.222010 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-c5mq6" event={"ID":"1e3c2cc3-5eaa-4447-acca-07c0de21af73","Type":"ContainerDied","Data":"65492e07e25c4dc4a7aa8228e765136a20dce443812f448d9ed6a1c5c1768cf6"} Feb 17 15:30:53.222116 master-0 kubenswrapper[26425]: I0217 15:30:53.222048 26425 scope.go:117] "RemoveContainer" containerID="85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48" Feb 17 15:30:53.253417 master-0 kubenswrapper[26425]: I0217 15:30:53.253060 26425 scope.go:117] "RemoveContainer" containerID="85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48" Feb 17 15:30:53.254101 master-0 kubenswrapper[26425]: E0217 15:30:53.254027 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48\": container with ID starting with 85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48 not found: ID does not exist" containerID="85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48" Feb 17 15:30:53.254237 master-0 kubenswrapper[26425]: I0217 15:30:53.254131 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48"} err="failed to get container status \"85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48\": rpc error: code = NotFound desc = could not find container \"85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48\": container with ID starting with 85b8e3cbeebaacdd0b7ce175ab4ed32d003481298aa58963c69b969640542e48 not found: ID does not exist" Feb 17 15:30:53.281691 master-0 kubenswrapper[26425]: I0217 15:30:53.281618 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Feb 17 15:30:53.281893 master-0 kubenswrapper[26425]: I0217 15:30:53.281757 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Feb 17 15:30:53.281893 master-0 kubenswrapper[26425]: I0217 15:30:53.281806 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Feb 17 15:30:53.281893 master-0 kubenswrapper[26425]: I0217 15:30:53.281869 26425 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1e3c2cc3-5eaa-4447-acca-07c0de21af73-ready\") on node \"master-0\" DevicePath \"\"" Feb 17 15:30:53.281893 master-0 kubenswrapper[26425]: I0217 15:30:53.281888 26425 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e3c2cc3-5eaa-4447-acca-07c0de21af73-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:30:53.282019 master-0 kubenswrapper[26425]: I0217 15:30:53.281905 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hphr8\" (UniqueName: 
\"kubernetes.io/projected/1e3c2cc3-5eaa-4447-acca-07c0de21af73-kube-api-access-hphr8\") on node \"master-0\" DevicePath \"\"" Feb 17 15:30:53.282019 master-0 kubenswrapper[26425]: I0217 15:30:53.281919 26425 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1e3c2cc3-5eaa-4447-acca-07c0de21af73-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Feb 17 15:30:53.288948 master-0 kubenswrapper[26425]: I0217 15:30:53.288903 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-c5mq6"] Feb 17 15:30:53.294019 master-0 kubenswrapper[26425]: I0217 15:30:53.293987 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-c5mq6"] Feb 17 15:30:53.384653 master-0 kubenswrapper[26425]: I0217 15:30:53.384180 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Feb 17 15:30:53.384653 master-0 kubenswrapper[26425]: I0217 15:30:53.384276 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Feb 17 15:30:53.384653 master-0 kubenswrapper[26425]: I0217 15:30:53.384493 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\") " 
pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Feb 17 15:30:53.385650 master-0 kubenswrapper[26425]: I0217 15:30:53.385626 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Feb 17 15:30:53.386258 master-0 kubenswrapper[26425]: I0217 15:30:53.386206 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Feb 17 15:30:53.403209 master-0 kubenswrapper[26425]: I0217 15:30:53.403141 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Feb 17 15:30:53.512330 master-0 kubenswrapper[26425]: I0217 15:30:53.512195 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Feb 17 15:30:53.961608 master-0 kubenswrapper[26425]: I0217 15:30:53.960320 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-retry-1-master-0"] Feb 17 15:30:54.231697 master-0 kubenswrapper[26425]: I0217 15:30:54.231633 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" event={"ID":"6fcf23dc-fcf0-47a9-8913-13ad72185f5e","Type":"ContainerStarted","Data":"6e203f95d6a479e5cab09d6037a7ccb34ec5bf12bf5974d94825a27a79a69367"} Feb 17 15:30:54.403951 master-0 kubenswrapper[26425]: I0217 15:30:54.403877 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e3c2cc3-5eaa-4447-acca-07c0de21af73" path="/var/lib/kubelet/pods/1e3c2cc3-5eaa-4447-acca-07c0de21af73/volumes" Feb 17 15:30:55.242695 master-0 kubenswrapper[26425]: I0217 15:30:55.242631 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" event={"ID":"6fcf23dc-fcf0-47a9-8913-13ad72185f5e","Type":"ContainerStarted","Data":"d7945383b92e3dee004f018926f5d6539c9dca46af3ccb5c1fa7f5279fe1f9e2"} Feb 17 15:30:55.267084 master-0 kubenswrapper[26425]: I0217 15:30:55.266975 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" podStartSLOduration=2.266947523 podStartE2EDuration="2.266947523s" podCreationTimestamp="2026-02-17 15:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:30:55.261972124 +0000 UTC m=+917.153695942" watchObservedRunningTime="2026-02-17 15:30:55.266947523 +0000 UTC m=+917.158671381" Feb 17 15:30:56.440342 master-0 kubenswrapper[26425]: I0217 15:30:56.440255 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:30:56.442509 master-0 kubenswrapper[26425]: I0217 15:30:56.441614 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/505fcdf1-f364-45e5-8583-edf94579d9b2-trusted-ca\") pod \"console-operator-7777d5cc66-w62mx\" (UID: \"505fcdf1-f364-45e5-8583-edf94579d9b2\") " pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:30:56.561287 master-0 kubenswrapper[26425]: I0217 15:30:56.561238 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-8l4dg"
Feb 17 15:30:56.570375 master-0 kubenswrapper[26425]: I0217 15:30:56.570328 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:30:57.079500 master-0 kubenswrapper[26425]: I0217 15:30:57.079412 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-7777d5cc66-w62mx"]
Feb 17 15:30:57.083132 master-0 kubenswrapper[26425]: W0217 15:30:57.083077 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod505fcdf1_f364_45e5_8583_edf94579d9b2.slice/crio-79043bfc5d955ab102954f788f7a4ba4412e09341ebe76cb1b804337f2c809f8 WatchSource:0}: Error finding container 79043bfc5d955ab102954f788f7a4ba4412e09341ebe76cb1b804337f2c809f8: Status 404 returned error can't find the container with id 79043bfc5d955ab102954f788f7a4ba4412e09341ebe76cb1b804337f2c809f8
Feb 17 15:30:57.260879 master-0 kubenswrapper[26425]: I0217 15:30:57.260802 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-w62mx" event={"ID":"505fcdf1-f364-45e5-8583-edf94579d9b2","Type":"ContainerStarted","Data":"79043bfc5d955ab102954f788f7a4ba4412e09341ebe76cb1b804337f2c809f8"}
Feb 17 15:31:00.294413 master-0 kubenswrapper[26425]: I0217 15:31:00.294354 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-w62mx" event={"ID":"505fcdf1-f364-45e5-8583-edf94579d9b2","Type":"ContainerStarted","Data":"70eb416715014627b34e8b64fdf58c2570acc8fd068d86cb5fc94c87f8df0b46"}
Feb 17 15:31:00.295546 master-0 kubenswrapper[26425]: I0217 15:31:00.294784 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:31:00.315258 master-0 kubenswrapper[26425]: I0217 15:31:00.315200 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-7777d5cc66-w62mx"
Feb 17 15:31:00.329933 master-0 kubenswrapper[26425]: I0217 15:31:00.329775 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-7777d5cc66-w62mx" podStartSLOduration=66.047253252 podStartE2EDuration="1m8.329742829s" podCreationTimestamp="2026-02-17 15:29:52 +0000 UTC" firstStartedPulling="2026-02-17 15:30:57.087311612 +0000 UTC m=+918.979035440" lastFinishedPulling="2026-02-17 15:30:59.369801199 +0000 UTC m=+921.261525017" observedRunningTime="2026-02-17 15:31:00.317966396 +0000 UTC m=+922.209690254" watchObservedRunningTime="2026-02-17 15:31:00.329742829 +0000 UTC m=+922.221466687"
Feb 17 15:31:00.474972 master-0 kubenswrapper[26425]: I0217 15:31:00.474901 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-dcd7b7d95-vtnfs"]
Feb 17 15:31:00.476065 master-0 kubenswrapper[26425]: I0217 15:31:00.476027 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-vtnfs"
Feb 17 15:31:00.477676 master-0 kubenswrapper[26425]: I0217 15:31:00.477621 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-bt8x4"
Feb 17 15:31:00.478065 master-0 kubenswrapper[26425]: I0217 15:31:00.477935 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 17 15:31:00.478141 master-0 kubenswrapper[26425]: I0217 15:31:00.478069 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 17 15:31:00.485738 master-0 kubenswrapper[26425]: I0217 15:31:00.485624 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-dcd7b7d95-vtnfs"]
Feb 17 15:31:00.512524 master-0 kubenswrapper[26425]: I0217 15:31:00.512477 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swkjz\" (UniqueName: \"kubernetes.io/projected/ce863132-1dfd-40e4-b8df-0f699ac5f4cc-kube-api-access-swkjz\") pod \"downloads-dcd7b7d95-vtnfs\" (UID: \"ce863132-1dfd-40e4-b8df-0f699ac5f4cc\") " pod="openshift-console/downloads-dcd7b7d95-vtnfs"
Feb 17 15:31:00.613897 master-0 kubenswrapper[26425]: I0217 15:31:00.613751 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swkjz\" (UniqueName: \"kubernetes.io/projected/ce863132-1dfd-40e4-b8df-0f699ac5f4cc-kube-api-access-swkjz\") pod \"downloads-dcd7b7d95-vtnfs\" (UID: \"ce863132-1dfd-40e4-b8df-0f699ac5f4cc\") " pod="openshift-console/downloads-dcd7b7d95-vtnfs"
Feb 17 15:31:00.637329 master-0 kubenswrapper[26425]: I0217 15:31:00.637268 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swkjz\" (UniqueName: \"kubernetes.io/projected/ce863132-1dfd-40e4-b8df-0f699ac5f4cc-kube-api-access-swkjz\") pod \"downloads-dcd7b7d95-vtnfs\" (UID: \"ce863132-1dfd-40e4-b8df-0f699ac5f4cc\") " pod="openshift-console/downloads-dcd7b7d95-vtnfs"
Feb 17 15:31:00.807169 master-0 kubenswrapper[26425]: I0217 15:31:00.807076 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-vtnfs"
Feb 17 15:31:01.272231 master-0 kubenswrapper[26425]: I0217 15:31:01.272092 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-dcd7b7d95-vtnfs"]
Feb 17 15:31:01.282374 master-0 kubenswrapper[26425]: W0217 15:31:01.282307 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce863132_1dfd_40e4_b8df_0f699ac5f4cc.slice/crio-8c569293e00f5921befdedcac6924c3de9b9857607f353aee8c5867bb9df02ac WatchSource:0}: Error finding container 8c569293e00f5921befdedcac6924c3de9b9857607f353aee8c5867bb9df02ac: Status 404 returned error can't find the container with id 8c569293e00f5921befdedcac6924c3de9b9857607f353aee8c5867bb9df02ac
Feb 17 15:31:01.302381 master-0 kubenswrapper[26425]: I0217 15:31:01.302297 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-dcd7b7d95-vtnfs" event={"ID":"ce863132-1dfd-40e4-b8df-0f699ac5f4cc","Type":"ContainerStarted","Data":"8c569293e00f5921befdedcac6924c3de9b9857607f353aee8c5867bb9df02ac"}
Feb 17 15:31:02.314727 master-0 kubenswrapper[26425]: I0217 15:31:02.314668 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-6d678b8d67-rzbff_75486ba2-6fde-456f-8846-2af67e58d585/multus-admission-controller/0.log"
Feb 17 15:31:02.314727 master-0 kubenswrapper[26425]: I0217 15:31:02.314730 26425 generic.go:334] "Generic (PLEG): container finished" podID="75486ba2-6fde-456f-8846-2af67e58d585" containerID="c8d059fa01ecdc001c9f81953a0f611eee0abc7b2a9ab48cb6c12f655da8d5ed" exitCode=137
Feb 17 15:31:02.316822 master-0 kubenswrapper[26425]: I0217 15:31:02.315277 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" event={"ID":"75486ba2-6fde-456f-8846-2af67e58d585","Type":"ContainerDied","Data":"c8d059fa01ecdc001c9f81953a0f611eee0abc7b2a9ab48cb6c12f655da8d5ed"}
Feb 17 15:31:03.044496 master-0 kubenswrapper[26425]: I0217 15:31:03.044404 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-6d678b8d67-rzbff_75486ba2-6fde-456f-8846-2af67e58d585/multus-admission-controller/0.log"
Feb 17 15:31:03.044496 master-0 kubenswrapper[26425]: I0217 15:31:03.044501 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff"
Feb 17 15:31:03.049513 master-0 kubenswrapper[26425]: I0217 15:31:03.049430 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjb95\" (UniqueName: \"kubernetes.io/projected/75486ba2-6fde-456f-8846-2af67e58d585-kube-api-access-wjb95\") pod \"75486ba2-6fde-456f-8846-2af67e58d585\" (UID: \"75486ba2-6fde-456f-8846-2af67e58d585\") "
Feb 17 15:31:03.049513 master-0 kubenswrapper[26425]: I0217 15:31:03.049507 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs\") pod \"75486ba2-6fde-456f-8846-2af67e58d585\" (UID: \"75486ba2-6fde-456f-8846-2af67e58d585\") "
Feb 17 15:31:03.052423 master-0 kubenswrapper[26425]: I0217 15:31:03.052366 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "75486ba2-6fde-456f-8846-2af67e58d585" (UID: "75486ba2-6fde-456f-8846-2af67e58d585"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:31:03.052564 master-0 kubenswrapper[26425]: I0217 15:31:03.052447 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75486ba2-6fde-456f-8846-2af67e58d585-kube-api-access-wjb95" (OuterVolumeSpecName: "kube-api-access-wjb95") pod "75486ba2-6fde-456f-8846-2af67e58d585" (UID: "75486ba2-6fde-456f-8846-2af67e58d585"). InnerVolumeSpecName "kube-api-access-wjb95". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:31:03.150930 master-0 kubenswrapper[26425]: I0217 15:31:03.150864 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjb95\" (UniqueName: \"kubernetes.io/projected/75486ba2-6fde-456f-8846-2af67e58d585-kube-api-access-wjb95\") on node \"master-0\" DevicePath \"\""
Feb 17 15:31:03.150930 master-0 kubenswrapper[26425]: I0217 15:31:03.150930 26425 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/75486ba2-6fde-456f-8846-2af67e58d585-webhook-certs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:31:03.325868 master-0 kubenswrapper[26425]: I0217 15:31:03.325758 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-6d678b8d67-rzbff_75486ba2-6fde-456f-8846-2af67e58d585/multus-admission-controller/0.log"
Feb 17 15:31:03.325868 master-0 kubenswrapper[26425]: I0217 15:31:03.325848 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff" event={"ID":"75486ba2-6fde-456f-8846-2af67e58d585","Type":"ContainerDied","Data":"79cd9922eddeda66f86396279d7c2d92bdfdde5d55f7ab9b86712ce128d7d382"}
Feb 17 15:31:03.326464 master-0 kubenswrapper[26425]: I0217 15:31:03.325909 26425 scope.go:117] "RemoveContainer" containerID="45dbd4ea79e43e686a9c5871ae5c59474bfc1abca00581679dc4b7c55fb07d49"
Feb 17 15:31:03.326464 master-0 kubenswrapper[26425]: I0217 15:31:03.325978 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-rzbff"
Feb 17 15:31:03.345892 master-0 kubenswrapper[26425]: I0217 15:31:03.345805 26425 scope.go:117] "RemoveContainer" containerID="c8d059fa01ecdc001c9f81953a0f611eee0abc7b2a9ab48cb6c12f655da8d5ed"
Feb 17 15:31:03.369308 master-0 kubenswrapper[26425]: I0217 15:31:03.369246 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-rzbff"]
Feb 17 15:31:03.372781 master-0 kubenswrapper[26425]: I0217 15:31:03.372729 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-rzbff"]
Feb 17 15:31:03.781138 master-0 kubenswrapper[26425]: I0217 15:31:03.781045 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 17 15:31:03.781482 master-0 kubenswrapper[26425]: E0217 15:31:03.781400 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75486ba2-6fde-456f-8846-2af67e58d585" containerName="multus-admission-controller"
Feb 17 15:31:03.781482 master-0 kubenswrapper[26425]: I0217 15:31:03.781415 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="75486ba2-6fde-456f-8846-2af67e58d585" containerName="multus-admission-controller"
Feb 17 15:31:03.781482 master-0 kubenswrapper[26425]: E0217 15:31:03.781444 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75486ba2-6fde-456f-8846-2af67e58d585" containerName="kube-rbac-proxy"
Feb 17 15:31:03.781482 master-0 kubenswrapper[26425]: I0217 15:31:03.781468 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="75486ba2-6fde-456f-8846-2af67e58d585" containerName="kube-rbac-proxy"
Feb 17 15:31:03.781773 master-0 kubenswrapper[26425]: I0217 15:31:03.781601 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="75486ba2-6fde-456f-8846-2af67e58d585" containerName="multus-admission-controller"
Feb 17 15:31:03.781773 master-0 kubenswrapper[26425]: I0217 15:31:03.781626 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="75486ba2-6fde-456f-8846-2af67e58d585" containerName="kube-rbac-proxy"
Feb 17 15:31:03.782156 master-0 kubenswrapper[26425]: I0217 15:31:03.782112 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Feb 17 15:31:03.784169 master-0 kubenswrapper[26425]: I0217 15:31:03.784102 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 17 15:31:03.784401 master-0 kubenswrapper[26425]: I0217 15:31:03.784357 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-87grw"
Feb 17 15:31:03.785269 master-0 kubenswrapper[26425]: I0217 15:31:03.785227 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 17 15:31:03.961686 master-0 kubenswrapper[26425]: I0217 15:31:03.961422 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78028ec2-59c0-459d-b148-e84842b5aea8-kube-api-access\") pod \"installer-4-master-0\" (UID: \"78028ec2-59c0-459d-b148-e84842b5aea8\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 17 15:31:03.961686 master-0 kubenswrapper[26425]: I0217 15:31:03.961492 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78028ec2-59c0-459d-b148-e84842b5aea8-var-lock\") pod \"installer-4-master-0\" (UID: \"78028ec2-59c0-459d-b148-e84842b5aea8\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 17 15:31:03.961686 master-0 kubenswrapper[26425]: I0217 15:31:03.961645 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78028ec2-59c0-459d-b148-e84842b5aea8-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"78028ec2-59c0-459d-b148-e84842b5aea8\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 17 15:31:04.062899 master-0 kubenswrapper[26425]: I0217 15:31:04.062689 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78028ec2-59c0-459d-b148-e84842b5aea8-kube-api-access\") pod \"installer-4-master-0\" (UID: \"78028ec2-59c0-459d-b148-e84842b5aea8\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 17 15:31:04.062899 master-0 kubenswrapper[26425]: I0217 15:31:04.062732 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78028ec2-59c0-459d-b148-e84842b5aea8-var-lock\") pod \"installer-4-master-0\" (UID: \"78028ec2-59c0-459d-b148-e84842b5aea8\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 17 15:31:04.062899 master-0 kubenswrapper[26425]: I0217 15:31:04.062772 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78028ec2-59c0-459d-b148-e84842b5aea8-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"78028ec2-59c0-459d-b148-e84842b5aea8\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 17 15:31:04.063343 master-0 kubenswrapper[26425]: I0217 15:31:04.062914 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78028ec2-59c0-459d-b148-e84842b5aea8-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"78028ec2-59c0-459d-b148-e84842b5aea8\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 17 15:31:04.063343 master-0 kubenswrapper[26425]: I0217 15:31:04.063196 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78028ec2-59c0-459d-b148-e84842b5aea8-var-lock\") pod \"installer-4-master-0\" (UID: \"78028ec2-59c0-459d-b148-e84842b5aea8\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 17 15:31:04.084975 master-0 kubenswrapper[26425]: I0217 15:31:04.084918 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78028ec2-59c0-459d-b148-e84842b5aea8-kube-api-access\") pod \"installer-4-master-0\" (UID: \"78028ec2-59c0-459d-b148-e84842b5aea8\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 17 15:31:04.104441 master-0 kubenswrapper[26425]: I0217 15:31:04.104384 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Feb 17 15:31:04.406302 master-0 kubenswrapper[26425]: I0217 15:31:04.406246 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75486ba2-6fde-456f-8846-2af67e58d585" path="/var/lib/kubelet/pods/75486ba2-6fde-456f-8846-2af67e58d585/volumes"
Feb 17 15:31:04.530101 master-0 kubenswrapper[26425]: I0217 15:31:04.530057 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 17 15:31:04.589679 master-0 kubenswrapper[26425]: I0217 15:31:04.589603 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-98f66b5dc-p2gxf"]
Feb 17 15:31:04.590573 master-0 kubenswrapper[26425]: I0217 15:31:04.590521 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.608481 master-0 kubenswrapper[26425]: I0217 15:31:04.607331 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 17 15:31:04.608481 master-0 kubenswrapper[26425]: I0217 15:31:04.607692 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-jf6tv"
Feb 17 15:31:04.608481 master-0 kubenswrapper[26425]: I0217 15:31:04.608009 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 17 15:31:04.608481 master-0 kubenswrapper[26425]: I0217 15:31:04.608234 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 17 15:31:04.608481 master-0 kubenswrapper[26425]: I0217 15:31:04.608376 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 17 15:31:04.614084 master-0 kubenswrapper[26425]: I0217 15:31:04.614042 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 17 15:31:04.653571 master-0 kubenswrapper[26425]: I0217 15:31:04.645529 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-98f66b5dc-p2gxf"]
Feb 17 15:31:04.689745 master-0 kubenswrapper[26425]: I0217 15:31:04.687915 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-oauth-serving-cert\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.689745 master-0 kubenswrapper[26425]: I0217 15:31:04.687971 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-service-ca\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.689745 master-0 kubenswrapper[26425]: I0217 15:31:04.688025 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2535f316-0ff0-4cca-9736-181406061b4e-console-oauth-config\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.689745 master-0 kubenswrapper[26425]: I0217 15:31:04.688053 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-console-config\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.689745 master-0 kubenswrapper[26425]: I0217 15:31:04.688118 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2535f316-0ff0-4cca-9736-181406061b4e-console-serving-cert\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.689745 master-0 kubenswrapper[26425]: I0217 15:31:04.688152 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn2mw\" (UniqueName: \"kubernetes.io/projected/2535f316-0ff0-4cca-9736-181406061b4e-kube-api-access-nn2mw\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.788687 master-0 kubenswrapper[26425]: I0217 15:31:04.788656 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2535f316-0ff0-4cca-9736-181406061b4e-console-oauth-config\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.788816 master-0 kubenswrapper[26425]: I0217 15:31:04.788699 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-console-config\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.788816 master-0 kubenswrapper[26425]: I0217 15:31:04.788766 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2535f316-0ff0-4cca-9736-181406061b4e-console-serving-cert\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.788816 master-0 kubenswrapper[26425]: I0217 15:31:04.788791 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn2mw\" (UniqueName: \"kubernetes.io/projected/2535f316-0ff0-4cca-9736-181406061b4e-kube-api-access-nn2mw\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.789518 master-0 kubenswrapper[26425]: I0217 15:31:04.789179 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-oauth-serving-cert\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.789518 master-0 kubenswrapper[26425]: I0217 15:31:04.789241 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-service-ca\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.789883 master-0 kubenswrapper[26425]: I0217 15:31:04.789712 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-console-config\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.790014 master-0 kubenswrapper[26425]: I0217 15:31:04.789974 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-oauth-serving-cert\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.790098 master-0 kubenswrapper[26425]: I0217 15:31:04.790072 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-service-ca\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.800998 master-0 kubenswrapper[26425]: I0217 15:31:04.795173 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2535f316-0ff0-4cca-9736-181406061b4e-console-serving-cert\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.800998 master-0 kubenswrapper[26425]: I0217 15:31:04.798740 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2535f316-0ff0-4cca-9736-181406061b4e-console-oauth-config\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.805792 master-0 kubenswrapper[26425]: I0217 15:31:04.805757 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn2mw\" (UniqueName: \"kubernetes.io/projected/2535f316-0ff0-4cca-9736-181406061b4e-kube-api-access-nn2mw\") pod \"console-98f66b5dc-p2gxf\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:04.965756 master-0 kubenswrapper[26425]: I0217 15:31:04.965606 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-98f66b5dc-p2gxf"
Feb 17 15:31:05.344041 master-0 kubenswrapper[26425]: I0217 15:31:05.343796 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"78028ec2-59c0-459d-b148-e84842b5aea8","Type":"ContainerStarted","Data":"0872c44be2b2c46697b1111d6bcc7da9349617a884aa419f69046c275a840215"}
Feb 17 15:31:05.344041 master-0 kubenswrapper[26425]: I0217 15:31:05.343857 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"78028ec2-59c0-459d-b148-e84842b5aea8","Type":"ContainerStarted","Data":"c077e376a933343ed0b15736f5f88ca9163435967caafc4f5f0b0a3c6e77b1d0"}
Feb 17 15:31:05.372017 master-0 kubenswrapper[26425]: I0217 15:31:05.365959 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.365943035 podStartE2EDuration="2.365943035s" podCreationTimestamp="2026-02-17 15:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:31:05.361239612 +0000 UTC m=+927.252963460" watchObservedRunningTime="2026-02-17 15:31:05.365943035 +0000 UTC m=+927.257666853"
Feb 17 15:31:05.440355 master-0 kubenswrapper[26425]: I0217 15:31:05.440305 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-98f66b5dc-p2gxf"]
Feb 17 15:31:05.443981 master-0 kubenswrapper[26425]: W0217 15:31:05.443932 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2535f316_0ff0_4cca_9736_181406061b4e.slice/crio-b16741e26cc181e81d7e4f62aa08f75954b27a237d8c892a7e4f56cf6e6a1b53 WatchSource:0}: Error finding container b16741e26cc181e81d7e4f62aa08f75954b27a237d8c892a7e4f56cf6e6a1b53: Status 404 returned error can't find the container with id b16741e26cc181e81d7e4f62aa08f75954b27a237d8c892a7e4f56cf6e6a1b53
Feb 17 15:31:06.356084 master-0 kubenswrapper[26425]: I0217 15:31:06.356033 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-98f66b5dc-p2gxf" event={"ID":"2535f316-0ff0-4cca-9736-181406061b4e","Type":"ContainerStarted","Data":"b16741e26cc181e81d7e4f62aa08f75954b27a237d8c892a7e4f56cf6e6a1b53"}
Feb 17 15:31:09.384079 master-0 kubenswrapper[26425]: I0217 15:31:09.384005 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-98f66b5dc-p2gxf" event={"ID":"2535f316-0ff0-4cca-9736-181406061b4e","Type":"ContainerStarted","Data":"bba1720f1fd557abbf59f2b01fe3bbf5a7ed240257dd77c34a9908113c6362c9"}
Feb 17 15:31:09.408118 master-0 kubenswrapper[26425]: I0217 15:31:09.408003 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-98f66b5dc-p2gxf" podStartSLOduration=1.829924465 podStartE2EDuration="5.40798222s" podCreationTimestamp="2026-02-17 15:31:04 +0000 UTC" firstStartedPulling="2026-02-17 15:31:05.446373227 +0000 UTC m=+927.338097045" lastFinishedPulling="2026-02-17 15:31:09.024430982 +0000 UTC m=+930.916154800" observedRunningTime="2026-02-17 15:31:09.406918214 +0000 UTC m=+931.298642082" watchObservedRunningTime="2026-02-17 15:31:09.40798222 +0000 UTC m=+931.299706058"
Feb 17 15:31:13.932242 master-0 kubenswrapper[26425]: I0217 15:31:13.932169 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-86d4dfb9dd-rz6cj"]
Feb 17 15:31:13.933104 master-0 kubenswrapper[26425]: I0217 15:31:13.933035 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:13.947688 master-0 kubenswrapper[26425]: I0217 15:31:13.945903 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 17 15:31:13.948609 master-0 kubenswrapper[26425]: I0217 15:31:13.948550 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86d4dfb9dd-rz6cj"]
Feb 17 15:31:14.044558 master-0 kubenswrapper[26425]: I0217 15:31:14.044490 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-trusted-ca-bundle\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.044558 master-0 kubenswrapper[26425]: I0217 15:31:14.044561 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-service-ca\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.044848 master-0 kubenswrapper[26425]: I0217 15:31:14.044609 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-serving-cert\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.044848 master-0 kubenswrapper[26425]: I0217 15:31:14.044746 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-config\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.044848 master-0 kubenswrapper[26425]: I0217 15:31:14.044799 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-oauth-config\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.044977 master-0 kubenswrapper[26425]: I0217 15:31:14.044858 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-oauth-serving-cert\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.045021 master-0 kubenswrapper[26425]: I0217 15:31:14.044995 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgj4r\" (UniqueName: \"kubernetes.io/projected/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-kube-api-access-xgj4r\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.150532 master-0 kubenswrapper[26425]: I0217 15:31:14.146241 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-service-ca\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.150532 master-0 kubenswrapper[26425]: I0217 15:31:14.147127 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-serving-cert\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.150532 master-0 kubenswrapper[26425]: I0217 15:31:14.147249 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-service-ca\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.150532 master-0 kubenswrapper[26425]: I0217 15:31:14.147348 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-config\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.150532 master-0 kubenswrapper[26425]: I0217 15:31:14.147426 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-oauth-config\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.150532 master-0 kubenswrapper[26425]: I0217 15:31:14.147550 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-oauth-serving-cert\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.150532 master-0 kubenswrapper[26425]: I0217 15:31:14.147756 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgj4r\" (UniqueName: \"kubernetes.io/projected/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-kube-api-access-xgj4r\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.150532 master-0 kubenswrapper[26425]: I0217 15:31:14.147793 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-trusted-ca-bundle\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.150532 master-0 kubenswrapper[26425]: I0217 15:31:14.148095 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-config\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj"
Feb 17 15:31:14.151983 master-0 kubenswrapper[26425]: I0217 15:31:14.151535 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName:
\"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-oauth-serving-cert\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj" Feb 17 15:31:14.151983 master-0 kubenswrapper[26425]: I0217 15:31:14.151886 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-oauth-config\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj" Feb 17 15:31:14.152086 master-0 kubenswrapper[26425]: I0217 15:31:14.152012 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-serving-cert\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj" Feb 17 15:31:14.153612 master-0 kubenswrapper[26425]: I0217 15:31:14.153540 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-trusted-ca-bundle\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj" Feb 17 15:31:14.169424 master-0 kubenswrapper[26425]: I0217 15:31:14.169309 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgj4r\" (UniqueName: \"kubernetes.io/projected/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-kube-api-access-xgj4r\") pod \"console-86d4dfb9dd-rz6cj\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " pod="openshift-console/console-86d4dfb9dd-rz6cj" Feb 17 15:31:14.285438 master-0 kubenswrapper[26425]: I0217 15:31:14.285302 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-86d4dfb9dd-rz6cj" Feb 17 15:31:14.691830 master-0 kubenswrapper[26425]: I0217 15:31:14.691779 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86d4dfb9dd-rz6cj"] Feb 17 15:31:14.966319 master-0 kubenswrapper[26425]: I0217 15:31:14.966210 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-98f66b5dc-p2gxf" Feb 17 15:31:14.966803 master-0 kubenswrapper[26425]: I0217 15:31:14.966444 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-98f66b5dc-p2gxf" Feb 17 15:31:14.967435 master-0 kubenswrapper[26425]: I0217 15:31:14.967268 26425 patch_prober.go:28] interesting pod/console-98f66b5dc-p2gxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 17 15:31:14.967435 master-0 kubenswrapper[26425]: I0217 15:31:14.967318 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 17 15:31:15.431897 master-0 kubenswrapper[26425]: I0217 15:31:15.431827 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86d4dfb9dd-rz6cj" event={"ID":"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f","Type":"ContainerStarted","Data":"6929c9664044cb06893d99bcea199e0e6eb611076c45dfcb4b0d70a905f76829"} Feb 17 15:31:15.431897 master-0 kubenswrapper[26425]: I0217 15:31:15.431895 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86d4dfb9dd-rz6cj" 
event={"ID":"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f","Type":"ContainerStarted","Data":"e6fe88c0b99e2c4c35d64c324497a6422afd688d4fa9aff82e8e04c1cbc8087b"} Feb 17 15:31:15.463631 master-0 kubenswrapper[26425]: I0217 15:31:15.463547 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-86d4dfb9dd-rz6cj" podStartSLOduration=2.463525365 podStartE2EDuration="2.463525365s" podCreationTimestamp="2026-02-17 15:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:31:15.461336862 +0000 UTC m=+937.353060700" watchObservedRunningTime="2026-02-17 15:31:15.463525365 +0000 UTC m=+937.355249183" Feb 17 15:31:24.286296 master-0 kubenswrapper[26425]: I0217 15:31:24.286240 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-86d4dfb9dd-rz6cj" Feb 17 15:31:24.286296 master-0 kubenswrapper[26425]: I0217 15:31:24.286297 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-86d4dfb9dd-rz6cj" Feb 17 15:31:24.288425 master-0 kubenswrapper[26425]: I0217 15:31:24.288383 26425 patch_prober.go:28] interesting pod/console-86d4dfb9dd-rz6cj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 17 15:31:24.288615 master-0 kubenswrapper[26425]: I0217 15:31:24.288580 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 17 15:31:24.967179 master-0 kubenswrapper[26425]: I0217 15:31:24.967134 26425 patch_prober.go:28] interesting pod/console-98f66b5dc-p2gxf 
container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 17 15:31:24.967394 master-0 kubenswrapper[26425]: I0217 15:31:24.967190 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 17 15:31:27.109476 master-0 kubenswrapper[26425]: I0217 15:31:27.109403 26425 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 17 15:31:27.110118 master-0 kubenswrapper[26425]: I0217 15:31:27.110067 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://20262a51816e5646882d8f669782a57ee58ac55b3de280aac80c9b4ad5544a09" gracePeriod=30 Feb 17 15:31:27.110233 master-0 kubenswrapper[26425]: I0217 15:31:27.110048 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" containerID="cri-o://efbfaa97348e69f0a49f3b2b302caecfbc9e14afd8c93921c11c9974de1b8c57" gracePeriod=30 Feb 17 15:31:27.110317 master-0 kubenswrapper[26425]: I0217 15:31:27.110165 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" 
containerID="cri-o://4f3d983d4ccc46ef27a60861c65b81497fdb8faa3d16615f0e7d839d7e92efb0" gracePeriod=30 Feb 17 15:31:27.110375 master-0 kubenswrapper[26425]: I0217 15:31:27.110329 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://35fe638f6458381f305a5bf70c5f72c08dfe6647c1374e528fdd2425345b92ec" gracePeriod=30 Feb 17 15:31:27.111764 master-0 kubenswrapper[26425]: I0217 15:31:27.111720 26425 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 17 15:31:27.112297 master-0 kubenswrapper[26425]: E0217 15:31:27.112260 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" Feb 17 15:31:27.112356 master-0 kubenswrapper[26425]: I0217 15:31:27.112308 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" Feb 17 15:31:27.112356 master-0 kubenswrapper[26425]: E0217 15:31:27.112343 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" Feb 17 15:31:27.112420 master-0 kubenswrapper[26425]: I0217 15:31:27.112363 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" Feb 17 15:31:27.112420 master-0 kubenswrapper[26425]: E0217 15:31:27.112402 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.112524 master-0 kubenswrapper[26425]: I0217 15:31:27.112421 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd92ef556705625a2e4f1011322252" 
containerName="cluster-policy-controller" Feb 17 15:31:27.112524 master-0 kubenswrapper[26425]: E0217 15:31:27.112443 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.112524 master-0 kubenswrapper[26425]: I0217 15:31:27.112494 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.112524 master-0 kubenswrapper[26425]: E0217 15:31:27.112518 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager-cert-syncer" Feb 17 15:31:27.112640 master-0 kubenswrapper[26425]: I0217 15:31:27.112536 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager-cert-syncer" Feb 17 15:31:27.112640 master-0 kubenswrapper[26425]: E0217 15:31:27.112589 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager-cert-syncer" Feb 17 15:31:27.112640 master-0 kubenswrapper[26425]: I0217 15:31:27.112606 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager-cert-syncer" Feb 17 15:31:27.112726 master-0 kubenswrapper[26425]: E0217 15:31:27.112637 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.112726 master-0 kubenswrapper[26425]: I0217 15:31:27.112654 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.112726 master-0 kubenswrapper[26425]: E0217 15:31:27.112682 26425 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager-cert-syncer" Feb 17 15:31:27.112726 master-0 kubenswrapper[26425]: I0217 15:31:27.112700 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager-cert-syncer" Feb 17 15:31:27.112836 master-0 kubenswrapper[26425]: E0217 15:31:27.112744 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager-recovery-controller" Feb 17 15:31:27.112836 master-0 kubenswrapper[26425]: I0217 15:31:27.112764 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager-recovery-controller" Feb 17 15:31:27.113130 master-0 kubenswrapper[26425]: I0217 15:31:27.113095 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager-cert-syncer" Feb 17 15:31:27.113183 master-0 kubenswrapper[26425]: I0217 15:31:27.113154 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.113215 master-0 kubenswrapper[26425]: I0217 15:31:27.113178 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager-cert-syncer" Feb 17 15:31:27.113247 master-0 kubenswrapper[26425]: I0217 15:31:27.113216 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.113278 master-0 kubenswrapper[26425]: I0217 15:31:27.113258 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager-cert-syncer" Feb 17 15:31:27.113313 master-0 kubenswrapper[26425]: I0217 
15:31:27.113293 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" Feb 17 15:31:27.113402 master-0 kubenswrapper[26425]: I0217 15:31:27.113322 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.113402 master-0 kubenswrapper[26425]: I0217 15:31:27.113347 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager-recovery-controller" Feb 17 15:31:27.113402 master-0 kubenswrapper[26425]: I0217 15:31:27.113370 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fd92ef556705625a2e4f1011322252" containerName="kube-controller-manager" Feb 17 15:31:27.113523 master-0 kubenswrapper[26425]: I0217 15:31:27.113401 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.113523 master-0 kubenswrapper[26425]: I0217 15:31:27.113435 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.113523 master-0 kubenswrapper[26425]: I0217 15:31:27.113498 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.113829 master-0 kubenswrapper[26425]: E0217 15:31:27.113796 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.113884 master-0 kubenswrapper[26425]: I0217 15:31:27.113835 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.113918 master-0 
kubenswrapper[26425]: E0217 15:31:27.113886 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.113918 master-0 kubenswrapper[26425]: I0217 15:31:27.113905 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.115197 master-0 kubenswrapper[26425]: E0217 15:31:27.114974 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.115197 master-0 kubenswrapper[26425]: I0217 15:31:27.115190 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd92ef556705625a2e4f1011322252" containerName="cluster-policy-controller" Feb 17 15:31:27.192989 master-0 kubenswrapper[26425]: I0217 15:31:27.192945 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c22abb517ba13d9db4b0c15e80ada3fe-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c22abb517ba13d9db4b0c15e80ada3fe\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:27.193130 master-0 kubenswrapper[26425]: I0217 15:31:27.193013 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c22abb517ba13d9db4b0c15e80ada3fe-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c22abb517ba13d9db4b0c15e80ada3fe\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:27.295215 master-0 kubenswrapper[26425]: I0217 15:31:27.295137 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c22abb517ba13d9db4b0c15e80ada3fe-resource-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"c22abb517ba13d9db4b0c15e80ada3fe\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:27.295482 master-0 kubenswrapper[26425]: I0217 15:31:27.295227 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c22abb517ba13d9db4b0c15e80ada3fe-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c22abb517ba13d9db4b0c15e80ada3fe\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:27.295482 master-0 kubenswrapper[26425]: I0217 15:31:27.295416 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c22abb517ba13d9db4b0c15e80ada3fe-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c22abb517ba13d9db4b0c15e80ada3fe\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:27.295482 master-0 kubenswrapper[26425]: I0217 15:31:27.295480 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c22abb517ba13d9db4b0c15e80ada3fe-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c22abb517ba13d9db4b0c15e80ada3fe\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:27.525605 master-0 kubenswrapper[26425]: I0217 15:31:27.525563 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/2.log" Feb 17 15:31:27.526442 master-0 kubenswrapper[26425]: I0217 15:31:27.526412 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/cluster-policy-controller/4.log" Feb 17 15:31:27.527011 master-0 
kubenswrapper[26425]: I0217 15:31:27.526977 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/1.log" Feb 17 15:31:27.528338 master-0 kubenswrapper[26425]: I0217 15:31:27.528293 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager/0.log" Feb 17 15:31:27.528395 master-0 kubenswrapper[26425]: I0217 15:31:27.528356 26425 generic.go:334] "Generic (PLEG): container finished" podID="27fd92ef556705625a2e4f1011322252" containerID="20262a51816e5646882d8f669782a57ee58ac55b3de280aac80c9b4ad5544a09" exitCode=2 Feb 17 15:31:27.528395 master-0 kubenswrapper[26425]: I0217 15:31:27.528381 26425 generic.go:334] "Generic (PLEG): container finished" podID="27fd92ef556705625a2e4f1011322252" containerID="efbfaa97348e69f0a49f3b2b302caecfbc9e14afd8c93921c11c9974de1b8c57" exitCode=0 Feb 17 15:31:27.528478 master-0 kubenswrapper[26425]: I0217 15:31:27.528397 26425 generic.go:334] "Generic (PLEG): container finished" podID="27fd92ef556705625a2e4f1011322252" containerID="4f3d983d4ccc46ef27a60861c65b81497fdb8faa3d16615f0e7d839d7e92efb0" exitCode=0 Feb 17 15:31:27.528478 master-0 kubenswrapper[26425]: I0217 15:31:27.528411 26425 generic.go:334] "Generic (PLEG): container finished" podID="27fd92ef556705625a2e4f1011322252" containerID="35fe638f6458381f305a5bf70c5f72c08dfe6647c1374e528fdd2425345b92ec" exitCode=0 Feb 17 15:31:27.528478 master-0 kubenswrapper[26425]: I0217 15:31:27.528445 26425 scope.go:117] "RemoveContainer" containerID="fcc22a077c839b880ed50e8a8777440b208baa2388423438583030d85d86b3c2" Feb 17 15:31:27.531951 master-0 kubenswrapper[26425]: I0217 15:31:27.531923 26425 generic.go:334] "Generic (PLEG): container finished" podID="6fcf23dc-fcf0-47a9-8913-13ad72185f5e" 
containerID="d7945383b92e3dee004f018926f5d6539c9dca46af3ccb5c1fa7f5279fe1f9e2" exitCode=0 Feb 17 15:31:27.532020 master-0 kubenswrapper[26425]: I0217 15:31:27.531957 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" event={"ID":"6fcf23dc-fcf0-47a9-8913-13ad72185f5e","Type":"ContainerDied","Data":"d7945383b92e3dee004f018926f5d6539c9dca46af3ccb5c1fa7f5279fe1f9e2"} Feb 17 15:31:27.534870 master-0 kubenswrapper[26425]: I0217 15:31:27.534830 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="27fd92ef556705625a2e4f1011322252" podUID="c22abb517ba13d9db4b0c15e80ada3fe" Feb 17 15:31:34.203940 master-0 kubenswrapper[26425]: I0217 15:31:34.203838 26425 scope.go:117] "RemoveContainer" containerID="9e006fd864abfe5f5a71ef2226e6c0a92dd2ca3012b138b3ee0116ddfdb035e0" Feb 17 15:31:34.248088 master-0 kubenswrapper[26425]: I0217 15:31:34.248042 26425 scope.go:117] "RemoveContainer" containerID="a93de2c6661a7a022268979fd5a510b5d956da3fa477eae77c55cc327249aabd" Feb 17 15:31:34.263848 master-0 kubenswrapper[26425]: I0217 15:31:34.263796 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Feb 17 15:31:34.274152 master-0 kubenswrapper[26425]: I0217 15:31:34.274092 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/2.log" Feb 17 15:31:34.276998 master-0 kubenswrapper[26425]: I0217 15:31:34.276930 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:34.285515 master-0 kubenswrapper[26425]: I0217 15:31:34.284788 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="27fd92ef556705625a2e4f1011322252" podUID="c22abb517ba13d9db4b0c15e80ada3fe" Feb 17 15:31:34.285991 master-0 kubenswrapper[26425]: I0217 15:31:34.285952 26425 patch_prober.go:28] interesting pod/console-86d4dfb9dd-rz6cj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 17 15:31:34.286042 master-0 kubenswrapper[26425]: I0217 15:31:34.285998 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 17 15:31:34.323943 master-0 kubenswrapper[26425]: I0217 15:31:34.323871 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-resource-dir\") pod \"27fd92ef556705625a2e4f1011322252\" (UID: \"27fd92ef556705625a2e4f1011322252\") " Feb 17 15:31:34.324161 master-0 kubenswrapper[26425]: I0217 15:31:34.323997 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-kube-api-access\") pod \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\" (UID: \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\") " Feb 17 15:31:34.324161 master-0 kubenswrapper[26425]: I0217 15:31:34.324041 26425 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "27fd92ef556705625a2e4f1011322252" (UID: "27fd92ef556705625a2e4f1011322252"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:31:34.324161 master-0 kubenswrapper[26425]: I0217 15:31:34.324127 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-var-lock" (OuterVolumeSpecName: "var-lock") pod "6fcf23dc-fcf0-47a9-8913-13ad72185f5e" (UID: "6fcf23dc-fcf0-47a9-8913-13ad72185f5e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:31:34.324161 master-0 kubenswrapper[26425]: I0217 15:31:34.324069 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-var-lock\") pod \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\" (UID: \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\") "
Feb 17 15:31:34.324296 master-0 kubenswrapper[26425]: I0217 15:31:34.324253 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-cert-dir\") pod \"27fd92ef556705625a2e4f1011322252\" (UID: \"27fd92ef556705625a2e4f1011322252\") "
Feb 17 15:31:34.324296 master-0 kubenswrapper[26425]: I0217 15:31:34.324291 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-kubelet-dir\") pod \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\" (UID: \"6fcf23dc-fcf0-47a9-8913-13ad72185f5e\") "
Feb 17 15:31:34.324380 master-0 kubenswrapper[26425]: I0217 15:31:34.324354 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "27fd92ef556705625a2e4f1011322252" (UID: "27fd92ef556705625a2e4f1011322252"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:31:34.324510 master-0 kubenswrapper[26425]: I0217 15:31:34.324485 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6fcf23dc-fcf0-47a9-8913-13ad72185f5e" (UID: "6fcf23dc-fcf0-47a9-8913-13ad72185f5e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:31:34.324756 master-0 kubenswrapper[26425]: I0217 15:31:34.324723 26425 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 17 15:31:34.324756 master-0 kubenswrapper[26425]: I0217 15:31:34.324748 26425 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:31:34.324842 master-0 kubenswrapper[26425]: I0217 15:31:34.324761 26425 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:31:34.324842 master-0 kubenswrapper[26425]: I0217 15:31:34.324774 26425 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/27fd92ef556705625a2e4f1011322252-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:31:34.339251 master-0 kubenswrapper[26425]: I0217 15:31:34.339194 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6fcf23dc-fcf0-47a9-8913-13ad72185f5e" (UID: "6fcf23dc-fcf0-47a9-8913-13ad72185f5e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:31:34.408595 master-0 kubenswrapper[26425]: I0217 15:31:34.408548 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27fd92ef556705625a2e4f1011322252" path="/var/lib/kubelet/pods/27fd92ef556705625a2e4f1011322252/volumes"
Feb 17 15:31:34.425775 master-0 kubenswrapper[26425]: I0217 15:31:34.425735 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fcf23dc-fcf0-47a9-8913-13ad72185f5e-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 17 15:31:34.430213 master-0 kubenswrapper[26425]: E0217 15:31:34.430179 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[alertmanager-trusted-ca-bundle secret-alertmanager-main-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/alertmanager-main-0" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922"
Feb 17 15:31:34.430290 master-0 kubenswrapper[26425]: E0217 15:31:34.430242 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-trusted-ca-bundle secret-prometheus-k8s-thanos-sidecar-tls secret-prometheus-k8s-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-k8s-0" podUID="7284bcca-864c-40df-b7dc-9aecf470697a"
Feb 17 15:31:34.588800 master-0 kubenswrapper[26425]: I0217 15:31:34.588744 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" event={"ID":"6fcf23dc-fcf0-47a9-8913-13ad72185f5e","Type":"ContainerDied","Data":"6e203f95d6a479e5cab09d6037a7ccb34ec5bf12bf5974d94825a27a79a69367"}
Feb 17 15:31:34.589185 master-0 kubenswrapper[26425]: I0217 15:31:34.589139 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e203f95d6a479e5cab09d6037a7ccb34ec5bf12bf5974d94825a27a79a69367"
Feb 17 15:31:34.589392 master-0 kubenswrapper[26425]: I0217 15:31:34.589080 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0"
Feb 17 15:31:34.591387 master-0 kubenswrapper[26425]: I0217 15:31:34.591323 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-dcd7b7d95-vtnfs" event={"ID":"ce863132-1dfd-40e4-b8df-0f699ac5f4cc","Type":"ContainerStarted","Data":"63ea32c6f4da9c077abdcdf69a20e1f91c5ce7e51c1dc1549600f3cbe1586a0b"}
Feb 17 15:31:34.591903 master-0 kubenswrapper[26425]: I0217 15:31:34.591864 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-dcd7b7d95-vtnfs"
Feb 17 15:31:34.593576 master-0 kubenswrapper[26425]: I0217 15:31:34.593538 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_27fd92ef556705625a2e4f1011322252/kube-controller-manager-cert-syncer/2.log"
Feb 17 15:31:34.594697 master-0 kubenswrapper[26425]: I0217 15:31:34.594663 26425 patch_prober.go:28] interesting pod/downloads-dcd7b7d95-vtnfs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.101:8080/\": dial tcp 10.128.0.101:8080: connect: connection refused" start-of-body=
Feb 17 15:31:34.594900 master-0 kubenswrapper[26425]: I0217 15:31:34.594863 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-dcd7b7d95-vtnfs" podUID="ce863132-1dfd-40e4-b8df-0f699ac5f4cc" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.101:8080/\": dial tcp 10.128.0.101:8080: connect: connection refused"
Feb 17 15:31:34.597305 master-0 kubenswrapper[26425]: I0217 15:31:34.597258 26425 scope.go:117] "RemoveContainer" containerID="20262a51816e5646882d8f669782a57ee58ac55b3de280aac80c9b4ad5544a09"
Feb 17 15:31:34.597407 master-0 kubenswrapper[26425]: I0217 15:31:34.597271 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:31:34.597407 master-0 kubenswrapper[26425]: I0217 15:31:34.597353 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:31:34.597535 master-0 kubenswrapper[26425]: I0217 15:31:34.597365 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:31:34.614286 master-0 kubenswrapper[26425]: I0217 15:31:34.614207 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="27fd92ef556705625a2e4f1011322252" podUID="c22abb517ba13d9db4b0c15e80ada3fe"
Feb 17 15:31:34.617983 master-0 kubenswrapper[26425]: I0217 15:31:34.617918 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-dcd7b7d95-vtnfs" podStartSLOduration=1.497803841 podStartE2EDuration="34.617902578s" podCreationTimestamp="2026-02-17 15:31:00 +0000 UTC" firstStartedPulling="2026-02-17 15:31:01.285920069 +0000 UTC m=+923.177643887" lastFinishedPulling="2026-02-17 15:31:34.406018766 +0000 UTC m=+956.297742624" observedRunningTime="2026-02-17 15:31:34.608771139 +0000 UTC m=+956.500495007" watchObservedRunningTime="2026-02-17 15:31:34.617902578 +0000 UTC m=+956.509626396"
Feb 17 15:31:34.621200 master-0 kubenswrapper[26425]: I0217 15:31:34.620712 26425 scope.go:117] "RemoveContainer" containerID="efbfaa97348e69f0a49f3b2b302caecfbc9e14afd8c93921c11c9974de1b8c57"
Feb 17 15:31:34.637535 master-0 kubenswrapper[26425]: I0217 15:31:34.637489 26425 scope.go:117] "RemoveContainer" containerID="4f3d983d4ccc46ef27a60861c65b81497fdb8faa3d16615f0e7d839d7e92efb0"
Feb 17 15:31:34.654426 master-0 kubenswrapper[26425]: I0217 15:31:34.654318 26425 scope.go:117] "RemoveContainer" containerID="35fe638f6458381f305a5bf70c5f72c08dfe6647c1374e528fdd2425345b92ec"
Feb 17 15:31:34.967374 master-0 kubenswrapper[26425]: I0217 15:31:34.967206 26425 patch_prober.go:28] interesting pod/console-98f66b5dc-p2gxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body=
Feb 17 15:31:34.967374 master-0 kubenswrapper[26425]: I0217 15:31:34.967283 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused"
Feb 17 15:31:35.607106 master-0 kubenswrapper[26425]: I0217 15:31:35.607034 26425 patch_prober.go:28] interesting pod/downloads-dcd7b7d95-vtnfs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.101:8080/\": dial tcp 10.128.0.101:8080: connect: connection refused" start-of-body=
Feb 17 15:31:35.607639 master-0 kubenswrapper[26425]: I0217 15:31:35.607136 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-dcd7b7d95-vtnfs" podUID="ce863132-1dfd-40e4-b8df-0f699ac5f4cc" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.101:8080/\": dial tcp 10.128.0.101:8080: connect: connection refused"
Feb 17 15:31:37.394311 master-0 kubenswrapper[26425]: I0217 15:31:37.394219 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:31:37.429554 master-0 kubenswrapper[26425]: I0217 15:31:37.429492 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3159f6ee-0079-4cee-b74f-dd37c852cf2a"
Feb 17 15:31:37.429554 master-0 kubenswrapper[26425]: I0217 15:31:37.429530 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3159f6ee-0079-4cee-b74f-dd37c852cf2a"
Feb 17 15:31:37.447274 master-0 kubenswrapper[26425]: I0217 15:31:37.447186 26425 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:31:37.464660 master-0 kubenswrapper[26425]: I0217 15:31:37.462144 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 17 15:31:37.464660 master-0 kubenswrapper[26425]: I0217 15:31:37.464479 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 17 15:31:37.476197 master-0 kubenswrapper[26425]: I0217 15:31:37.476133 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:31:37.481061 master-0 kubenswrapper[26425]: I0217 15:31:37.480943 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 17 15:31:37.510398 master-0 kubenswrapper[26425]: W0217 15:31:37.510317 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22abb517ba13d9db4b0c15e80ada3fe.slice/crio-f4451646ea134e0528ffd06753e38b0852a747e94c43489277e3649ac76a1cbd WatchSource:0}: Error finding container f4451646ea134e0528ffd06753e38b0852a747e94c43489277e3649ac76a1cbd: Status 404 returned error can't find the container with id f4451646ea134e0528ffd06753e38b0852a747e94c43489277e3649ac76a1cbd
Feb 17 15:31:37.623483 master-0 kubenswrapper[26425]: I0217 15:31:37.623411 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c22abb517ba13d9db4b0c15e80ada3fe","Type":"ContainerStarted","Data":"f4451646ea134e0528ffd06753e38b0852a747e94c43489277e3649ac76a1cbd"}
Feb 17 15:31:38.530053 master-0 kubenswrapper[26425]: I0217 15:31:38.529427 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:31:38.530940 master-0 kubenswrapper[26425]: I0217 15:31:38.530083 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:31:38.530940 master-0 kubenswrapper[26425]: I0217 15:31:38.530109 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:31:38.530940 master-0 kubenswrapper[26425]: I0217 15:31:38.530162 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:31:38.530940 master-0 kubenswrapper[26425]: I0217 15:31:38.530711 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:31:38.531708 master-0 kubenswrapper[26425]: I0217 15:31:38.531654 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:31:38.531942 master-0 kubenswrapper[26425]: I0217 15:31:38.531884 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:31:38.533784 master-0 kubenswrapper[26425]: I0217 15:31:38.533743 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:31:38.534388 master-0 kubenswrapper[26425]: I0217 15:31:38.534334 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:31:38.542380 master-0 kubenswrapper[26425]: I0217 15:31:38.542327 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:31:38.633942 master-0 kubenswrapper[26425]: I0217 15:31:38.633878 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c22abb517ba13d9db4b0c15e80ada3fe","Type":"ContainerStarted","Data":"a250c04983f3b0106f36a27030f78302d8c17ec6de5b6e5cded32664184f0f6e"}
Feb 17 15:31:38.800896 master-0 kubenswrapper[26425]: I0217 15:31:38.800744 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-pv4xc"
Feb 17 15:31:38.804246 master-0 kubenswrapper[26425]: I0217 15:31:38.804162 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-2tsl8"
Feb 17 15:31:38.809333 master-0 kubenswrapper[26425]: I0217 15:31:38.809276 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 17 15:31:38.809838 master-0 kubenswrapper[26425]: I0217 15:31:38.809755 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:31:39.364057 master-0 kubenswrapper[26425]: I0217 15:31:39.363562 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 17 15:31:39.382620 master-0 kubenswrapper[26425]: W0217 15:31:39.382560 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7284bcca_864c_40df_b7dc_9aecf470697a.slice/crio-5a690802a3c326e6a43b2e97f56648c461496fb55c540faca512821923c9d07c WatchSource:0}: Error finding container 5a690802a3c326e6a43b2e97f56648c461496fb55c540faca512821923c9d07c: Status 404 returned error can't find the container with id 5a690802a3c326e6a43b2e97f56648c461496fb55c540faca512821923c9d07c
Feb 17 15:31:39.515080 master-0 kubenswrapper[26425]: I0217 15:31:39.515016 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 17 15:31:39.543395 master-0 kubenswrapper[26425]: W0217 15:31:39.543300 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1115aa66_7b5c_4863_aa91_b28baff7e922.slice/crio-3dda46c86732a971ca11da424c7442ca446195bdca599d8c908ab71c564b253e WatchSource:0}: Error finding container 3dda46c86732a971ca11da424c7442ca446195bdca599d8c908ab71c564b253e: Status 404 returned error can't find the container with id 3dda46c86732a971ca11da424c7442ca446195bdca599d8c908ab71c564b253e
Feb 17 15:31:39.645217 master-0 kubenswrapper[26425]: I0217 15:31:39.645149 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c22abb517ba13d9db4b0c15e80ada3fe","Type":"ContainerStarted","Data":"83a7605533fa5b7aa413240443eee3c9aad88818eb25ab4aba4528a9db5327b6"}
Feb 17 15:31:39.645217 master-0 kubenswrapper[26425]: I0217 15:31:39.645201 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c22abb517ba13d9db4b0c15e80ada3fe","Type":"ContainerStarted","Data":"a55d7f0507bd3d765056a8a318a8966408ed2fc8a1c30292db147835ef568009"}
Feb 17 15:31:39.645217 master-0 kubenswrapper[26425]: I0217 15:31:39.645212 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c22abb517ba13d9db4b0c15e80ada3fe","Type":"ContainerStarted","Data":"2e1ff511db2c69486a763112ab46f8b9eb94ac1ab354236201ab57c41c24770d"}
Feb 17 15:31:39.646115 master-0 kubenswrapper[26425]: I0217 15:31:39.646065 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerStarted","Data":"5a690802a3c326e6a43b2e97f56648c461496fb55c540faca512821923c9d07c"}
Feb 17 15:31:39.646865 master-0 kubenswrapper[26425]: I0217 15:31:39.646830 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerStarted","Data":"3dda46c86732a971ca11da424c7442ca446195bdca599d8c908ab71c564b253e"}
Feb 17 15:31:40.657380 master-0 kubenswrapper[26425]: I0217 15:31:40.657296 26425 generic.go:334] "Generic (PLEG): container finished" podID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerID="1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9" exitCode=0
Feb 17 15:31:40.657380 master-0 kubenswrapper[26425]: I0217 15:31:40.657381 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerDied","Data":"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9"}
Feb 17 15:31:40.662479 master-0 kubenswrapper[26425]: I0217 15:31:40.662391 26425 generic.go:334] "Generic (PLEG): container finished" podID="7284bcca-864c-40df-b7dc-9aecf470697a" containerID="658ac603e541dd9359651742b5c146fca91edeacc594e7f8c19fa744fb622d49" exitCode=0
Feb 17 15:31:40.662654 master-0 kubenswrapper[26425]: I0217 15:31:40.662574 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerDied","Data":"658ac603e541dd9359651742b5c146fca91edeacc594e7f8c19fa744fb622d49"}
Feb 17 15:31:40.830503 master-0 kubenswrapper[26425]: I0217 15:31:40.830383 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-dcd7b7d95-vtnfs"
Feb 17 15:31:42.276165 master-0 kubenswrapper[26425]: I0217 15:31:42.276080 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=5.27605852 podStartE2EDuration="5.27605852s" podCreationTimestamp="2026-02-17 15:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:31:42.274928352 +0000 UTC m=+964.166652170" watchObservedRunningTime="2026-02-17 15:31:42.27605852 +0000 UTC m=+964.167782338"
Feb 17 15:31:42.596533 master-0 kubenswrapper[26425]: I0217 15:31:42.596406 26425 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 17 15:31:42.596792 master-0 kubenswrapper[26425]: I0217 15:31:42.596743 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver" containerID="cri-o://39d90e2b00141a0c491cc3ec8392a600a6a01595195a3aac176f6c4f99d06ad8" gracePeriod=15
Feb 17 15:31:42.596847 master-0 kubenswrapper[26425]: I0217 15:31:42.596780 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ea8fbc46bfc67699ac8dc3657e5080093940cd8742c87627ba3d795ee12841ab" gracePeriod=15
Feb 17 15:31:42.596907 master-0 kubenswrapper[26425]: I0217 15:31:42.596859 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-check-endpoints" containerID="cri-o://68a438a4e14f80804f842c0c44dfda76c0251a3c52afe081bbd14694a703898a" gracePeriod=15
Feb 17 15:31:42.596907 master-0 kubenswrapper[26425]: I0217 15:31:42.596903 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0a6f90db7355282c99c29dbf0363e0633a9d55c0e8f232d859147cef7d241a54" gracePeriod=15
Feb 17 15:31:42.596990 master-0 kubenswrapper[26425]: I0217 15:31:42.596934 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-syncer" containerID="cri-o://0f85b3342f5b9ee3681b487c6f9af1503246e3aa95e4fcb3fbc34dc5c76ae7fa" gracePeriod=15
Feb 17 15:31:42.597823 master-0 kubenswrapper[26425]: I0217 15:31:42.597645 26425 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 17 15:31:42.597982 master-0 kubenswrapper[26425]: E0217 15:31:42.597960 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-syncer"
Feb 17 15:31:42.598032 master-0 kubenswrapper[26425]: I0217 15:31:42.597981 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-syncer"
Feb 17 15:31:42.598032 master-0 kubenswrapper[26425]: E0217 15:31:42.597998 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcf23dc-fcf0-47a9-8913-13ad72185f5e" containerName="installer"
Feb 17 15:31:42.598032 master-0 kubenswrapper[26425]: I0217 15:31:42.598004 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcf23dc-fcf0-47a9-8913-13ad72185f5e" containerName="installer"
Feb 17 15:31:42.598032 master-0 kubenswrapper[26425]: E0217 15:31:42.598022 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-regeneration-controller"
Feb 17 15:31:42.598032 master-0 kubenswrapper[26425]: I0217 15:31:42.598030 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-regeneration-controller"
Feb 17 15:31:42.598170 master-0 kubenswrapper[26425]: E0217 15:31:42.598055 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-regeneration-controller"
Feb 17 15:31:42.598170 master-0 kubenswrapper[26425]: I0217 15:31:42.598064 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-regeneration-controller"
Feb 17 15:31:42.598170 master-0 kubenswrapper[26425]: E0217 15:31:42.598079 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-insecure-readyz"
Feb 17 15:31:42.598170 master-0 kubenswrapper[26425]: I0217 15:31:42.598085 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-insecure-readyz"
Feb 17 15:31:42.598170 master-0 kubenswrapper[26425]: E0217 15:31:42.598095 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-check-endpoints"
Feb 17 15:31:42.598170 master-0 kubenswrapper[26425]: I0217 15:31:42.598101 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-check-endpoints"
Feb 17 15:31:42.598170 master-0 kubenswrapper[26425]: E0217 15:31:42.598117 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e637b8575311b72d43b7b782d610a" containerName="setup"
Feb 17 15:31:42.598170 master-0 kubenswrapper[26425]: I0217 15:31:42.598124 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e637b8575311b72d43b7b782d610a" containerName="setup"
Feb 17 15:31:42.598170 master-0 kubenswrapper[26425]: E0217 15:31:42.598143 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver"
Feb 17 15:31:42.598170 master-0 kubenswrapper[26425]: I0217 15:31:42.598149 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver"
Feb 17 15:31:42.598550 master-0 kubenswrapper[26425]: I0217 15:31:42.598289 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-regeneration-controller"
Feb 17 15:31:42.598550 master-0 kubenswrapper[26425]: I0217 15:31:42.598315 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-regeneration-controller"
Feb 17 15:31:42.598550 master-0 kubenswrapper[26425]: I0217 15:31:42.598327 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-check-endpoints"
Feb 17 15:31:42.598550 master-0 kubenswrapper[26425]: I0217 15:31:42.598352 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-insecure-readyz"
Feb 17 15:31:42.598550 master-0 kubenswrapper[26425]: I0217 15:31:42.598368 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcf23dc-fcf0-47a9-8913-13ad72185f5e" containerName="installer"
Feb 17 15:31:42.598550 master-0 kubenswrapper[26425]: I0217 15:31:42.598378 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver"
Feb 17 15:31:42.598550 master-0 kubenswrapper[26425]: I0217 15:31:42.598406 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-syncer"
Feb 17 15:31:42.600000 master-0 kubenswrapper[26425]: I0217 15:31:42.599971 26425 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 17 15:31:42.600589 master-0 kubenswrapper[26425]: I0217 15:31:42.600560 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:31:42.712820 master-0 kubenswrapper[26425]: I0217 15:31:42.712774 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:31:42.712820 master-0 kubenswrapper[26425]: I0217 15:31:42.712817 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:31:42.713082 master-0 kubenswrapper[26425]: I0217 15:31:42.712865 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:31:42.713082 master-0 kubenswrapper[26425]: I0217 15:31:42.712883 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:31:42.713082 master-0 kubenswrapper[26425]: I0217 15:31:42.712906 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:31:42.713082 master-0 kubenswrapper[26425]: I0217 15:31:42.712924 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:31:42.713082 master-0 kubenswrapper[26425]: I0217 15:31:42.712990 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:31:42.713082 master-0 kubenswrapper[26425]: I0217 15:31:42.713006 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:31:42.792728 master-0 kubenswrapper[26425]: I0217 15:31:42.786376 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="619e637b8575311b72d43b7b782d610a" podUID="10e298020284b0e8ffa6a0bc184059d9"
Feb 17 15:31:42.814232 master-0 kubenswrapper[26425]: I0217 15:31:42.814185 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:31:42.814333 master-0 kubenswrapper[26425]: I0217 15:31:42.814233 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:31:42.814333 master-0 kubenswrapper[26425]: I0217 15:31:42.814305 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:31:42.814425 master-0 kubenswrapper[26425]: I0217 15:31:42.814332 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:31:42.814425 master-0 kubenswrapper[26425]: I0217 15:31:42.814339 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:31:42.814425 master-0 kubenswrapper[26425]: I0217 15:31:42.814369 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:31:42.814580 master-0 kubenswrapper[26425]: I0217 15:31:42.814429 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:31:42.814580 master-0 kubenswrapper[26425]: I0217 15:31:42.814439 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:31:42.814745 master-0 kubenswrapper[26425]: I0217 15:31:42.814649 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:31:42.814745 master-0 kubenswrapper[26425]: I0217 15:31:42.814726 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:31:42.814745 master-0 kubenswrapper[26425]: I0217 15:31:42.814711 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:31:42.814882 master-0 kubenswrapper[26425]: I0217 15:31:42.814669 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:31:42.815075 master-0 kubenswrapper[26425]: I0217 15:31:42.815033 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:31:42.815075 master-0 kubenswrapper[26425]: I0217 15:31:42.815053 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:31:42.815177 master-0 kubenswrapper[26425]: I0217 15:31:42.815085 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:31:42.815230 master-0 kubenswrapper[26425]: I0217 15:31:42.815195 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:31:42.839796 master-0 kubenswrapper[26425]: I0217 15:31:42.839716 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:31:42.854433 master-0 kubenswrapper[26425]: I0217 15:31:42.854368 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 17 15:31:43.686885 master-0 kubenswrapper[26425]: I0217 15:31:43.686762 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_619e637b8575311b72d43b7b782d610a/kube-apiserver-cert-syncer/0.log" Feb 17 15:31:43.687418 master-0 kubenswrapper[26425]: I0217 15:31:43.687385 26425 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="ea8fbc46bfc67699ac8dc3657e5080093940cd8742c87627ba3d795ee12841ab" exitCode=0 Feb 17 15:31:43.687418 master-0 kubenswrapper[26425]: I0217 15:31:43.687408 26425 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="68a438a4e14f80804f842c0c44dfda76c0251a3c52afe081bbd14694a703898a" exitCode=0 Feb 17 15:31:43.687418 master-0 kubenswrapper[26425]: I0217 15:31:43.687416 26425 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="0a6f90db7355282c99c29dbf0363e0633a9d55c0e8f232d859147cef7d241a54" exitCode=0 Feb 17 15:31:43.687574 
master-0 kubenswrapper[26425]: I0217 15:31:43.687424 26425 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="0f85b3342f5b9ee3681b487c6f9af1503246e3aa95e4fcb3fbc34dc5c76ae7fa" exitCode=2 Feb 17 15:31:43.687574 master-0 kubenswrapper[26425]: I0217 15:31:43.687500 26425 scope.go:117] "RemoveContainer" containerID="88cbd41012314cb9ee211332196a857cc4bf4c35b6149a5c3069d9a70f29b51a" Feb 17 15:31:43.689309 master-0 kubenswrapper[26425]: I0217 15:31:43.689272 26425 generic.go:334] "Generic (PLEG): container finished" podID="78028ec2-59c0-459d-b148-e84842b5aea8" containerID="0872c44be2b2c46697b1111d6bcc7da9349617a884aa419f69046c275a840215" exitCode=0 Feb 17 15:31:43.689375 master-0 kubenswrapper[26425]: I0217 15:31:43.689330 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"78028ec2-59c0-459d-b148-e84842b5aea8","Type":"ContainerDied","Data":"0872c44be2b2c46697b1111d6bcc7da9349617a884aa419f69046c275a840215"} Feb 17 15:31:43.690481 master-0 kubenswrapper[26425]: I0217 15:31:43.690420 26425 status_manager.go:851] "Failed to get status for pod" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:43.690842 master-0 kubenswrapper[26425]: I0217 15:31:43.690769 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"32286c81635de6de1cf7f328273c1a49","Type":"ContainerStarted","Data":"4b02007b99efaca413d6539c39e30b074c5f7a4327fab4f7d8375b8dcf8656a3"} Feb 17 15:31:43.690892 master-0 kubenswrapper[26425]: I0217 15:31:43.690850 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"32286c81635de6de1cf7f328273c1a49","Type":"ContainerStarted","Data":"1bb43b1748b512fef857ded88d9d725301efdf2411c83f5c3589696bac8839cf"} Feb 17 15:31:43.691728 master-0 kubenswrapper[26425]: I0217 15:31:43.691683 26425 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:43.692193 master-0 kubenswrapper[26425]: I0217 15:31:43.692157 26425 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:43.692620 master-0 kubenswrapper[26425]: I0217 15:31:43.692591 26425 status_manager.go:851] "Failed to get status for pod" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:44.286944 master-0 kubenswrapper[26425]: I0217 15:31:44.286848 26425 patch_prober.go:28] interesting pod/console-86d4dfb9dd-rz6cj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 17 15:31:44.286944 master-0 kubenswrapper[26425]: I0217 15:31:44.286918 26425 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 17 15:31:44.967270 master-0 kubenswrapper[26425]: I0217 15:31:44.967097 26425 patch_prober.go:28] interesting pod/console-98f66b5dc-p2gxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 17 15:31:44.967270 master-0 kubenswrapper[26425]: I0217 15:31:44.967157 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 17 15:31:44.996255 master-0 kubenswrapper[26425]: E0217 15:31:44.996160 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:44.996885 master-0 kubenswrapper[26425]: E0217 15:31:44.996824 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:44.997745 master-0 kubenswrapper[26425]: E0217 15:31:44.997531 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:44.998241 master-0 
kubenswrapper[26425]: E0217 15:31:44.998181 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:44.998954 master-0 kubenswrapper[26425]: E0217 15:31:44.998887 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:44.998954 master-0 kubenswrapper[26425]: I0217 15:31:44.998935 26425 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 17 15:31:44.999608 master-0 kubenswrapper[26425]: E0217 15:31:44.999550 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 17 15:31:45.201604 master-0 kubenswrapper[26425]: E0217 15:31:45.201437 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 17 15:31:45.603932 master-0 kubenswrapper[26425]: E0217 15:31:45.603802 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 17 15:31:45.830440 master-0 kubenswrapper[26425]: E0217 15:31:45.821955 26425 kubelet_node_status.go:585] 
"Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:31:45Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:31:45Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:31:45Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:31:45Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7fccb6e19eb4caa16d32f4cf59670c2c741c98b099d1f12368b85aab3f84dc38\\\"],\\\"sizeBytes\\\":2890715256},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3e90d0a6840e7f67900c763906a0628ddf209cb666c54c2dda0f4a84964a5cec\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:c71d0b62dff668e0f4be49e4976deda87032ae569a87f53898bd9e5489d8a621\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701476551},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:14398311b101163ddd1de78c093e161c5d3c9aac51a04e3d3d842fca6317ab0f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:5a091792b99bf4dfaec25f4c8e29da579e2f452d48b924c8323a18accb7f3290\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234637517},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:ad
77d0ead8abca8b884fad3be18215dbe8b4f8f098053551e4a899298cf5c918\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:b5338e2ca87e0b47fec93f55559f0ed6b39eef3ed3b7f085a4f0b205ccb86a5d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1213306565},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\"],\\\"sizeBytes\\\":913084961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13\\\"],\\\"sizeBytes\\\":875178413},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9\\\"],\\\"sizeBytes\\\":857023173},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39
a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2\\\"],\\\"sizeBytes\\\":628694305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471\\\"],\\\"sizeBytes\\\":552251951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f\\\"],\\\"sizeBytes\\\":508404525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730\\\"],\\\"sizeBytes\\\":507065596},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\
"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30\\\"],\\\"sizeBytes\\\":499489508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9\\\"],\\\"sizeBytes\\\":497535620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb\\\"],\\\"sizeBytes\\\":481879166}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:45.835224 master-0 kubenswrapper[26425]: E0217 15:31:45.831903 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:45.839619 master-0 kubenswrapper[26425]: E0217 15:31:45.836779 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:45.849192 master-0 kubenswrapper[26425]: E0217 15:31:45.844276 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:45.853021 master-0 kubenswrapper[26425]: E0217 15:31:45.852803 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:45.853021 master-0 kubenswrapper[26425]: E0217 15:31:45.852841 26425 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:31:46.409633 master-0 kubenswrapper[26425]: E0217 15:31:46.409493 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 17 15:31:46.740493 master-0 kubenswrapper[26425]: I0217 15:31:46.740346 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_619e637b8575311b72d43b7b782d610a/kube-apiserver-cert-syncer/0.log" Feb 17 15:31:46.741778 master-0 kubenswrapper[26425]: I0217 15:31:46.741744 26425 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="39d90e2b00141a0c491cc3ec8392a600a6a01595195a3aac176f6c4f99d06ad8" exitCode=0 Feb 17 15:31:47.477173 master-0 kubenswrapper[26425]: I0217 15:31:47.477081 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:47.477173 master-0 kubenswrapper[26425]: I0217 15:31:47.477154 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:47.477173 master-0 kubenswrapper[26425]: I0217 15:31:47.477175 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:47.477939 master-0 kubenswrapper[26425]: I0217 15:31:47.477194 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:47.482032 master-0 kubenswrapper[26425]: I0217 15:31:47.481989 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:47.482591 master-0 kubenswrapper[26425]: I0217 15:31:47.482549 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:47.483697 master-0 kubenswrapper[26425]: I0217 15:31:47.483623 26425 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:47.484401 master-0 kubenswrapper[26425]: I0217 15:31:47.484338 26425 status_manager.go:851] "Failed to get status for pod" podUID="c22abb517ba13d9db4b0c15e80ada3fe" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:47.485211 master-0 kubenswrapper[26425]: I0217 15:31:47.485147 26425 status_manager.go:851] "Failed to get status for pod" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:47.485979 master-0 kubenswrapper[26425]: I0217 15:31:47.485917 26425 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:47.486695 master-0 kubenswrapper[26425]: I0217 15:31:47.486641 26425 status_manager.go:851] "Failed to get status for pod" 
podUID="c22abb517ba13d9db4b0c15e80ada3fe" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:47.487353 master-0 kubenswrapper[26425]: I0217 15:31:47.487289 26425 status_manager.go:851] "Failed to get status for pod" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:47.759222 master-0 kubenswrapper[26425]: I0217 15:31:47.759070 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:47.760672 master-0 kubenswrapper[26425]: I0217 15:31:47.760558 26425 status_manager.go:851] "Failed to get status for pod" podUID="c22abb517ba13d9db4b0c15e80ada3fe" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:47.761713 master-0 kubenswrapper[26425]: I0217 15:31:47.761631 26425 status_manager.go:851] "Failed to get status for pod" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:47.762698 master-0 kubenswrapper[26425]: I0217 15:31:47.762620 26425 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:47.946695 master-0 kubenswrapper[26425]: E0217 15:31:47.946423 26425 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189512708664ddab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:32286c81635de6de1cf7f328273c1a49,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:31:42.940552619 +0000 UTC m=+964.832276457,LastTimestamp:2026-02-17 15:31:42.940552619 +0000 UTC m=+964.832276457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:31:48.014948 master-0 kubenswrapper[26425]: E0217 15:31:48.011926 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 17 15:31:48.406826 master-0 kubenswrapper[26425]: I0217 15:31:48.406689 26425 status_manager.go:851] "Failed to get status for pod" 
podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:48.408171 master-0 kubenswrapper[26425]: I0217 15:31:48.408073 26425 status_manager.go:851] "Failed to get status for pod" podUID="c22abb517ba13d9db4b0c15e80ada3fe" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:48.409362 master-0 kubenswrapper[26425]: I0217 15:31:48.409231 26425 status_manager.go:851] "Failed to get status for pod" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:48.767033 master-0 kubenswrapper[26425]: I0217 15:31:48.766889 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:31:48.768341 master-0 kubenswrapper[26425]: I0217 15:31:48.768029 26425 status_manager.go:851] "Failed to get status for pod" podUID="c22abb517ba13d9db4b0c15e80ada3fe" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:48.769601 master-0 kubenswrapper[26425]: I0217 15:31:48.768872 26425 status_manager.go:851] "Failed to get status for pod" 
podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:48.770133 master-0 kubenswrapper[26425]: I0217 15:31:48.769904 26425 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:48.989239 master-0 kubenswrapper[26425]: E0217 15:31:48.989057 26425 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189512708664ddab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:32286c81635de6de1cf7f328273c1a49,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:31:42.940552619 +0000 UTC m=+964.832276457,LastTimestamp:2026-02-17 15:31:42.940552619 +0000 UTC m=+964.832276457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:31:49.040681 master-0 
kubenswrapper[26425]: I0217 15:31:49.040561 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 17 15:31:49.041284 master-0 kubenswrapper[26425]: I0217 15:31:49.041248 26425 status_manager.go:851] "Failed to get status for pod" podUID="c22abb517ba13d9db4b0c15e80ada3fe" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:49.042041 master-0 kubenswrapper[26425]: I0217 15:31:49.042004 26425 status_manager.go:851] "Failed to get status for pod" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:49.042696 master-0 kubenswrapper[26425]: I0217 15:31:49.042652 26425 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:49.135830 master-0 kubenswrapper[26425]: I0217 15:31:49.135781 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78028ec2-59c0-459d-b148-e84842b5aea8-kube-api-access\") pod \"78028ec2-59c0-459d-b148-e84842b5aea8\" (UID: \"78028ec2-59c0-459d-b148-e84842b5aea8\") " Feb 17 15:31:49.136051 master-0 kubenswrapper[26425]: I0217 15:31:49.135905 26425 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78028ec2-59c0-459d-b148-e84842b5aea8-var-lock\") pod \"78028ec2-59c0-459d-b148-e84842b5aea8\" (UID: \"78028ec2-59c0-459d-b148-e84842b5aea8\") " Feb 17 15:31:49.136051 master-0 kubenswrapper[26425]: I0217 15:31:49.135980 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78028ec2-59c0-459d-b148-e84842b5aea8-kubelet-dir\") pod \"78028ec2-59c0-459d-b148-e84842b5aea8\" (UID: \"78028ec2-59c0-459d-b148-e84842b5aea8\") " Feb 17 15:31:49.136051 master-0 kubenswrapper[26425]: I0217 15:31:49.135987 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78028ec2-59c0-459d-b148-e84842b5aea8-var-lock" (OuterVolumeSpecName: "var-lock") pod "78028ec2-59c0-459d-b148-e84842b5aea8" (UID: "78028ec2-59c0-459d-b148-e84842b5aea8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:31:49.136221 master-0 kubenswrapper[26425]: I0217 15:31:49.136194 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78028ec2-59c0-459d-b148-e84842b5aea8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "78028ec2-59c0-459d-b148-e84842b5aea8" (UID: "78028ec2-59c0-459d-b148-e84842b5aea8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:31:49.136410 master-0 kubenswrapper[26425]: I0217 15:31:49.136379 26425 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78028ec2-59c0-459d-b148-e84842b5aea8-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:31:49.136410 master-0 kubenswrapper[26425]: I0217 15:31:49.136400 26425 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78028ec2-59c0-459d-b148-e84842b5aea8-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:31:49.140929 master-0 kubenswrapper[26425]: I0217 15:31:49.140882 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78028ec2-59c0-459d-b148-e84842b5aea8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "78028ec2-59c0-459d-b148-e84842b5aea8" (UID: "78028ec2-59c0-459d-b148-e84842b5aea8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:31:49.239306 master-0 kubenswrapper[26425]: I0217 15:31:49.239036 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78028ec2-59c0-459d-b148-e84842b5aea8-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 17 15:31:49.774687 master-0 kubenswrapper[26425]: I0217 15:31:49.774639 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_619e637b8575311b72d43b7b782d610a/kube-apiserver-cert-syncer/0.log" Feb 17 15:31:49.778144 master-0 kubenswrapper[26425]: I0217 15:31:49.777634 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 17 15:31:49.788595 master-0 kubenswrapper[26425]: I0217 15:31:49.788516 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"78028ec2-59c0-459d-b148-e84842b5aea8","Type":"ContainerDied","Data":"c077e376a933343ed0b15736f5f88ca9163435967caafc4f5f0b0a3c6e77b1d0"} Feb 17 15:31:49.788595 master-0 kubenswrapper[26425]: I0217 15:31:49.788557 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c077e376a933343ed0b15736f5f88ca9163435967caafc4f5f0b0a3c6e77b1d0" Feb 17 15:31:49.802573 master-0 kubenswrapper[26425]: I0217 15:31:49.802431 26425 status_manager.go:851] "Failed to get status for pod" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:49.803376 master-0 kubenswrapper[26425]: I0217 15:31:49.803281 26425 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:49.804443 master-0 kubenswrapper[26425]: I0217 15:31:49.804394 26425 status_manager.go:851] "Failed to get status for pod" podUID="c22abb517ba13d9db4b0c15e80ada3fe" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:50.911855 master-0 
kubenswrapper[26425]: I0217 15:31:50.911775 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_619e637b8575311b72d43b7b782d610a/kube-apiserver-cert-syncer/0.log" Feb 17 15:31:50.914342 master-0 kubenswrapper[26425]: I0217 15:31:50.914287 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:31:50.915809 master-0 kubenswrapper[26425]: I0217 15:31:50.915715 26425 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:50.916891 master-0 kubenswrapper[26425]: I0217 15:31:50.916814 26425 status_manager.go:851] "Failed to get status for pod" podUID="c22abb517ba13d9db4b0c15e80ada3fe" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:50.918097 master-0 kubenswrapper[26425]: I0217 15:31:50.918030 26425 status_manager.go:851] "Failed to get status for pod" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:50.919005 master-0 kubenswrapper[26425]: I0217 15:31:50.918941 26425 status_manager.go:851] "Failed to get status for pod" podUID="619e637b8575311b72d43b7b782d610a" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:50.973176 master-0 kubenswrapper[26425]: I0217 15:31:50.973023 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") pod \"619e637b8575311b72d43b7b782d610a\" (UID: \"619e637b8575311b72d43b7b782d610a\") " Feb 17 15:31:50.973570 master-0 kubenswrapper[26425]: I0217 15:31:50.973239 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") pod \"619e637b8575311b72d43b7b782d610a\" (UID: \"619e637b8575311b72d43b7b782d610a\") " Feb 17 15:31:50.973570 master-0 kubenswrapper[26425]: I0217 15:31:50.973295 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") pod \"619e637b8575311b72d43b7b782d610a\" (UID: \"619e637b8575311b72d43b7b782d610a\") " Feb 17 15:31:50.973570 master-0 kubenswrapper[26425]: I0217 15:31:50.973483 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "619e637b8575311b72d43b7b782d610a" (UID: "619e637b8575311b72d43b7b782d610a"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:31:50.973873 master-0 kubenswrapper[26425]: I0217 15:31:50.973574 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "619e637b8575311b72d43b7b782d610a" (UID: "619e637b8575311b72d43b7b782d610a"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:31:50.973873 master-0 kubenswrapper[26425]: I0217 15:31:50.973797 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "619e637b8575311b72d43b7b782d610a" (UID: "619e637b8575311b72d43b7b782d610a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:31:50.973873 master-0 kubenswrapper[26425]: I0217 15:31:50.973825 26425 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:31:50.973873 master-0 kubenswrapper[26425]: I0217 15:31:50.973849 26425 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:31:51.075163 master-0 kubenswrapper[26425]: I0217 15:31:51.075029 26425 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:31:51.214617 master-0 kubenswrapper[26425]: E0217 15:31:51.214480 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 17 15:31:51.815641 master-0 kubenswrapper[26425]: I0217 15:31:51.815533 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_619e637b8575311b72d43b7b782d610a/kube-apiserver-cert-syncer/0.log" Feb 17 15:31:51.817064 master-0 
kubenswrapper[26425]: I0217 15:31:51.816994 26425 scope.go:117] "RemoveContainer" containerID="ea8fbc46bfc67699ac8dc3657e5080093940cd8742c87627ba3d795ee12841ab" Feb 17 15:31:51.817064 master-0 kubenswrapper[26425]: I0217 15:31:51.817060 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:31:51.820751 master-0 kubenswrapper[26425]: I0217 15:31:51.820680 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerStarted","Data":"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e"} Feb 17 15:31:51.856246 master-0 kubenswrapper[26425]: I0217 15:31:51.856134 26425 status_manager.go:851] "Failed to get status for pod" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:51.856955 master-0 kubenswrapper[26425]: I0217 15:31:51.856859 26425 status_manager.go:851] "Failed to get status for pod" podUID="619e637b8575311b72d43b7b782d610a" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:51.857992 master-0 kubenswrapper[26425]: I0217 15:31:51.857856 26425 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 
15:31:51.859717 master-0 kubenswrapper[26425]: I0217 15:31:51.859624 26425 status_manager.go:851] "Failed to get status for pod" podUID="c22abb517ba13d9db4b0c15e80ada3fe" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:51.862064 master-0 kubenswrapper[26425]: I0217 15:31:51.861660 26425 scope.go:117] "RemoveContainer" containerID="68a438a4e14f80804f842c0c44dfda76c0251a3c52afe081bbd14694a703898a" Feb 17 15:31:51.886066 master-0 kubenswrapper[26425]: I0217 15:31:51.885966 26425 scope.go:117] "RemoveContainer" containerID="0a6f90db7355282c99c29dbf0363e0633a9d55c0e8f232d859147cef7d241a54" Feb 17 15:31:51.910109 master-0 kubenswrapper[26425]: I0217 15:31:51.910021 26425 scope.go:117] "RemoveContainer" containerID="0f85b3342f5b9ee3681b487c6f9af1503246e3aa95e4fcb3fbc34dc5c76ae7fa" Feb 17 15:31:51.931231 master-0 kubenswrapper[26425]: I0217 15:31:51.931167 26425 scope.go:117] "RemoveContainer" containerID="39d90e2b00141a0c491cc3ec8392a600a6a01595195a3aac176f6c4f99d06ad8" Feb 17 15:31:51.966740 master-0 kubenswrapper[26425]: I0217 15:31:51.966691 26425 scope.go:117] "RemoveContainer" containerID="2128d8d38323586ed6d9716f5c0be6569fe807cb8c9948bb819a8f728039d87d" Feb 17 15:31:52.411826 master-0 kubenswrapper[26425]: I0217 15:31:52.411763 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="619e637b8575311b72d43b7b782d610a" path="/var/lib/kubelet/pods/619e637b8575311b72d43b7b782d610a/volumes" Feb 17 15:31:52.843657 master-0 kubenswrapper[26425]: I0217 15:31:52.843556 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerStarted","Data":"4447ceb23c1d4facb08760700abd426c411bbf6b4811632582d89ef957716e66"} Feb 
17 15:31:54.285872 master-0 kubenswrapper[26425]: I0217 15:31:54.285797 26425 patch_prober.go:28] interesting pod/console-86d4dfb9dd-rz6cj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 17 15:31:54.285872 master-0 kubenswrapper[26425]: I0217 15:31:54.285869 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 17 15:31:54.913313 master-0 kubenswrapper[26425]: I0217 15:31:54.913207 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerStarted","Data":"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd"} Feb 17 15:31:54.917980 master-0 kubenswrapper[26425]: I0217 15:31:54.917929 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerStarted","Data":"5b880952e43c162fdf7249d632e1b7db55215a5ce8dea0be9d7f9249af484e1b"} Feb 17 15:31:54.966943 master-0 kubenswrapper[26425]: I0217 15:31:54.966790 26425 patch_prober.go:28] interesting pod/console-98f66b5dc-p2gxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 17 15:31:54.966943 master-0 kubenswrapper[26425]: I0217 15:31:54.966873 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get 
\"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 17 15:31:55.928060 master-0 kubenswrapper[26425]: I0217 15:31:55.927973 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerStarted","Data":"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0"} Feb 17 15:31:55.932347 master-0 kubenswrapper[26425]: I0217 15:31:55.932280 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerStarted","Data":"755bcfc2451098b86204efb1064608fc839aaba5498c364378fe3e4492975625"} Feb 17 15:31:55.977364 master-0 kubenswrapper[26425]: E0217 15:31:55.977112 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:31:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:31:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:31:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:31:55Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7fccb6e19eb4caa16d32f4cf59670c2c741c98b099d1f12368b85aab3f84dc38\\\"],\\\"sizeBytes\\\":2890715256},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3e90d0a6840e7f67900c763906a0628ddf209cb666c54c2dda0f4a84964a5cec\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:c71d0b62dff668e0f4be49e4976deda87032ae569a87f53898bd9e5489d8a621\\\",\\\"registry.redhat.io/redhat/r
edhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701476551},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:14398311b101163ddd1de78c093e161c5d3c9aac51a04e3d3d842fca6317ab0f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:5a091792b99bf4dfaec25f4c8e29da579e2f452d48b924c8323a18accb7f3290\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234637517},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:ad77d0ead8abca8b884fad3be18215dbe8b4f8f098053551e4a899298cf5c918\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:b5338e2ca87e0b47fec93f55559f0ed6b39eef3ed3b7f085a4f0b205ccb86a5d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1213306565},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48
f212e2ec404387bc\\\"],\\\"sizeBytes\\\":913084961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13\\\"],\\\"sizeBytes\\\":875178413},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9\\\"],\\\"sizeBytes\\\":857023173},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2\\\"],\\\"sizeBytes\\\":628694305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471\\\"],\\\"sizeBytes\\\":552251951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f\\\"],\\\"sizeBytes\\\":508404525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"
sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730\\\"],\\\"sizeBytes\\\":507065596},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30\\\"],\\\"sizeBytes\\\":499489508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9\\\"],\\\"sizeBytes\\\":497535620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb\\\"],\\\"sizeBytes\\\":481879166}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:55.978436 master-0 kubenswrapper[26425]: E0217 15:31:55.978387 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:55.979005 master-0 kubenswrapper[26425]: E0217 15:31:55.978975 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:55.979650 master-0 kubenswrapper[26425]: E0217 15:31:55.979597 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:55.980260 master-0 kubenswrapper[26425]: E0217 15:31:55.980207 26425 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:55.980260 master-0 kubenswrapper[26425]: E0217 15:31:55.980245 26425 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:31:56.947163 master-0 kubenswrapper[26425]: I0217 15:31:56.946984 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerStarted","Data":"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917"} Feb 17 15:31:56.955171 master-0 kubenswrapper[26425]: I0217 15:31:56.955089 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerStarted","Data":"30845f09794de19ccb491a056c81a6e3440a61b00911226c4004f95138579471"} Feb 17 15:31:57.616731 master-0 kubenswrapper[26425]: E0217 15:31:57.616604 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Feb 17 15:31:57.967594 master-0 kubenswrapper[26425]: I0217 15:31:57.967315 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerStarted","Data":"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b"} Feb 
17 15:31:58.394717 master-0 kubenswrapper[26425]: I0217 15:31:58.394646 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:31:58.415196 master-0 kubenswrapper[26425]: I0217 15:31:58.415073 26425 status_manager.go:851] "Failed to get status for pod" podUID="c22abb517ba13d9db4b0c15e80ada3fe" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:58.416254 master-0 kubenswrapper[26425]: I0217 15:31:58.416167 26425 status_manager.go:851] "Failed to get status for pod" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:58.417217 master-0 kubenswrapper[26425]: I0217 15:31:58.417124 26425 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:58.418745 master-0 kubenswrapper[26425]: I0217 15:31:58.418681 26425 status_manager.go:851] "Failed to get status for pod" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:58.419937 master-0 kubenswrapper[26425]: I0217 
15:31:58.419850 26425 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:58.422092 master-0 kubenswrapper[26425]: I0217 15:31:58.420839 26425 status_manager.go:851] "Failed to get status for pod" podUID="c22abb517ba13d9db4b0c15e80ada3fe" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:31:58.455896 master-0 kubenswrapper[26425]: I0217 15:31:58.455815 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d78c63f7-c5a4-4990-9307-341ff59d3959" Feb 17 15:31:58.455896 master-0 kubenswrapper[26425]: I0217 15:31:58.455874 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d78c63f7-c5a4-4990-9307-341ff59d3959" Feb 17 15:31:58.457203 master-0 kubenswrapper[26425]: E0217 15:31:58.457127 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:31:58.457987 master-0 kubenswrapper[26425]: I0217 15:31:58.457936 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:31:58.496187 master-0 kubenswrapper[26425]: W0217 15:31:58.496095 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10e298020284b0e8ffa6a0bc184059d9.slice/crio-806b8d9da9df4d41a3bfcb991da9e314a084d79a5872f12403446e30cb0fd4a3 WatchSource:0}: Error finding container 806b8d9da9df4d41a3bfcb991da9e314a084d79a5872f12403446e30cb0fd4a3: Status 404 returned error can't find the container with id 806b8d9da9df4d41a3bfcb991da9e314a084d79a5872f12403446e30cb0fd4a3 Feb 17 15:31:58.980228 master-0 kubenswrapper[26425]: I0217 15:31:58.980152 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"806b8d9da9df4d41a3bfcb991da9e314a084d79a5872f12403446e30cb0fd4a3"} Feb 17 15:31:58.985289 master-0 kubenswrapper[26425]: I0217 15:31:58.985251 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerStarted","Data":"c17e6e0ffb2100550235ef51822ac385fadd80df618190dad159ce0d25c6aeda"} Feb 17 15:31:58.990829 master-0 kubenswrapper[26425]: E0217 15:31:58.990641 26425 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189512708664ddab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:32286c81635de6de1cf7f328273c1a49,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:31:42.940552619 +0000 UTC m=+964.832276457,LastTimestamp:2026-02-17 15:31:42.940552619 +0000 UTC m=+964.832276457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:32:00.000192 master-0 kubenswrapper[26425]: I0217 15:31:59.999959 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"8e8c3e75d705a4556742a2ccd4c9c153d9b87df6d2bac06c76299dd80b3e66cd"} Feb 17 15:32:01.022172 master-0 kubenswrapper[26425]: I0217 15:32:01.021956 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerStarted","Data":"3bef16d6a5c7c4c3b645d3c355aa1a41faba5d711790e01525694cbdeb738180"} Feb 17 15:32:01.025212 master-0 kubenswrapper[26425]: I0217 15:32:01.025140 26425 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="8e8c3e75d705a4556742a2ccd4c9c153d9b87df6d2bac06c76299dd80b3e66cd" exitCode=0 Feb 17 15:32:01.025329 master-0 kubenswrapper[26425]: I0217 15:32:01.025264 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerDied","Data":"8e8c3e75d705a4556742a2ccd4c9c153d9b87df6d2bac06c76299dd80b3e66cd"} Feb 17 15:32:01.025862 master-0 kubenswrapper[26425]: I0217 15:32:01.025807 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d78c63f7-c5a4-4990-9307-341ff59d3959" Feb 17 15:32:01.025967 master-0 kubenswrapper[26425]: I0217 15:32:01.025892 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d78c63f7-c5a4-4990-9307-341ff59d3959" Feb 17 15:32:01.026992 master-0 kubenswrapper[26425]: I0217 15:32:01.026925 26425 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:32:01.028042 master-0 kubenswrapper[26425]: I0217 15:32:01.027840 26425 status_manager.go:851] "Failed to get status for pod" podUID="c22abb517ba13d9db4b0c15e80ada3fe" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:32:01.028325 master-0 kubenswrapper[26425]: E0217 15:32:01.028215 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:32:01.029362 master-0 kubenswrapper[26425]: I0217 15:32:01.029282 26425 status_manager.go:851] "Failed to get status 
for pod" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:32:01.031249 master-0 kubenswrapper[26425]: I0217 15:32:01.031188 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerStarted","Data":"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801"} Feb 17 15:32:01.032659 master-0 kubenswrapper[26425]: I0217 15:32:01.032555 26425 status_manager.go:851] "Failed to get status for pod" podUID="c22abb517ba13d9db4b0c15e80ada3fe" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:32:01.033596 master-0 kubenswrapper[26425]: I0217 15:32:01.033517 26425 status_manager.go:851] "Failed to get status for pod" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:32:01.034614 master-0 kubenswrapper[26425]: I0217 15:32:01.034549 26425 status_manager.go:851] "Failed to get status for pod" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:32:01.035402 master-0 kubenswrapper[26425]: I0217 15:32:01.035332 26425 status_manager.go:851] 
"Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:32:02.058181 master-0 kubenswrapper[26425]: I0217 15:32:02.058055 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"e896f0cd2ed3d0e86ed77cce11c1033db71e0bf17ad78817b504fa3ddcb04f6d"} Feb 17 15:32:03.810221 master-0 kubenswrapper[26425]: I0217 15:32:03.810178 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:32:04.078378 master-0 kubenswrapper[26425]: I0217 15:32:04.078332 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"b5d5696dd1897e54944d344a621406afc619c583887362dc4a155d478636777f"} Feb 17 15:32:04.078378 master-0 kubenswrapper[26425]: I0217 15:32:04.078373 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"d43202e9521a0c0d08be81fff34aab56f078dff691bc6f9a47112bcf98619bdb"} Feb 17 15:32:04.078378 master-0 kubenswrapper[26425]: I0217 15:32:04.078384 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"76e0fed49ae43713de3841f5293cc58d4c348cf94b6b8fec8752cf45315d468a"} Feb 17 15:32:04.285995 master-0 kubenswrapper[26425]: I0217 15:32:04.285812 26425 patch_prober.go:28] interesting 
pod/console-86d4dfb9dd-rz6cj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 17 15:32:04.285995 master-0 kubenswrapper[26425]: I0217 15:32:04.285879 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 17 15:32:04.967819 master-0 kubenswrapper[26425]: I0217 15:32:04.967755 26425 patch_prober.go:28] interesting pod/console-98f66b5dc-p2gxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 17 15:32:04.968294 master-0 kubenswrapper[26425]: I0217 15:32:04.967833 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 17 15:32:05.094200 master-0 kubenswrapper[26425]: I0217 15:32:05.094141 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"980a4781c30d3d1cd924fe8657c728215029ca0b12e5156e2bc1d98aaa22a49b"} Feb 17 15:32:05.094552 master-0 kubenswrapper[26425]: I0217 15:32:05.094523 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:32:05.094707 master-0 kubenswrapper[26425]: I0217 15:32:05.094655 26425 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d78c63f7-c5a4-4990-9307-341ff59d3959" Feb 17 15:32:05.095079 master-0 kubenswrapper[26425]: I0217 15:32:05.095053 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d78c63f7-c5a4-4990-9307-341ff59d3959" Feb 17 15:32:05.108002 master-0 kubenswrapper[26425]: I0217 15:32:05.107977 26425 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:32:06.106080 master-0 kubenswrapper[26425]: I0217 15:32:06.105996 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d78c63f7-c5a4-4990-9307-341ff59d3959" Feb 17 15:32:06.106080 master-0 kubenswrapper[26425]: I0217 15:32:06.106046 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d78c63f7-c5a4-4990-9307-341ff59d3959" Feb 17 15:32:06.690521 master-0 kubenswrapper[26425]: I0217 15:32:06.690377 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:32:06.690807 master-0 kubenswrapper[26425]: E0217 15:32:06.690710 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:32:06.690807 master-0 kubenswrapper[26425]: E0217 15:32:06.690771 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:32:06.690948 master-0 kubenswrapper[26425]: E0217 15:32:06.690879 26425 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:34:08.690844718 +0000 UTC m=+1110.582568576 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:32:08.458939 master-0 kubenswrapper[26425]: I0217 15:32:08.458792 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:32:08.459861 master-0 kubenswrapper[26425]: I0217 15:32:08.459054 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:32:08.461752 master-0 kubenswrapper[26425]: I0217 15:32:08.461695 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d78c63f7-c5a4-4990-9307-341ff59d3959" Feb 17 15:32:08.461752 master-0 kubenswrapper[26425]: I0217 15:32:08.461745 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d78c63f7-c5a4-4990-9307-341ff59d3959" Feb 17 15:32:08.467083 master-0 kubenswrapper[26425]: I0217 15:32:08.467004 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:32:08.879098 master-0 kubenswrapper[26425]: I0217 15:32:08.879012 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="10e298020284b0e8ffa6a0bc184059d9" podUID="bda1ba3d-2d22-4649-960d-cedcfe10f75c" Feb 17 
15:32:09.133952 master-0 kubenswrapper[26425]: I0217 15:32:09.133789 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d78c63f7-c5a4-4990-9307-341ff59d3959" Feb 17 15:32:09.133952 master-0 kubenswrapper[26425]: I0217 15:32:09.133845 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d78c63f7-c5a4-4990-9307-341ff59d3959" Feb 17 15:32:09.137258 master-0 kubenswrapper[26425]: I0217 15:32:09.137194 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="10e298020284b0e8ffa6a0bc184059d9" podUID="bda1ba3d-2d22-4649-960d-cedcfe10f75c" Feb 17 15:32:14.287013 master-0 kubenswrapper[26425]: I0217 15:32:14.286932 26425 patch_prober.go:28] interesting pod/console-86d4dfb9dd-rz6cj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 17 15:32:14.288130 master-0 kubenswrapper[26425]: I0217 15:32:14.287025 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 17 15:32:14.967118 master-0 kubenswrapper[26425]: I0217 15:32:14.967010 26425 patch_prober.go:28] interesting pod/console-98f66b5dc-p2gxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 17 15:32:14.967118 master-0 kubenswrapper[26425]: I0217 15:32:14.967081 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" 
podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 17 15:32:18.441429 master-0 kubenswrapper[26425]: I0217 15:32:18.441356 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 15:32:18.949804 master-0 kubenswrapper[26425]: I0217 15:32:18.949714 26425 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 15:32:19.021755 master-0 kubenswrapper[26425]: I0217 15:32:19.021693 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 17 15:32:19.174053 master-0 kubenswrapper[26425]: I0217 15:32:19.173997 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 15:32:19.705106 master-0 kubenswrapper[26425]: I0217 15:32:19.705019 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 17 15:32:19.847564 master-0 kubenswrapper[26425]: I0217 15:32:19.847494 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:32:19.963965 master-0 kubenswrapper[26425]: I0217 15:32:19.963286 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 17 15:32:20.119960 master-0 kubenswrapper[26425]: I0217 15:32:20.119874 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 17 15:32:20.160254 master-0 kubenswrapper[26425]: I0217 15:32:20.160178 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 15:32:20.476841 master-0 
kubenswrapper[26425]: I0217 15:32:20.476770 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 17 15:32:20.515311 master-0 kubenswrapper[26425]: I0217 15:32:20.515197 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-mv24c"
Feb 17 15:32:20.571034 master-0 kubenswrapper[26425]: I0217 15:32:20.570953 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 17 15:32:20.743530 master-0 kubenswrapper[26425]: I0217 15:32:20.743082 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 17 15:32:20.765789 master-0 kubenswrapper[26425]: I0217 15:32:20.765598 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 17 15:32:20.789297 master-0 kubenswrapper[26425]: I0217 15:32:20.789231 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 17 15:32:20.804918 master-0 kubenswrapper[26425]: I0217 15:32:20.804868 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-8lvkh"
Feb 17 15:32:20.812284 master-0 kubenswrapper[26425]: I0217 15:32:20.812207 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Feb 17 15:32:21.168718 master-0 kubenswrapper[26425]: I0217 15:32:21.168664 26425 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 17 15:32:21.271414 master-0 kubenswrapper[26425]: I0217 15:32:21.271338 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 17 15:32:21.540859 master-0 kubenswrapper[26425]: I0217 15:32:21.540605 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 17 15:32:21.655969 master-0 kubenswrapper[26425]: I0217 15:32:21.655875 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 17 15:32:21.713696 master-0 kubenswrapper[26425]: I0217 15:32:21.713574 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Feb 17 15:32:21.784196 master-0 kubenswrapper[26425]: I0217 15:32:21.784079 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 17 15:32:21.791364 master-0 kubenswrapper[26425]: I0217 15:32:21.791284 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 17 15:32:21.867751 master-0 kubenswrapper[26425]: I0217 15:32:21.867650 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 17 15:32:22.005837 master-0 kubenswrapper[26425]: I0217 15:32:22.005787 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 17 15:32:22.243062 master-0 kubenswrapper[26425]: I0217 15:32:22.242978 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-2tsl8"
Feb 17 15:32:22.662506 master-0 kubenswrapper[26425]: I0217 15:32:22.662379 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-dzmf4"
Feb 17 15:32:22.712182 master-0 kubenswrapper[26425]: I0217 15:32:22.712077 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-t5n74"
Feb 17 15:32:22.776600 master-0 kubenswrapper[26425]: I0217 15:32:22.776505 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-dtqvr"
Feb 17 15:32:22.792040 master-0 kubenswrapper[26425]: I0217 15:32:22.791962 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Feb 17 15:32:22.901984 master-0 kubenswrapper[26425]: I0217 15:32:22.901883 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 17 15:32:22.980991 master-0 kubenswrapper[26425]: I0217 15:32:22.980798 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 17 15:32:23.051302 master-0 kubenswrapper[26425]: I0217 15:32:23.051203 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 17 15:32:23.260180 master-0 kubenswrapper[26425]: I0217 15:32:23.260035 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-qjmzn"
Feb 17 15:32:23.263320 master-0 kubenswrapper[26425]: I0217 15:32:23.263278 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Feb 17 15:32:23.394094 master-0 kubenswrapper[26425]: I0217 15:32:23.394016 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 17 15:32:23.456000 master-0 kubenswrapper[26425]: I0217 15:32:23.455938 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-bt8x4"
Feb 17 15:32:23.481796 master-0 kubenswrapper[26425]: I0217 15:32:23.481745 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Feb 17 15:32:23.628205 master-0 kubenswrapper[26425]: I0217 15:32:23.628142 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 17 15:32:23.636577 master-0 kubenswrapper[26425]: I0217 15:32:23.636507 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 17 15:32:23.687901 master-0 kubenswrapper[26425]: I0217 15:32:23.687837 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 17 15:32:23.700061 master-0 kubenswrapper[26425]: I0217 15:32:23.700017 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Feb 17 15:32:23.717558 master-0 kubenswrapper[26425]: I0217 15:32:23.717506 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 15:32:23.789347 master-0 kubenswrapper[26425]: I0217 15:32:23.789293 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 17 15:32:23.899923 master-0 kubenswrapper[26425]: I0217 15:32:23.899725 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 17 15:32:24.023422 master-0 kubenswrapper[26425]: I0217 15:32:24.023328 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 17 15:32:24.039671 master-0 kubenswrapper[26425]: I0217 15:32:24.039602 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Feb 17 15:32:24.077364 master-0 kubenswrapper[26425]: I0217 15:32:24.077283 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 17 15:32:24.097703 master-0 kubenswrapper[26425]: I0217 15:32:24.097622 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 17 15:32:24.125441 master-0 kubenswrapper[26425]: I0217 15:32:24.125364 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 17 15:32:24.207794 master-0 kubenswrapper[26425]: I0217 15:32:24.207618 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 17 15:32:24.245035 master-0 kubenswrapper[26425]: I0217 15:32:24.244966 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 17 15:32:24.248637 master-0 kubenswrapper[26425]: I0217 15:32:24.248578 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 17 15:32:24.271899 master-0 kubenswrapper[26425]: I0217 15:32:24.271820 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 17 15:32:24.282351 master-0 kubenswrapper[26425]: I0217 15:32:24.282281 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 17 15:32:24.286781 master-0 kubenswrapper[26425]: I0217 15:32:24.286722 26425 patch_prober.go:28] interesting pod/console-86d4dfb9dd-rz6cj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Feb 17 15:32:24.286906 master-0 kubenswrapper[26425]: I0217 15:32:24.286813 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Feb 17 15:32:24.379899 master-0 kubenswrapper[26425]: I0217 15:32:24.379792 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 17 15:32:24.492505 master-0 kubenswrapper[26425]: I0217 15:32:24.492305 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 17 15:32:24.506281 master-0 kubenswrapper[26425]: I0217 15:32:24.505226 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 17 15:32:24.543444 master-0 kubenswrapper[26425]: I0217 15:32:24.543363 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 17 15:32:24.617633 master-0 kubenswrapper[26425]: I0217 15:32:24.617549 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 17 15:32:24.654578 master-0 kubenswrapper[26425]: I0217 15:32:24.653976 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 17 15:32:24.655554 master-0 kubenswrapper[26425]: I0217 15:32:24.655450 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:32:24.689568 master-0 kubenswrapper[26425]: I0217 15:32:24.689498 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 17 15:32:24.864572 master-0 kubenswrapper[26425]: I0217 15:32:24.864449 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 17 15:32:24.872339 master-0 kubenswrapper[26425]: I0217 15:32:24.872151 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-4h7qp"
Feb 17 15:32:24.884708 master-0 kubenswrapper[26425]: I0217 15:32:24.884634 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Feb 17 15:32:24.896108 master-0 kubenswrapper[26425]: I0217 15:32:24.896042 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-q9xjb"
Feb 17 15:32:24.924346 master-0 kubenswrapper[26425]: I0217 15:32:24.924214 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 17 15:32:24.927987 master-0 kubenswrapper[26425]: I0217 15:32:24.927840 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-r65rc"
Feb 17 15:32:24.966729 master-0 kubenswrapper[26425]: I0217 15:32:24.966663 26425 patch_prober.go:28] interesting pod/console-98f66b5dc-p2gxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body=
Feb 17 15:32:24.966981 master-0 kubenswrapper[26425]: I0217 15:32:24.966742 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused"
Feb 17 15:32:25.031913 master-0 kubenswrapper[26425]: I0217 15:32:25.031825 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-eu11557dmf9qt"
Feb 17 15:32:25.245572 master-0 kubenswrapper[26425]: I0217 15:32:25.245321 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-dxkwv"
Feb 17 15:32:25.305151 master-0 kubenswrapper[26425]: I0217 15:32:25.305101 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 17 15:32:25.382225 master-0 kubenswrapper[26425]: I0217 15:32:25.382146 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 17 15:32:25.383154 master-0 kubenswrapper[26425]: I0217 15:32:25.383086 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 17 15:32:25.401026 master-0 kubenswrapper[26425]: I0217 15:32:25.400945 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 17 15:32:25.409354 master-0 kubenswrapper[26425]: I0217 15:32:25.409292 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7f2w9"
Feb 17 15:32:25.421424 master-0 kubenswrapper[26425]: I0217 15:32:25.421328 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 17 15:32:25.479227 master-0 kubenswrapper[26425]: I0217 15:32:25.478953 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 17 15:32:25.559310 master-0 kubenswrapper[26425]: I0217 15:32:25.559098 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 17 15:32:25.609529 master-0 kubenswrapper[26425]: I0217 15:32:25.604836 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 17 15:32:25.672170 master-0 kubenswrapper[26425]: I0217 15:32:25.672085 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 17 15:32:25.725860 master-0 kubenswrapper[26425]: I0217 15:32:25.725797 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 17 15:32:25.755641 master-0 kubenswrapper[26425]: I0217 15:32:25.755497 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:32:25.778857 master-0 kubenswrapper[26425]: I0217 15:32:25.778775 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 17 15:32:25.839359 master-0 kubenswrapper[26425]: I0217 15:32:25.839224 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 17 15:32:25.872145 master-0 kubenswrapper[26425]: I0217 15:32:25.872061 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 17 15:32:25.905979 master-0 kubenswrapper[26425]: I0217 15:32:25.905123 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 17 15:32:25.968380 master-0 kubenswrapper[26425]: I0217 15:32:25.968285 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:32:25.976524 master-0 kubenswrapper[26425]: I0217 15:32:25.976481 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 17 15:32:25.990809 master-0 kubenswrapper[26425]: I0217 15:32:25.990757 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 17 15:32:26.088772 master-0 kubenswrapper[26425]: I0217 15:32:26.088701 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 17 15:32:26.139332 master-0 kubenswrapper[26425]: I0217 15:32:26.139278 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 17 15:32:26.153046 master-0 kubenswrapper[26425]: I0217 15:32:26.152962 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 17 15:32:26.295757 master-0 kubenswrapper[26425]: I0217 15:32:26.295634 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 17 15:32:26.304412 master-0 kubenswrapper[26425]: I0217 15:32:26.304337 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 17 15:32:26.325790 master-0 kubenswrapper[26425]: I0217 15:32:26.325742 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 17 15:32:26.327200 master-0 kubenswrapper[26425]: I0217 15:32:26.327160 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 17 15:32:26.425198 master-0 kubenswrapper[26425]: I0217 15:32:26.425027 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-kcv7p"
Feb 17 15:32:26.444106 master-0 kubenswrapper[26425]: I0217 15:32:26.444064 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 17 15:32:26.513056 master-0 kubenswrapper[26425]: I0217 15:32:26.512984 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Feb 17 15:32:26.521716 master-0 kubenswrapper[26425]: I0217 15:32:26.521647 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 17 15:32:26.555137 master-0 kubenswrapper[26425]: I0217 15:32:26.555090 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Feb 17 15:32:26.599034 master-0 kubenswrapper[26425]: I0217 15:32:26.598965 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 17 15:32:26.645277 master-0 kubenswrapper[26425]: I0217 15:32:26.645202 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 17 15:32:26.662094 master-0 kubenswrapper[26425]: I0217 15:32:26.662052 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 17 15:32:26.682827 master-0 kubenswrapper[26425]: I0217 15:32:26.682660 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-kt686"
Feb 17 15:32:26.708505 master-0 kubenswrapper[26425]: I0217 15:32:26.708375 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 17 15:32:26.773044 master-0 kubenswrapper[26425]: I0217 15:32:26.772990 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 15:32:26.996817 master-0 kubenswrapper[26425]: I0217 15:32:26.996635 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 17 15:32:27.027501 master-0 kubenswrapper[26425]: I0217 15:32:27.027418 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 15:32:27.027928 master-0 kubenswrapper[26425]: I0217 15:32:27.027418 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 17 15:32:27.079588 master-0 kubenswrapper[26425]: I0217 15:32:27.079408 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 17 15:32:27.095901 master-0 kubenswrapper[26425]: I0217 15:32:27.095830 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 17 15:32:27.109801 master-0 kubenswrapper[26425]: I0217 15:32:27.109725 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Feb 17 15:32:27.226970 master-0 kubenswrapper[26425]: I0217 15:32:27.226899 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 17 15:32:27.253807 master-0 kubenswrapper[26425]: I0217 15:32:27.253617 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 17 15:32:27.285869 master-0 kubenswrapper[26425]: I0217 15:32:27.285773 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 17 15:32:27.328759 master-0 kubenswrapper[26425]: I0217 15:32:27.328632 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 17 15:32:27.416393 master-0 kubenswrapper[26425]: I0217 15:32:27.416322 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-pv4xc"
Feb 17 15:32:27.487496 master-0 kubenswrapper[26425]: I0217 15:32:27.487402 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 17 15:32:27.494354 master-0 kubenswrapper[26425]: I0217 15:32:27.494280 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 17 15:32:27.548603 master-0 kubenswrapper[26425]: I0217 15:32:27.548366 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 17 15:32:27.642287 master-0 kubenswrapper[26425]: I0217 15:32:27.642212 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 17 15:32:27.643003 master-0 kubenswrapper[26425]: I0217 15:32:27.642947 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 17 15:32:27.694026 master-0 kubenswrapper[26425]: I0217 15:32:27.693911 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-kjdkm"
Feb 17 15:32:27.702276 master-0 kubenswrapper[26425]: I0217 15:32:27.702225 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 17 15:32:27.708158 master-0 kubenswrapper[26425]: I0217 15:32:27.708118 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 17 15:32:27.755059 master-0 kubenswrapper[26425]: I0217 15:32:27.754981 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 17 15:32:27.757662 master-0 kubenswrapper[26425]: I0217 15:32:27.757600 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-7d1hat1ob2dke"
Feb 17 15:32:27.820863 master-0 kubenswrapper[26425]: I0217 15:32:27.820703 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 17 15:32:27.862009 master-0 kubenswrapper[26425]: I0217 15:32:27.861948 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 17 15:32:27.877718 master-0 kubenswrapper[26425]: I0217 15:32:27.877671 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 17 15:32:27.923527 master-0 kubenswrapper[26425]: I0217 15:32:27.923450 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 17 15:32:27.925169 master-0 kubenswrapper[26425]: I0217 15:32:27.925106 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 17 15:32:27.951292 master-0 kubenswrapper[26425]: I0217 15:32:27.951219 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 17 15:32:28.053220 master-0 kubenswrapper[26425]: I0217 15:32:28.053158 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 17 15:32:28.061874 master-0 kubenswrapper[26425]: I0217 15:32:28.061793 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 15:32:28.154993 master-0 kubenswrapper[26425]: I0217 15:32:28.154924 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 17 15:32:28.179871 master-0 kubenswrapper[26425]: I0217 15:32:28.179822 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 17 15:32:28.209989 master-0 kubenswrapper[26425]: I0217 15:32:28.209908 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 17 15:32:28.275899 master-0 kubenswrapper[26425]: I0217 15:32:28.275847 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-4cctd"
Feb 17 15:32:28.407884 master-0 kubenswrapper[26425]: I0217 15:32:28.407721 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 17 15:32:28.483377 master-0 kubenswrapper[26425]: I0217 15:32:28.483280 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 17 15:32:28.527776 master-0 kubenswrapper[26425]: I0217 15:32:28.527724 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 17 15:32:28.748219 master-0 kubenswrapper[26425]: I0217 15:32:28.748045 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 17 15:32:28.756936 master-0 kubenswrapper[26425]: I0217 15:32:28.756869 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 17 15:32:28.761036 master-0 kubenswrapper[26425]: I0217 15:32:28.760975 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 17 15:32:28.778529 master-0 kubenswrapper[26425]: I0217 15:32:28.778439 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 17 15:32:28.813759 master-0 kubenswrapper[26425]: I0217 15:32:28.813681 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 17 15:32:28.826381 master-0 kubenswrapper[26425]: I0217 15:32:28.826309 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 17 15:32:28.863126 master-0 kubenswrapper[26425]: I0217 15:32:28.863045 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 17 15:32:28.951761 master-0 kubenswrapper[26425]: I0217 15:32:28.951668 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 17 15:32:29.029930 master-0 kubenswrapper[26425]: I0217 15:32:29.028063 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 17 15:32:29.066755 master-0 kubenswrapper[26425]: I0217 15:32:29.066663 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 17 15:32:29.121975 master-0 kubenswrapper[26425]: I0217 15:32:29.120164 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 17 15:32:29.146246 master-0 kubenswrapper[26425]: I0217 15:32:29.145892 26425 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 17 15:32:29.220519 master-0 kubenswrapper[26425]: I0217 15:32:29.220291 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 17 15:32:29.233719 master-0 kubenswrapper[26425]: I0217 15:32:29.233296 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 17 15:32:29.285423 master-0 kubenswrapper[26425]: I0217 15:32:29.285256 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Feb 17 15:32:29.301282 master-0 kubenswrapper[26425]: I0217 15:32:29.301207 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 17 15:32:29.471244 master-0 kubenswrapper[26425]: I0217 15:32:29.471107 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-dz667"
Feb 17 15:32:29.562487 master-0 kubenswrapper[26425]: I0217 15:32:29.562243 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 17 15:32:29.564986 master-0 kubenswrapper[26425]: I0217 15:32:29.564928 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 17 15:32:29.567622 master-0 kubenswrapper[26425]: I0217 15:32:29.567576 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 17 15:32:29.569909 master-0 kubenswrapper[26425]: I0217 15:32:29.569869 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 17 15:32:29.578404 master-0 kubenswrapper[26425]: I0217 15:32:29.578347 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dkdg8"
Feb 17 15:32:29.745186 master-0 kubenswrapper[26425]: I0217 15:32:29.745099 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Feb 17 15:32:29.820769 master-0 kubenswrapper[26425]: I0217 15:32:29.820638 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 17 15:32:29.890361 master-0 kubenswrapper[26425]: I0217 15:32:29.890308 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Feb 17 15:32:29.890839 master-0 kubenswrapper[26425]: I0217 15:32:29.890339 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Feb 17 15:32:29.920602 master-0 kubenswrapper[26425]: I0217 15:32:29.920494 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 17 15:32:29.989510 master-0 kubenswrapper[26425]: I0217 15:32:29.989389 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-bw92c"
Feb 17 15:32:30.034531 master-0 kubenswrapper[26425]: I0217 15:32:30.034413 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 17 15:32:30.066749 master-0 kubenswrapper[26425]: I0217 15:32:30.066688 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 17 15:32:30.083243 master-0 kubenswrapper[26425]: I0217 15:32:30.083089 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 17 15:32:30.083959 master-0 kubenswrapper[26425]: I0217 15:32:30.083603 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 17 15:32:30.101315 master-0 kubenswrapper[26425]: I0217 15:32:30.101228 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 17 15:32:30.101674 master-0 kubenswrapper[26425]: I0217 15:32:30.101626 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 17 15:32:30.115748 master-0 kubenswrapper[26425]: I0217 15:32:30.115659 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 17 15:32:30.146878 master-0 kubenswrapper[26425]: I0217 15:32:30.146759 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-lgxgp"
Feb 17 15:32:30.177688 master-0 kubenswrapper[26425]: I0217 15:32:30.177525 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-flbia8i8i4eih"
Feb 17 15:32:30.230487 master-0 kubenswrapper[26425]: I0217 15:32:30.229945 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-tphvr"
Feb 17 15:32:30.239828 master-0 kubenswrapper[26425]: I0217 15:32:30.237897 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 17 15:32:30.274487 master-0 kubenswrapper[26425]: I0217 15:32:30.272286 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 17 15:32:30.290561 master-0 kubenswrapper[26425]: I0217 15:32:30.290496 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 17 15:32:30.316479 master-0 kubenswrapper[26425]: I0217 15:32:30.314657 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 17 15:32:30.344291 master-0 kubenswrapper[26425]: I0217 15:32:30.344174 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-8l4dg"
Feb 17 15:32:30.381883 master-0 kubenswrapper[26425]: I0217 15:32:30.381837 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:32:30.421204 master-0 kubenswrapper[26425]: I0217 15:32:30.421147 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 17 15:32:30.442054 master-0 kubenswrapper[26425]: I0217 15:32:30.442007 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 17 15:32:30.498267 master-0 kubenswrapper[26425]: I0217 15:32:30.498224 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 17 15:32:30.771202 master-0 kubenswrapper[26425]: I0217 15:32:30.771143 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Feb 17 15:32:30.799983 master-0 kubenswrapper[26425]: I0217 15:32:30.799843 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 15:32:30.839533 master-0 kubenswrapper[26425]: I0217 15:32:30.837590 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 17 15:32:30.839533 master-0 kubenswrapper[26425]: I0217 15:32:30.838732 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 17 15:32:31.004043 master-0 kubenswrapper[26425]: I0217 15:32:31.003936 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 17 15:32:31.067187 master-0 kubenswrapper[26425]: I0217 15:32:31.056320 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 17 15:32:31.069909 master-0 kubenswrapper[26425]: I0217 15:32:31.069834 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Feb 17 15:32:31.097478 master-0 kubenswrapper[26425]: I0217 15:32:31.097377 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 17 15:32:31.123891 master-0 kubenswrapper[26425]: I0217 15:32:31.123671 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 17 15:32:31.134292 master-0 kubenswrapper[26425]: I0217 15:32:31.134231 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 17 15:32:31.142578 master-0 kubenswrapper[26425]: I0217 15:32:31.142533 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-8gftr"
Feb 17 15:32:31.232963 master-0 kubenswrapper[26425]: I0217 15:32:31.232855 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 17 15:32:31.378053 master-0 kubenswrapper[26425]: I0217 15:32:31.377951 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 17 15:32:31.403008 master-0 kubenswrapper[26425]: I0217 15:32:31.402974 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 17 15:32:31.437477 master-0 kubenswrapper[26425]: I0217 15:32:31.437395 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 17 15:32:31.570236 master-0 kubenswrapper[26425]: I0217 15:32:31.570141 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 15:32:31.647793 master-0 kubenswrapper[26425]: I0217 15:32:31.647673 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 17 15:32:31.765287 master-0 kubenswrapper[26425]: I0217 15:32:31.765201 26425 reflector.go:368]
Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 15:32:31.807213 master-0 kubenswrapper[26425]: I0217 15:32:31.807145 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-7hvks" Feb 17 15:32:31.808065 master-0 kubenswrapper[26425]: I0217 15:32:31.807999 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 15:32:31.812970 master-0 kubenswrapper[26425]: I0217 15:32:31.812943 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-44zht" Feb 17 15:32:31.914723 master-0 kubenswrapper[26425]: I0217 15:32:31.914567 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 15:32:31.921916 master-0 kubenswrapper[26425]: I0217 15:32:31.921841 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 15:32:32.203835 master-0 kubenswrapper[26425]: I0217 15:32:32.203695 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-4gx6p" Feb 17 15:32:32.282397 master-0 kubenswrapper[26425]: I0217 15:32:32.282342 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 15:32:32.397001 master-0 kubenswrapper[26425]: I0217 15:32:32.396929 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 15:32:32.422892 master-0 kubenswrapper[26425]: I0217 15:32:32.422790 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 17 15:32:32.502627 master-0 kubenswrapper[26425]: I0217 15:32:32.497961 26425 
reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 15:32:32.640298 master-0 kubenswrapper[26425]: I0217 15:32:32.640220 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-4zhjq" Feb 17 15:32:32.640298 master-0 kubenswrapper[26425]: I0217 15:32:32.640266 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 15:32:32.726846 master-0 kubenswrapper[26425]: I0217 15:32:32.726662 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 15:32:32.733845 master-0 kubenswrapper[26425]: I0217 15:32:32.733782 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 15:32:32.814744 master-0 kubenswrapper[26425]: I0217 15:32:32.814645 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 15:32:32.921413 master-0 kubenswrapper[26425]: I0217 15:32:32.921316 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 15:32:33.181695 master-0 kubenswrapper[26425]: I0217 15:32:33.181606 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 15:32:33.443740 master-0 kubenswrapper[26425]: I0217 15:32:33.443576 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 15:32:33.445074 master-0 kubenswrapper[26425]: I0217 15:32:33.445035 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 15:32:33.462562 master-0 kubenswrapper[26425]: I0217 15:32:33.462449 26425 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 15:32:33.471838 master-0 kubenswrapper[26425]: I0217 15:32:33.471797 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-fg558" Feb 17 15:32:33.490797 master-0 kubenswrapper[26425]: I0217 15:32:33.490757 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 17 15:32:33.628789 master-0 kubenswrapper[26425]: I0217 15:32:33.628704 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 17 15:32:33.653874 master-0 kubenswrapper[26425]: I0217 15:32:33.653801 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 17 15:32:33.816031 master-0 kubenswrapper[26425]: I0217 15:32:33.815874 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 15:32:33.852944 master-0 kubenswrapper[26425]: I0217 15:32:33.852872 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 17 15:32:33.856549 master-0 kubenswrapper[26425]: I0217 15:32:33.856510 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 17 15:32:33.918611 master-0 kubenswrapper[26425]: I0217 15:32:33.918498 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 17 15:32:33.981636 master-0 kubenswrapper[26425]: I0217 15:32:33.981511 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 15:32:34.083416 master-0 kubenswrapper[26425]: I0217 15:32:34.083233 
26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 15:32:34.155328 master-0 kubenswrapper[26425]: I0217 15:32:34.155236 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-nsg9z" Feb 17 15:32:34.201578 master-0 kubenswrapper[26425]: I0217 15:32:34.201441 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 15:32:34.251155 master-0 kubenswrapper[26425]: I0217 15:32:34.251047 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 17 15:32:34.286400 master-0 kubenswrapper[26425]: I0217 15:32:34.286304 26425 patch_prober.go:28] interesting pod/console-86d4dfb9dd-rz6cj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 17 15:32:34.286400 master-0 kubenswrapper[26425]: I0217 15:32:34.286388 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 17 15:32:34.400760 master-0 kubenswrapper[26425]: I0217 15:32:34.400674 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 15:32:34.641492 master-0 kubenswrapper[26425]: I0217 15:32:34.641399 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 17 15:32:34.967156 master-0 kubenswrapper[26425]: I0217 15:32:34.967065 26425 patch_prober.go:28] interesting 
pod/console-98f66b5dc-p2gxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 17 15:32:34.967512 master-0 kubenswrapper[26425]: I0217 15:32:34.967187 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 17 15:32:35.520687 master-0 kubenswrapper[26425]: I0217 15:32:35.520594 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 15:32:35.664978 master-0 kubenswrapper[26425]: I0217 15:32:35.664882 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 15:32:38.811032 master-0 kubenswrapper[26425]: I0217 15:32:38.810914 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:32:38.846039 master-0 kubenswrapper[26425]: I0217 15:32:38.845956 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:32:39.730564 master-0 kubenswrapper[26425]: I0217 15:32:39.730503 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:32:41.673817 master-0 kubenswrapper[26425]: I0217 15:32:41.673728 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 15:32:42.361796 master-0 kubenswrapper[26425]: I0217 15:32:42.361680 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-t9g75" Feb 17 15:32:43.817934 master-0 
kubenswrapper[26425]: I0217 15:32:43.817852 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 17 15:32:44.286668 master-0 kubenswrapper[26425]: I0217 15:32:44.286582 26425 patch_prober.go:28] interesting pod/console-86d4dfb9dd-rz6cj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 17 15:32:44.287052 master-0 kubenswrapper[26425]: I0217 15:32:44.286675 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 17 15:32:44.967208 master-0 kubenswrapper[26425]: I0217 15:32:44.967105 26425 patch_prober.go:28] interesting pod/console-98f66b5dc-p2gxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 17 15:32:44.968203 master-0 kubenswrapper[26425]: I0217 15:32:44.967209 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 17 15:32:46.411701 master-0 kubenswrapper[26425]: I0217 15:32:46.411618 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 15:32:47.238368 master-0 kubenswrapper[26425]: I0217 15:32:47.238284 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" 
Feb 17 15:32:48.707849 master-0 kubenswrapper[26425]: I0217 15:32:48.707757 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 17 15:32:49.214792 master-0 kubenswrapper[26425]: I0217 15:32:49.214700 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 15:32:49.820386 master-0 kubenswrapper[26425]: I0217 15:32:49.820291 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 17 15:32:49.896097 master-0 kubenswrapper[26425]: I0217 15:32:49.895959 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 15:32:50.232785 master-0 kubenswrapper[26425]: I0217 15:32:50.232703 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 17 15:32:50.399537 master-0 kubenswrapper[26425]: I0217 15:32:50.399433 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-v49sf" Feb 17 15:32:50.469926 master-0 kubenswrapper[26425]: I0217 15:32:50.469841 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 15:32:50.821360 master-0 kubenswrapper[26425]: I0217 15:32:50.821084 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 17 15:32:51.363990 master-0 kubenswrapper[26425]: I0217 15:32:51.363905 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 15:32:52.077902 master-0 kubenswrapper[26425]: I0217 15:32:52.077828 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" 
Feb 17 15:32:52.486321 master-0 kubenswrapper[26425]: I0217 15:32:52.486253 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 15:32:52.825808 master-0 kubenswrapper[26425]: I0217 15:32:52.825638 26425 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 15:32:52.828305 master-0 kubenswrapper[26425]: I0217 15:32:52.828192 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=70.828170416 podStartE2EDuration="1m10.828170416s" podCreationTimestamp="2026-02-17 15:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:32:08.735922869 +0000 UTC m=+990.627646767" watchObservedRunningTime="2026-02-17 15:32:52.828170416 +0000 UTC m=+1034.719894274" Feb 17 15:32:52.830543 master-0 kubenswrapper[26425]: I0217 15:32:52.830476 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=448.500419756 podStartE2EDuration="7m38.830444911s" podCreationTimestamp="2026-02-17 15:25:14 +0000 UTC" firstStartedPulling="2026-02-17 15:31:40.665740829 +0000 UTC m=+962.557464677" lastFinishedPulling="2026-02-17 15:31:50.995765974 +0000 UTC m=+972.887489832" observedRunningTime="2026-02-17 15:32:08.716592654 +0000 UTC m=+990.608316562" watchObservedRunningTime="2026-02-17 15:32:52.830444911 +0000 UTC m=+1034.722168759" Feb 17 15:32:52.834433 master-0 kubenswrapper[26425]: I0217 15:32:52.834364 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=448.51436772 podStartE2EDuration="7m38.834351043s" podCreationTimestamp="2026-02-17 15:25:14 +0000 UTC" firstStartedPulling="2026-02-17 15:31:40.658832873 +0000 UTC 
m=+962.550556701" lastFinishedPulling="2026-02-17 15:31:50.978816206 +0000 UTC m=+972.870540024" observedRunningTime="2026-02-17 15:32:08.846001384 +0000 UTC m=+990.737725212" watchObservedRunningTime="2026-02-17 15:32:52.834351043 +0000 UTC m=+1034.726074901" Feb 17 15:32:52.835960 master-0 kubenswrapper[26425]: I0217 15:32:52.835909 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 17 15:32:52.836077 master-0 kubenswrapper[26425]: I0217 15:32:52.835970 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9","openshift-kube-apiserver/kube-apiserver-master-0"] Feb 17 15:32:52.836405 master-0 kubenswrapper[26425]: E0217 15:32:52.836360 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" containerName="installer" Feb 17 15:32:52.836405 master-0 kubenswrapper[26425]: I0217 15:32:52.836391 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" containerName="installer" Feb 17 15:32:52.836744 master-0 kubenswrapper[26425]: I0217 15:32:52.836692 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="78028ec2-59c0-459d-b148-e84842b5aea8" containerName="installer" Feb 17 15:32:52.837510 master-0 kubenswrapper[26425]: I0217 15:32:52.837418 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" Feb 17 15:32:52.839972 master-0 kubenswrapper[26425]: I0217 15:32:52.839936 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 15:32:52.840777 master-0 kubenswrapper[26425]: I0217 15:32:52.840719 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 15:32:52.844732 master-0 kubenswrapper[26425]: I0217 15:32:52.841629 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 15:32:52.844732 master-0 kubenswrapper[26425]: I0217 15:32:52.842261 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 15:32:52.844732 master-0 kubenswrapper[26425]: I0217 15:32:52.842516 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 15:32:52.844732 master-0 kubenswrapper[26425]: I0217 15:32:52.843530 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 15:32:52.844732 master-0 kubenswrapper[26425]: I0217 15:32:52.843983 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 15:32:52.844732 master-0 kubenswrapper[26425]: I0217 15:32:52.844615 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 15:32:52.844732 master-0 kubenswrapper[26425]: I0217 15:32:52.845123 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 15:32:52.846545 master-0 kubenswrapper[26425]: I0217 15:32:52.846193 26425 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 15:32:52.846545 master-0 kubenswrapper[26425]: I0217 15:32:52.846201 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:32:52.847435 master-0 kubenswrapper[26425]: I0217 15:32:52.847378 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 15:32:52.849265 master-0 kubenswrapper[26425]: I0217 15:32:52.849198 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:32:52.858629 master-0 kubenswrapper[26425]: I0217 15:32:52.857621 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 15:32:52.868815 master-0 kubenswrapper[26425]: I0217 15:32:52.868730 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 15:32:52.871910 master-0 kubenswrapper[26425]: I0217 15:32:52.871808 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=47.871784599 podStartE2EDuration="47.871784599s" podCreationTimestamp="2026-02-17 15:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:32:52.868322876 +0000 UTC m=+1034.760046784" watchObservedRunningTime="2026-02-17 15:32:52.871784599 +0000 UTC m=+1034.763508427" Feb 17 15:32:52.874679 master-0 kubenswrapper[26425]: I0217 15:32:52.874578 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" Feb 17 15:32:52.874833 master-0 kubenswrapper[26425]: I0217 15:32:52.874721 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" Feb 17 15:32:52.874833 master-0 kubenswrapper[26425]: I0217 15:32:52.874816 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-error\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" Feb 17 15:32:52.875005 master-0 kubenswrapper[26425]: I0217 15:32:52.874855 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-session\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" Feb 17 15:32:52.875005 master-0 kubenswrapper[26425]: I0217 15:32:52.874897 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqb67\" (UniqueName: \"kubernetes.io/projected/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-kube-api-access-pqb67\") pod 
\"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" Feb 17 15:32:52.875005 master-0 kubenswrapper[26425]: I0217 15:32:52.874941 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" Feb 17 15:32:52.875222 master-0 kubenswrapper[26425]: I0217 15:32:52.875009 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" Feb 17 15:32:52.875222 master-0 kubenswrapper[26425]: I0217 15:32:52.875066 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" Feb 17 15:32:52.875222 master-0 kubenswrapper[26425]: I0217 15:32:52.875189 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-audit-dir\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: 
\"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.875445 master-0 kubenswrapper[26425]: I0217 15:32:52.875239 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-audit-policies\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.875445 master-0 kubenswrapper[26425]: I0217 15:32:52.875404 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.875645 master-0 kubenswrapper[26425]: I0217 15:32:52.875501 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.875645 master-0 kubenswrapper[26425]: I0217 15:32:52.875553 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-login\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.978496 master-0 kubenswrapper[26425]: I0217 15:32:52.978394 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-login\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.978965 master-0 kubenswrapper[26425]: I0217 15:32:52.978519 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.979081 master-0 kubenswrapper[26425]: I0217 15:32:52.979048 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.979177 master-0 kubenswrapper[26425]: I0217 15:32:52.979106 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-session\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.979177 master-0 kubenswrapper[26425]: I0217 15:32:52.979130 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-error\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.979177 master-0 kubenswrapper[26425]: I0217 15:32:52.979153 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqb67\" (UniqueName: \"kubernetes.io/projected/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-kube-api-access-pqb67\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.979390 master-0 kubenswrapper[26425]: I0217 15:32:52.979182 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.979390 master-0 kubenswrapper[26425]: I0217 15:32:52.979335 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.979390 master-0 kubenswrapper[26425]: I0217 15:32:52.979379 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.979665 master-0 kubenswrapper[26425]: I0217 15:32:52.979496 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-audit-dir\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.979665 master-0 kubenswrapper[26425]: I0217 15:32:52.979532 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-audit-policies\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.979665 master-0 kubenswrapper[26425]: I0217 15:32:52.979582 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.979665 master-0 kubenswrapper[26425]: I0217 15:32:52.979615 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.979972 master-0 kubenswrapper[26425]: I0217 15:32:52.979675 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-audit-dir\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.980138 master-0 kubenswrapper[26425]: I0217 15:32:52.979998 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.980627 master-0 kubenswrapper[26425]: I0217 15:32:52.980574 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-audit-policies\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.980744 master-0 kubenswrapper[26425]: I0217 15:32:52.980630 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.981427 master-0 kubenswrapper[26425]: I0217 15:32:52.981379 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.983656 master-0 kubenswrapper[26425]: I0217 15:32:52.983157 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-error\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.985026 master-0 kubenswrapper[26425]: I0217 15:32:52.984362 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.988103 master-0 kubenswrapper[26425]: I0217 15:32:52.988016 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-session\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.989985 master-0 kubenswrapper[26425]: I0217 15:32:52.989895 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.990262 master-0 kubenswrapper[26425]: I0217 15:32:52.990216 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-login\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.990617 master-0 kubenswrapper[26425]: I0217 15:32:52.990564 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:52.991085 master-0 kubenswrapper[26425]: I0217 15:32:52.991013 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:53.007893 master-0 kubenswrapper[26425]: I0217 15:32:53.007817 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqb67\" (UniqueName: \"kubernetes.io/projected/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-kube-api-access-pqb67\") pod \"oauth-openshift-5cdd6dbfff-tvzt9\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") " pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:53.025332 master-0 kubenswrapper[26425]: I0217 15:32:53.025284 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 17 15:32:53.040798 master-0 kubenswrapper[26425]: I0217 15:32:53.040742 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-gbdz4"
Feb 17 15:32:53.177335 master-0 kubenswrapper[26425]: I0217 15:32:53.177186 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:53.619207 master-0 kubenswrapper[26425]: I0217 15:32:53.619112 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 17 15:32:53.691841 master-0 kubenswrapper[26425]: I0217 15:32:53.691780 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"]
Feb 17 15:32:53.703203 master-0 kubenswrapper[26425]: I0217 15:32:53.703160 26425 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 15:32:53.817609 master-0 kubenswrapper[26425]: I0217 15:32:53.817535 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" event={"ID":"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2","Type":"ContainerStarted","Data":"771e3b7cf2128460346c20bf4c7f139d8f2d8f3c17bc2a42c92d90885ec8ced1"}
Feb 17 15:32:53.930948 master-0 kubenswrapper[26425]: I0217 15:32:53.930753 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 17 15:32:54.062821 master-0 kubenswrapper[26425]: I0217 15:32:54.062712 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 17 15:32:54.287315 master-0 kubenswrapper[26425]: I0217 15:32:54.287040 26425 patch_prober.go:28] interesting pod/console-86d4dfb9dd-rz6cj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Feb 17 15:32:54.287315 master-0 kubenswrapper[26425]: I0217 15:32:54.287153 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Feb 17 15:32:54.438974 master-0 kubenswrapper[26425]: I0217 15:32:54.438826 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-6c645"
Feb 17 15:32:54.966791 master-0 kubenswrapper[26425]: I0217 15:32:54.966711 26425 patch_prober.go:28] interesting pod/console-98f66b5dc-p2gxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body=
Feb 17 15:32:54.966791 master-0 kubenswrapper[26425]: I0217 15:32:54.966784 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused"
Feb 17 15:32:54.968978 master-0 kubenswrapper[26425]: I0217 15:32:54.968924 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 17 15:32:55.165307 master-0 kubenswrapper[26425]: I0217 15:32:55.165241 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 17 15:32:55.206555 master-0 kubenswrapper[26425]: I0217 15:32:55.206499 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 17 15:32:55.218870 master-0 kubenswrapper[26425]: I0217 15:32:55.218644 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 17 15:32:55.927372 master-0 kubenswrapper[26425]: I0217 15:32:55.927288 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 17 15:32:56.347113 master-0 kubenswrapper[26425]: I0217 15:32:56.346968 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-tkkqz"
Feb 17 15:32:56.565252 master-0 kubenswrapper[26425]: I0217 15:32:56.565180 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 17 15:32:56.851562 master-0 kubenswrapper[26425]: I0217 15:32:56.851370 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" event={"ID":"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2","Type":"ContainerStarted","Data":"40669034a4e6fef198a64ebd6dc2dcf6b58a438a697eb36011f6072d490748a7"}
Feb 17 15:32:56.852892 master-0 kubenswrapper[26425]: I0217 15:32:56.852804 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:56.891493 master-0 kubenswrapper[26425]: I0217 15:32:56.891358 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" podStartSLOduration=40.224926979 podStartE2EDuration="42.891332225s" podCreationTimestamp="2026-02-17 15:32:14 +0000 UTC" firstStartedPulling="2026-02-17 15:32:53.703042912 +0000 UTC m=+1035.594766770" lastFinishedPulling="2026-02-17 15:32:56.369448178 +0000 UTC m=+1038.261172016" observedRunningTime="2026-02-17 15:32:56.882904064 +0000 UTC m=+1038.774627952" watchObservedRunningTime="2026-02-17 15:32:56.891332225 +0000 UTC m=+1038.783056083"
Feb 17 15:32:57.049983 master-0 kubenswrapper[26425]: I0217 15:32:57.049767 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 17 15:32:57.096921 master-0 kubenswrapper[26425]: I0217 15:32:57.096867 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:32:57.865988 master-0 kubenswrapper[26425]: I0217 15:32:57.865835 26425 generic.go:334] "Generic (PLEG): container finished" podID="c6d23570-21d6-4b08-83fc-8b0827c25313" containerID="e21db6dc3c89ccc946938faf692a644d12c8c796e73f855223bea13cf801bb39" exitCode=0
Feb 17 15:32:57.865988 master-0 kubenswrapper[26425]: I0217 15:32:57.865930 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" event={"ID":"c6d23570-21d6-4b08-83fc-8b0827c25313","Type":"ContainerDied","Data":"e21db6dc3c89ccc946938faf692a644d12c8c796e73f855223bea13cf801bb39"}
Feb 17 15:32:57.866389 master-0 kubenswrapper[26425]: I0217 15:32:57.866045 26425 scope.go:117] "RemoveContainer" containerID="2784ec26a7dc2f4e62d2f496a1d001e9cb435129496d0a04f4f22a42f1a50608"
Feb 17 15:32:57.867020 master-0 kubenswrapper[26425]: I0217 15:32:57.866948 26425 scope.go:117] "RemoveContainer" containerID="e21db6dc3c89ccc946938faf692a644d12c8c796e73f855223bea13cf801bb39"
Feb 17 15:32:58.721233 master-0 kubenswrapper[26425]: I0217 15:32:58.721128 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 17 15:32:58.878568 master-0 kubenswrapper[26425]: I0217 15:32:58.878436 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh" event={"ID":"c6d23570-21d6-4b08-83fc-8b0827c25313","Type":"ContainerStarted","Data":"1d9ad10ff8dc271ef079ac636e030117f456f6a11f65535e439cbe1f6c536fc5"}
Feb 17 15:32:58.879631 master-0 kubenswrapper[26425]: I0217 15:32:58.879162 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:32:58.881710 master-0 kubenswrapper[26425]: I0217 15:32:58.881644 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh"
Feb 17 15:32:59.237254 master-0 kubenswrapper[26425]: I0217 15:32:59.237171 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 17 15:32:59.561736 master-0 kubenswrapper[26425]: I0217 15:32:59.561587 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 17 15:33:00.587060 master-0 kubenswrapper[26425]: I0217 15:33:00.586969 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 17 15:33:00.649524 master-0 kubenswrapper[26425]: I0217 15:33:00.649408 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 17 15:33:00.679592 master-0 kubenswrapper[26425]: I0217 15:33:00.679077 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 17 15:33:01.768701 master-0 kubenswrapper[26425]: I0217 15:33:01.768608 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 17 15:33:02.241415 master-0 kubenswrapper[26425]: I0217 15:33:02.241347 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-jd7jr"
Feb 17 15:33:03.055424 master-0 kubenswrapper[26425]: I0217 15:33:03.055341 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 17 15:33:03.571076 master-0 kubenswrapper[26425]: I0217 15:33:03.571018 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 17 15:33:03.972965 master-0 kubenswrapper[26425]: I0217 15:33:03.972870 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 17 15:33:04.287157 master-0 kubenswrapper[26425]: I0217 15:33:04.286982 26425 patch_prober.go:28] interesting pod/console-86d4dfb9dd-rz6cj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Feb 17 15:33:04.287157 master-0 kubenswrapper[26425]: I0217 15:33:04.287064 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Feb 17 15:33:04.433407 master-0 kubenswrapper[26425]: I0217 15:33:04.433326 26425 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 17 15:33:04.433713 master-0 kubenswrapper[26425]: I0217 15:33:04.433669 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="32286c81635de6de1cf7f328273c1a49" containerName="startup-monitor" containerID="cri-o://4b02007b99efaca413d6539c39e30b074c5f7a4327fab4f7d8375b8dcf8656a3" gracePeriod=5
Feb 17 15:33:04.718072 master-0 kubenswrapper[26425]: I0217 15:33:04.718005 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 17 15:33:04.858998 master-0 kubenswrapper[26425]: I0217 15:33:04.858916 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 17 15:33:04.966852 master-0 kubenswrapper[26425]: I0217 15:33:04.966758 26425 patch_prober.go:28] interesting pod/console-98f66b5dc-p2gxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body=
Feb 17 15:33:04.967136 master-0 kubenswrapper[26425]: I0217 15:33:04.966853 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused"
Feb 17 15:33:05.217550 master-0 kubenswrapper[26425]: I0217 15:33:05.217473 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 17 15:33:05.256062 master-0 kubenswrapper[26425]: I0217 15:33:05.255983 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 17 15:33:05.337243 master-0 kubenswrapper[26425]: I0217 15:33:05.337171 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 15:33:05.368687 master-0 kubenswrapper[26425]: I0217 15:33:05.368614 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 17 15:33:05.880685 master-0 kubenswrapper[26425]: I0217 15:33:05.880625 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 17 15:33:06.189848 master-0 kubenswrapper[26425]: I0217 15:33:06.189670 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 17 15:33:06.568914 master-0 kubenswrapper[26425]: I0217 15:33:06.568777 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 17 15:33:07.002372 master-0 kubenswrapper[26425]: I0217 15:33:07.002294 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 17 15:33:07.088603 master-0 kubenswrapper[26425]: I0217 15:33:07.080374 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 17 15:33:08.354945 master-0 kubenswrapper[26425]: I0217 15:33:08.354874 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 17 15:33:08.399103 master-0 kubenswrapper[26425]: I0217 15:33:08.399055 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-jf6tv"
Feb 17 15:33:08.468792 master-0 kubenswrapper[26425]: I0217 15:33:08.468727 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 17 15:33:08.875964 master-0 kubenswrapper[26425]: I0217 15:33:08.875885 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 17 15:33:09.603808 master-0 kubenswrapper[26425]: I0217 15:33:09.603753 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_32286c81635de6de1cf7f328273c1a49/startup-monitor/0.log"
Feb 17 15:33:09.604459 master-0 kubenswrapper[26425]: I0217 15:33:09.603853 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:33:09.709202 master-0 kubenswrapper[26425]: I0217 15:33:09.709070 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") "
Feb 17 15:33:09.709202 master-0 kubenswrapper[26425]: I0217 15:33:09.709113 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") "
Feb 17 15:33:09.709202 master-0 kubenswrapper[26425]: I0217 15:33:09.709191 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") "
Feb 17 15:33:09.709202 master-0 kubenswrapper[26425]: I0217 15:33:09.709205 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") "
Feb 17 15:33:09.709727 master-0 kubenswrapper[26425]: I0217 15:33:09.709244 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") "
Feb 17 15:33:09.709727 master-0 kubenswrapper[26425]: I0217 15:33:09.709628 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:33:09.709727 master-0 kubenswrapper[26425]: I0217 15:33:09.709669 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log" (OuterVolumeSpecName: "var-log") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:33:09.709727 master-0 kubenswrapper[26425]: I0217 15:33:09.709690 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock" (OuterVolumeSpecName: "var-lock") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:33:09.709979 master-0 kubenswrapper[26425]: I0217 15:33:09.709884 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests" (OuterVolumeSpecName: "manifests") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:33:09.717613 master-0 kubenswrapper[26425]: I0217 15:33:09.717573 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:33:09.811833 master-0 kubenswrapper[26425]: I0217 15:33:09.811770 26425 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:33:09.811833 master-0 kubenswrapper[26425]: I0217 15:33:09.811826 26425 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") on node \"master-0\" DevicePath \"\""
Feb 17 15:33:09.812120 master-0 kubenswrapper[26425]: I0217 15:33:09.811844 26425 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:33:09.812120 master-0 kubenswrapper[26425]: I0217 15:33:09.811862 26425 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") on node \"master-0\" DevicePath \"\""
Feb 17 15:33:09.812120 master-0 kubenswrapper[26425]: I0217 15:33:09.811878 26425 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 17 15:33:09.979669 master-0 kubenswrapper[26425]: I0217 15:33:09.979478 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_32286c81635de6de1cf7f328273c1a49/startup-monitor/0.log"
Feb 17 15:33:09.979669 master-0 kubenswrapper[26425]: I0217 15:33:09.979531 26425 generic.go:334] "Generic (PLEG): container finished" podID="32286c81635de6de1cf7f328273c1a49" containerID="4b02007b99efaca413d6539c39e30b074c5f7a4327fab4f7d8375b8dcf8656a3" exitCode=137
Feb 17 15:33:09.979669 master-0 kubenswrapper[26425]: I0217 15:33:09.979569 26425 scope.go:117] "RemoveContainer" containerID="4b02007b99efaca413d6539c39e30b074c5f7a4327fab4f7d8375b8dcf8656a3"
Feb 17 15:33:09.979669 master-0 kubenswrapper[26425]: I0217 15:33:09.979641 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:33:10.002971 master-0 kubenswrapper[26425]: I0217 15:33:10.002934 26425 scope.go:117] "RemoveContainer" containerID="4b02007b99efaca413d6539c39e30b074c5f7a4327fab4f7d8375b8dcf8656a3"
Feb 17 15:33:10.003535 master-0 kubenswrapper[26425]: E0217 15:33:10.003490 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b02007b99efaca413d6539c39e30b074c5f7a4327fab4f7d8375b8dcf8656a3\": container with ID starting with 4b02007b99efaca413d6539c39e30b074c5f7a4327fab4f7d8375b8dcf8656a3 not found: ID does not exist" containerID="4b02007b99efaca413d6539c39e30b074c5f7a4327fab4f7d8375b8dcf8656a3"
Feb 17 15:33:10.003599 master-0 kubenswrapper[26425]: I0217 15:33:10.003548 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b02007b99efaca413d6539c39e30b074c5f7a4327fab4f7d8375b8dcf8656a3"} err="failed to get container status \"4b02007b99efaca413d6539c39e30b074c5f7a4327fab4f7d8375b8dcf8656a3\": rpc error: code = NotFound desc = could not find container \"4b02007b99efaca413d6539c39e30b074c5f7a4327fab4f7d8375b8dcf8656a3\": container with ID starting with 4b02007b99efaca413d6539c39e30b074c5f7a4327fab4f7d8375b8dcf8656a3 not found: ID does not exist"
Feb 17 15:33:10.414742 master-0 kubenswrapper[26425]: I0217 15:33:10.414677 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32286c81635de6de1cf7f328273c1a49" path="/var/lib/kubelet/pods/32286c81635de6de1cf7f328273c1a49/volumes"
Feb 17 15:33:10.415055 master-0 kubenswrapper[26425]: I0217 15:33:10.415018 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Feb 17 15:33:10.435023 master-0 kubenswrapper[26425]: I0217 15:33:10.434911 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 17 15:33:10.435023 master-0 kubenswrapper[26425]: I0217 15:33:10.434984 26425 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="ba39fc70-62e8-41d5-95f3-0e27983508a2"
Feb 17 15:33:10.456517 master-0 kubenswrapper[26425]: I0217 15:33:10.441582 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 17 15:33:10.456517 master-0 kubenswrapper[26425]: I0217 15:33:10.441659 26425 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="ba39fc70-62e8-41d5-95f3-0e27983508a2"
Feb 17 15:33:11.138630 master-0 kubenswrapper[26425]: I0217 15:33:11.138556 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 17 15:33:11.153192 master-0 kubenswrapper[26425]: I0217 15:33:11.153146 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:33:11.171435 master-0 kubenswrapper[26425]: I0217 15:33:11.171357 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 17 15:33:11.835759 master-0 kubenswrapper[26425]: I0217 15:33:11.835674 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 17 15:33:12.576830 master-0 kubenswrapper[26425]: I0217 15:33:12.576759 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-c8lzf"
Feb 17 15:33:12.657140 master-0 kubenswrapper[26425]: I0217 15:33:12.657039 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 17 15:33:13.139187 master-0 kubenswrapper[26425]: I0217 15:33:13.139122 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 17 15:33:13.817787 master-0 kubenswrapper[26425]: I0217 15:33:13.817712 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 17 15:33:14.203998 master-0 kubenswrapper[26425]: I0217 15:33:14.203907 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 17 15:33:14.286994 master-0 kubenswrapper[26425]: I0217 15:33:14.286905 26425 patch_prober.go:28] interesting pod/console-86d4dfb9dd-rz6cj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Feb 17 15:33:14.287368 master-0 kubenswrapper[26425]: I0217 15:33:14.287014 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" probeResult="failure" 
output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 17 15:33:14.967162 master-0 kubenswrapper[26425]: I0217 15:33:14.967075 26425 patch_prober.go:28] interesting pod/console-98f66b5dc-p2gxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 17 15:33:14.968298 master-0 kubenswrapper[26425]: I0217 15:33:14.967181 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 17 15:33:15.436372 master-0 kubenswrapper[26425]: I0217 15:33:15.436277 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 15:33:24.286377 master-0 kubenswrapper[26425]: I0217 15:33:24.286307 26425 patch_prober.go:28] interesting pod/console-86d4dfb9dd-rz6cj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 17 15:33:24.286377 master-0 kubenswrapper[26425]: I0217 15:33:24.286367 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 17 15:33:24.967399 master-0 kubenswrapper[26425]: I0217 15:33:24.967317 26425 patch_prober.go:28] interesting pod/console-98f66b5dc-p2gxf container/console namespace/openshift-console: Startup probe status=failure 
output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 17 15:33:24.967728 master-0 kubenswrapper[26425]: I0217 15:33:24.967423 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 17 15:33:30.102949 master-0 kubenswrapper[26425]: I0217 15:33:30.102884 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 15:33:30.103549 master-0 kubenswrapper[26425]: I0217 15:33:30.103341 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="kube-rbac-proxy-web" containerID="cri-o://51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0" gracePeriod=120 Feb 17 15:33:30.103549 master-0 kubenswrapper[26425]: I0217 15:33:30.103354 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="kube-rbac-proxy-metric" containerID="cri-o://b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b" gracePeriod=120 Feb 17 15:33:30.103549 master-0 kubenswrapper[26425]: I0217 15:33:30.103428 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="kube-rbac-proxy" containerID="cri-o://1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917" gracePeriod=120 Feb 17 15:33:30.103549 master-0 kubenswrapper[26425]: I0217 15:33:30.103517 26425 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-monitoring/alertmanager-main-0" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="prom-label-proxy" containerID="cri-o://48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801" gracePeriod=120 Feb 17 15:33:30.103715 master-0 kubenswrapper[26425]: I0217 15:33:30.103382 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="config-reloader" containerID="cri-o://716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd" gracePeriod=120 Feb 17 15:33:30.103766 master-0 kubenswrapper[26425]: I0217 15:33:30.103714 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="alertmanager" containerID="cri-o://b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e" gracePeriod=120 Feb 17 15:33:30.612673 master-0 kubenswrapper[26425]: I0217 15:33:30.612606 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:30.707546 master-0 kubenswrapper[26425]: I0217 15:33:30.707377 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1115aa66-7b5c-4863-aa91-b28baff7e922-tls-assets\") pod \"1115aa66-7b5c-4863-aa91-b28baff7e922\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " Feb 17 15:33:30.707546 master-0 kubenswrapper[26425]: I0217 15:33:30.707520 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy\") pod \"1115aa66-7b5c-4863-aa91-b28baff7e922\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " Feb 17 15:33:30.707917 master-0 kubenswrapper[26425]: I0217 15:33:30.707589 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy-web\") pod \"1115aa66-7b5c-4863-aa91-b28baff7e922\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " Feb 17 15:33:30.707917 master-0 kubenswrapper[26425]: I0217 15:33:30.707659 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1115aa66-7b5c-4863-aa91-b28baff7e922-config-out\") pod \"1115aa66-7b5c-4863-aa91-b28baff7e922\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " Feb 17 15:33:30.707917 master-0 kubenswrapper[26425]: I0217 15:33:30.707727 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle\") pod \"1115aa66-7b5c-4863-aa91-b28baff7e922\" (UID: 
\"1115aa66-7b5c-4863-aa91-b28baff7e922\") " Feb 17 15:33:30.707917 master-0 kubenswrapper[26425]: I0217 15:33:30.707771 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-metrics-client-ca\") pod \"1115aa66-7b5c-4863-aa91-b28baff7e922\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " Feb 17 15:33:30.707917 master-0 kubenswrapper[26425]: I0217 15:33:30.707810 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks8hc\" (UniqueName: \"kubernetes.io/projected/1115aa66-7b5c-4863-aa91-b28baff7e922-kube-api-access-ks8hc\") pod \"1115aa66-7b5c-4863-aa91-b28baff7e922\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " Feb 17 15:33:30.707917 master-0 kubenswrapper[26425]: I0217 15:33:30.707845 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-config-volume\") pod \"1115aa66-7b5c-4863-aa91-b28baff7e922\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " Feb 17 15:33:30.707917 master-0 kubenswrapper[26425]: I0217 15:33:30.707904 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-web-config\") pod \"1115aa66-7b5c-4863-aa91-b28baff7e922\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " Feb 17 15:33:30.708407 master-0 kubenswrapper[26425]: I0217 15:33:30.707946 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy-metric\") pod \"1115aa66-7b5c-4863-aa91-b28baff7e922\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " Feb 17 15:33:30.708407 master-0 kubenswrapper[26425]: 
I0217 15:33:30.707994 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-main-db\") pod \"1115aa66-7b5c-4863-aa91-b28baff7e922\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " Feb 17 15:33:30.708407 master-0 kubenswrapper[26425]: I0217 15:33:30.708023 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls\") pod \"1115aa66-7b5c-4863-aa91-b28baff7e922\" (UID: \"1115aa66-7b5c-4863-aa91-b28baff7e922\") " Feb 17 15:33:30.709226 master-0 kubenswrapper[26425]: I0217 15:33:30.708955 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "1115aa66-7b5c-4863-aa91-b28baff7e922" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:33:30.712376 master-0 kubenswrapper[26425]: I0217 15:33:30.712311 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1115aa66-7b5c-4863-aa91-b28baff7e922-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "1115aa66-7b5c-4863-aa91-b28baff7e922" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:33:30.713501 master-0 kubenswrapper[26425]: I0217 15:33:30.713348 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "1115aa66-7b5c-4863-aa91-b28baff7e922" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:33:30.716318 master-0 kubenswrapper[26425]: I0217 15:33:30.716061 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "1115aa66-7b5c-4863-aa91-b28baff7e922" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:33:30.720273 master-0 kubenswrapper[26425]: I0217 15:33:30.720165 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "1115aa66-7b5c-4863-aa91-b28baff7e922" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:33:30.721634 master-0 kubenswrapper[26425]: I0217 15:33:30.720872 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1115aa66-7b5c-4863-aa91-b28baff7e922-config-out" (OuterVolumeSpecName: "config-out") pod "1115aa66-7b5c-4863-aa91-b28baff7e922" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:33:30.721634 master-0 kubenswrapper[26425]: I0217 15:33:30.721119 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "1115aa66-7b5c-4863-aa91-b28baff7e922" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:33:30.728017 master-0 kubenswrapper[26425]: I0217 15:33:30.727915 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "1115aa66-7b5c-4863-aa91-b28baff7e922" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:33:30.730975 master-0 kubenswrapper[26425]: I0217 15:33:30.730889 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-config-volume" (OuterVolumeSpecName: "config-volume") pod "1115aa66-7b5c-4863-aa91-b28baff7e922" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:33:30.735485 master-0 kubenswrapper[26425]: I0217 15:33:30.735357 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "1115aa66-7b5c-4863-aa91-b28baff7e922" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:33:30.737846 master-0 kubenswrapper[26425]: I0217 15:33:30.737716 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1115aa66-7b5c-4863-aa91-b28baff7e922-kube-api-access-ks8hc" (OuterVolumeSpecName: "kube-api-access-ks8hc") pod "1115aa66-7b5c-4863-aa91-b28baff7e922" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922"). InnerVolumeSpecName "kube-api-access-ks8hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:33:30.800031 master-0 kubenswrapper[26425]: I0217 15:33:30.799844 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-web-config" (OuterVolumeSpecName: "web-config") pod "1115aa66-7b5c-4863-aa91-b28baff7e922" (UID: "1115aa66-7b5c-4863-aa91-b28baff7e922"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:33:30.809392 master-0 kubenswrapper[26425]: I0217 15:33:30.809325 26425 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:30.809392 master-0 kubenswrapper[26425]: I0217 15:33:30.809365 26425 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1115aa66-7b5c-4863-aa91-b28baff7e922-config-out\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:30.809392 master-0 kubenswrapper[26425]: I0217 15:33:30.809380 26425 reconciler_common.go:293] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:30.809392 master-0 kubenswrapper[26425]: I0217 15:33:30.809390 26425 
reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1115aa66-7b5c-4863-aa91-b28baff7e922-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:30.809392 master-0 kubenswrapper[26425]: I0217 15:33:30.809403 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks8hc\" (UniqueName: \"kubernetes.io/projected/1115aa66-7b5c-4863-aa91-b28baff7e922-kube-api-access-ks8hc\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:30.810169 master-0 kubenswrapper[26425]: I0217 15:33:30.809413 26425 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:30.810169 master-0 kubenswrapper[26425]: I0217 15:33:30.809424 26425 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-web-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:30.810169 master-0 kubenswrapper[26425]: I0217 15:33:30.809432 26425 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:30.810169 master-0 kubenswrapper[26425]: I0217 15:33:30.809442 26425 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1115aa66-7b5c-4863-aa91-b28baff7e922-alertmanager-main-db\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:30.810169 master-0 kubenswrapper[26425]: I0217 15:33:30.809468 26425 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-main-tls\") on node 
\"master-0\" DevicePath \"\"" Feb 17 15:33:30.810169 master-0 kubenswrapper[26425]: I0217 15:33:30.809480 26425 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1115aa66-7b5c-4863-aa91-b28baff7e922-tls-assets\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:30.810169 master-0 kubenswrapper[26425]: I0217 15:33:30.809492 26425 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1115aa66-7b5c-4863-aa91-b28baff7e922-secret-alertmanager-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:31.186349 master-0 kubenswrapper[26425]: I0217 15:33:31.186307 26425 generic.go:334] "Generic (PLEG): container finished" podID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerID="48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801" exitCode=0 Feb 17 15:33:31.186349 master-0 kubenswrapper[26425]: I0217 15:33:31.186339 26425 generic.go:334] "Generic (PLEG): container finished" podID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerID="b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b" exitCode=0 Feb 17 15:33:31.186349 master-0 kubenswrapper[26425]: I0217 15:33:31.186350 26425 generic.go:334] "Generic (PLEG): container finished" podID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerID="1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917" exitCode=0 Feb 17 15:33:31.186349 master-0 kubenswrapper[26425]: I0217 15:33:31.186357 26425 generic.go:334] "Generic (PLEG): container finished" podID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerID="51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0" exitCode=0 Feb 17 15:33:31.186349 master-0 kubenswrapper[26425]: I0217 15:33:31.186365 26425 generic.go:334] "Generic (PLEG): container finished" podID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerID="716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd" exitCode=0 Feb 17 
15:33:31.187261 master-0 kubenswrapper[26425]: I0217 15:33:31.186374 26425 generic.go:334] "Generic (PLEG): container finished" podID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerID="b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e" exitCode=0 Feb 17 15:33:31.187261 master-0 kubenswrapper[26425]: I0217 15:33:31.186410 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerDied","Data":"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801"} Feb 17 15:33:31.187261 master-0 kubenswrapper[26425]: I0217 15:33:31.186550 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerDied","Data":"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b"} Feb 17 15:33:31.187261 master-0 kubenswrapper[26425]: I0217 15:33:31.186437 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.187261 master-0 kubenswrapper[26425]: I0217 15:33:31.186607 26425 scope.go:117] "RemoveContainer" containerID="48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801" Feb 17 15:33:31.187261 master-0 kubenswrapper[26425]: I0217 15:33:31.186584 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerDied","Data":"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917"} Feb 17 15:33:31.187261 master-0 kubenswrapper[26425]: I0217 15:33:31.186841 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerDied","Data":"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0"} Feb 17 15:33:31.187261 master-0 kubenswrapper[26425]: I0217 15:33:31.186914 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerDied","Data":"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd"} Feb 17 15:33:31.187261 master-0 kubenswrapper[26425]: I0217 15:33:31.186944 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerDied","Data":"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e"} Feb 17 15:33:31.187261 master-0 kubenswrapper[26425]: I0217 15:33:31.186975 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1115aa66-7b5c-4863-aa91-b28baff7e922","Type":"ContainerDied","Data":"3dda46c86732a971ca11da424c7442ca446195bdca599d8c908ab71c564b253e"} Feb 17 15:33:31.220271 master-0 kubenswrapper[26425]: I0217 15:33:31.220139 26425 
scope.go:117] "RemoveContainer" containerID="b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b" Feb 17 15:33:31.284144 master-0 kubenswrapper[26425]: I0217 15:33:31.284106 26425 scope.go:117] "RemoveContainer" containerID="1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917" Feb 17 15:33:31.294385 master-0 kubenswrapper[26425]: I0217 15:33:31.287964 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 15:33:31.297777 master-0 kubenswrapper[26425]: I0217 15:33:31.297696 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 15:33:31.324978 master-0 kubenswrapper[26425]: I0217 15:33:31.324897 26425 scope.go:117] "RemoveContainer" containerID="51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0" Feb 17 15:33:31.337996 master-0 kubenswrapper[26425]: I0217 15:33:31.337922 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: E0217 15:33:31.338299 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="init-config-reloader" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338323 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="init-config-reloader" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: E0217 15:33:31.338337 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="kube-rbac-proxy-metric" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338345 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="kube-rbac-proxy-metric" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: E0217 15:33:31.338362 26425 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="kube-rbac-proxy" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338371 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="kube-rbac-proxy" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: E0217 15:33:31.338389 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="kube-rbac-proxy-web" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338397 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="kube-rbac-proxy-web" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: E0217 15:33:31.338431 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="alertmanager" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338439 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="alertmanager" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: E0217 15:33:31.338452 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="config-reloader" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338489 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="config-reloader" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: E0217 15:33:31.338523 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="prom-label-proxy" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338534 26425 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="prom-label-proxy" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: E0217 15:33:31.338557 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32286c81635de6de1cf7f328273c1a49" containerName="startup-monitor" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338565 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="32286c81635de6de1cf7f328273c1a49" containerName="startup-monitor" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338714 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="kube-rbac-proxy" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338729 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="prom-label-proxy" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338762 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="alertmanager" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338776 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="32286c81635de6de1cf7f328273c1a49" containerName="startup-monitor" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338789 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="kube-rbac-proxy-web" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338805 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" containerName="config-reloader" Feb 17 15:33:31.338808 master-0 kubenswrapper[26425]: I0217 15:33:31.338821 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" 
containerName="kube-rbac-proxy-metric" Feb 17 15:33:31.342671 master-0 kubenswrapper[26425]: I0217 15:33:31.342625 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.346792 master-0 kubenswrapper[26425]: I0217 15:33:31.345183 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 17 15:33:31.346792 master-0 kubenswrapper[26425]: I0217 15:33:31.345659 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 17 15:33:31.347533 master-0 kubenswrapper[26425]: I0217 15:33:31.347489 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 17 15:33:31.347623 master-0 kubenswrapper[26425]: I0217 15:33:31.347545 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 17 15:33:31.347737 master-0 kubenswrapper[26425]: I0217 15:33:31.347695 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 17 15:33:31.347822 master-0 kubenswrapper[26425]: I0217 15:33:31.347786 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-pv4xc" Feb 17 15:33:31.348067 master-0 kubenswrapper[26425]: I0217 15:33:31.347986 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 17 15:33:31.348067 master-0 kubenswrapper[26425]: I0217 15:33:31.348048 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 17 15:33:31.355477 master-0 kubenswrapper[26425]: I0217 15:33:31.355410 26425 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 17 15:33:31.358166 master-0 kubenswrapper[26425]: I0217 15:33:31.358119 26425 scope.go:117] "RemoveContainer" containerID="716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd" Feb 17 15:33:31.367120 master-0 kubenswrapper[26425]: I0217 15:33:31.367066 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 15:33:31.420609 master-0 kubenswrapper[26425]: I0217 15:33:31.420560 26425 scope.go:117] "RemoveContainer" containerID="b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e" Feb 17 15:33:31.442026 master-0 kubenswrapper[26425]: I0217 15:33:31.441981 26425 scope.go:117] "RemoveContainer" containerID="1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9" Feb 17 15:33:31.459923 master-0 kubenswrapper[26425]: I0217 15:33:31.459881 26425 scope.go:117] "RemoveContainer" containerID="48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801" Feb 17 15:33:31.460201 master-0 kubenswrapper[26425]: E0217 15:33:31.460165 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801\": container with ID starting with 48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801 not found: ID does not exist" containerID="48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801" Feb 17 15:33:31.460274 master-0 kubenswrapper[26425]: I0217 15:33:31.460199 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801"} err="failed to get container status \"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801\": rpc error: code = NotFound desc = could not find container \"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801\": container with ID 
starting with 48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801 not found: ID does not exist" Feb 17 15:33:31.460274 master-0 kubenswrapper[26425]: I0217 15:33:31.460220 26425 scope.go:117] "RemoveContainer" containerID="b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b" Feb 17 15:33:31.460564 master-0 kubenswrapper[26425]: E0217 15:33:31.460538 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b\": container with ID starting with b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b not found: ID does not exist" containerID="b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b" Feb 17 15:33:31.460564 master-0 kubenswrapper[26425]: I0217 15:33:31.460558 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b"} err="failed to get container status \"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b\": rpc error: code = NotFound desc = could not find container \"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b\": container with ID starting with b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b not found: ID does not exist" Feb 17 15:33:31.460697 master-0 kubenswrapper[26425]: I0217 15:33:31.460571 26425 scope.go:117] "RemoveContainer" containerID="1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917" Feb 17 15:33:31.461108 master-0 kubenswrapper[26425]: E0217 15:33:31.461071 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917\": container with ID starting with 1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917 not found: ID does not exist" 
containerID="1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917" Feb 17 15:33:31.461108 master-0 kubenswrapper[26425]: I0217 15:33:31.461095 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917"} err="failed to get container status \"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917\": rpc error: code = NotFound desc = could not find container \"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917\": container with ID starting with 1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917 not found: ID does not exist" Feb 17 15:33:31.461108 master-0 kubenswrapper[26425]: I0217 15:33:31.461107 26425 scope.go:117] "RemoveContainer" containerID="51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0" Feb 17 15:33:31.461537 master-0 kubenswrapper[26425]: E0217 15:33:31.461446 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0\": container with ID starting with 51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0 not found: ID does not exist" containerID="51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0" Feb 17 15:33:31.461699 master-0 kubenswrapper[26425]: I0217 15:33:31.461663 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0"} err="failed to get container status \"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0\": rpc error: code = NotFound desc = could not find container \"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0\": container with ID starting with 51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0 not found: ID does not exist" Feb 17 15:33:31.461795 master-0 
kubenswrapper[26425]: I0217 15:33:31.461780 26425 scope.go:117] "RemoveContainer" containerID="716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd" Feb 17 15:33:31.462188 master-0 kubenswrapper[26425]: E0217 15:33:31.462158 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd\": container with ID starting with 716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd not found: ID does not exist" containerID="716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd" Feb 17 15:33:31.462188 master-0 kubenswrapper[26425]: I0217 15:33:31.462183 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd"} err="failed to get container status \"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd\": rpc error: code = NotFound desc = could not find container \"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd\": container with ID starting with 716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd not found: ID does not exist" Feb 17 15:33:31.462309 master-0 kubenswrapper[26425]: I0217 15:33:31.462197 26425 scope.go:117] "RemoveContainer" containerID="b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e" Feb 17 15:33:31.462746 master-0 kubenswrapper[26425]: E0217 15:33:31.462723 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e\": container with ID starting with b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e not found: ID does not exist" containerID="b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e" Feb 17 15:33:31.462746 master-0 kubenswrapper[26425]: I0217 15:33:31.462743 
26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e"} err="failed to get container status \"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e\": rpc error: code = NotFound desc = could not find container \"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e\": container with ID starting with b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e not found: ID does not exist" Feb 17 15:33:31.462902 master-0 kubenswrapper[26425]: I0217 15:33:31.462759 26425 scope.go:117] "RemoveContainer" containerID="1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9" Feb 17 15:33:31.463084 master-0 kubenswrapper[26425]: E0217 15:33:31.463054 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9\": container with ID starting with 1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9 not found: ID does not exist" containerID="1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9" Feb 17 15:33:31.463238 master-0 kubenswrapper[26425]: I0217 15:33:31.463211 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9"} err="failed to get container status \"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9\": rpc error: code = NotFound desc = could not find container \"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9\": container with ID starting with 1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9 not found: ID does not exist" Feb 17 15:33:31.463341 master-0 kubenswrapper[26425]: I0217 15:33:31.463322 26425 scope.go:117] "RemoveContainer" 
containerID="48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801" Feb 17 15:33:31.463720 master-0 kubenswrapper[26425]: I0217 15:33:31.463673 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801"} err="failed to get container status \"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801\": rpc error: code = NotFound desc = could not find container \"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801\": container with ID starting with 48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801 not found: ID does not exist" Feb 17 15:33:31.463720 master-0 kubenswrapper[26425]: I0217 15:33:31.463694 26425 scope.go:117] "RemoveContainer" containerID="b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b" Feb 17 15:33:31.464191 master-0 kubenswrapper[26425]: I0217 15:33:31.464166 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b"} err="failed to get container status \"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b\": rpc error: code = NotFound desc = could not find container \"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b\": container with ID starting with b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b not found: ID does not exist" Feb 17 15:33:31.464305 master-0 kubenswrapper[26425]: I0217 15:33:31.464288 26425 scope.go:117] "RemoveContainer" containerID="1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917" Feb 17 15:33:31.464639 master-0 kubenswrapper[26425]: I0217 15:33:31.464609 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917"} err="failed to get container status 
\"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917\": rpc error: code = NotFound desc = could not find container \"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917\": container with ID starting with 1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917 not found: ID does not exist" Feb 17 15:33:31.464639 master-0 kubenswrapper[26425]: I0217 15:33:31.464633 26425 scope.go:117] "RemoveContainer" containerID="51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0" Feb 17 15:33:31.465022 master-0 kubenswrapper[26425]: I0217 15:33:31.464997 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0"} err="failed to get container status \"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0\": rpc error: code = NotFound desc = could not find container \"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0\": container with ID starting with 51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0 not found: ID does not exist" Feb 17 15:33:31.465199 master-0 kubenswrapper[26425]: I0217 15:33:31.465125 26425 scope.go:117] "RemoveContainer" containerID="716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd" Feb 17 15:33:31.465611 master-0 kubenswrapper[26425]: I0217 15:33:31.465587 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd"} err="failed to get container status \"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd\": rpc error: code = NotFound desc = could not find container \"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd\": container with ID starting with 716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd not found: ID does not exist" Feb 17 15:33:31.465712 master-0 kubenswrapper[26425]: I0217 
15:33:31.465697 26425 scope.go:117] "RemoveContainer" containerID="b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e" Feb 17 15:33:31.466127 master-0 kubenswrapper[26425]: I0217 15:33:31.466086 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e"} err="failed to get container status \"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e\": rpc error: code = NotFound desc = could not find container \"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e\": container with ID starting with b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e not found: ID does not exist" Feb 17 15:33:31.466223 master-0 kubenswrapper[26425]: I0217 15:33:31.466129 26425 scope.go:117] "RemoveContainer" containerID="1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9" Feb 17 15:33:31.466793 master-0 kubenswrapper[26425]: I0217 15:33:31.466767 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9"} err="failed to get container status \"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9\": rpc error: code = NotFound desc = could not find container \"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9\": container with ID starting with 1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9 not found: ID does not exist" Feb 17 15:33:31.466793 master-0 kubenswrapper[26425]: I0217 15:33:31.466788 26425 scope.go:117] "RemoveContainer" containerID="48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801" Feb 17 15:33:31.467048 master-0 kubenswrapper[26425]: I0217 15:33:31.467022 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801"} err="failed to get 
container status \"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801\": rpc error: code = NotFound desc = could not find container \"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801\": container with ID starting with 48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801 not found: ID does not exist" Feb 17 15:33:31.467230 master-0 kubenswrapper[26425]: I0217 15:33:31.467211 26425 scope.go:117] "RemoveContainer" containerID="b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b" Feb 17 15:33:31.467798 master-0 kubenswrapper[26425]: I0217 15:33:31.467770 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b"} err="failed to get container status \"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b\": rpc error: code = NotFound desc = could not find container \"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b\": container with ID starting with b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b not found: ID does not exist" Feb 17 15:33:31.467798 master-0 kubenswrapper[26425]: I0217 15:33:31.467791 26425 scope.go:117] "RemoveContainer" containerID="1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917" Feb 17 15:33:31.468200 master-0 kubenswrapper[26425]: I0217 15:33:31.468176 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917"} err="failed to get container status \"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917\": rpc error: code = NotFound desc = could not find container \"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917\": container with ID starting with 1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917 not found: ID does not exist" Feb 17 15:33:31.468312 master-0 kubenswrapper[26425]: 
I0217 15:33:31.468294 26425 scope.go:117] "RemoveContainer" containerID="51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0" Feb 17 15:33:31.468666 master-0 kubenswrapper[26425]: I0217 15:33:31.468640 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0"} err="failed to get container status \"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0\": rpc error: code = NotFound desc = could not find container \"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0\": container with ID starting with 51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0 not found: ID does not exist" Feb 17 15:33:31.468666 master-0 kubenswrapper[26425]: I0217 15:33:31.468659 26425 scope.go:117] "RemoveContainer" containerID="716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd" Feb 17 15:33:31.469061 master-0 kubenswrapper[26425]: I0217 15:33:31.469030 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd"} err="failed to get container status \"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd\": rpc error: code = NotFound desc = could not find container \"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd\": container with ID starting with 716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd not found: ID does not exist" Feb 17 15:33:31.469061 master-0 kubenswrapper[26425]: I0217 15:33:31.469053 26425 scope.go:117] "RemoveContainer" containerID="b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e" Feb 17 15:33:31.469330 master-0 kubenswrapper[26425]: I0217 15:33:31.469304 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e"} err="failed to 
get container status \"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e\": rpc error: code = NotFound desc = could not find container \"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e\": container with ID starting with b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e not found: ID does not exist" Feb 17 15:33:31.469427 master-0 kubenswrapper[26425]: I0217 15:33:31.469411 26425 scope.go:117] "RemoveContainer" containerID="1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9" Feb 17 15:33:31.469746 master-0 kubenswrapper[26425]: I0217 15:33:31.469719 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9"} err="failed to get container status \"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9\": rpc error: code = NotFound desc = could not find container \"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9\": container with ID starting with 1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9 not found: ID does not exist" Feb 17 15:33:31.469746 master-0 kubenswrapper[26425]: I0217 15:33:31.469740 26425 scope.go:117] "RemoveContainer" containerID="48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801" Feb 17 15:33:31.470136 master-0 kubenswrapper[26425]: I0217 15:33:31.470109 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801"} err="failed to get container status \"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801\": rpc error: code = NotFound desc = could not find container \"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801\": container with ID starting with 48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801 not found: ID does not exist" Feb 17 15:33:31.470239 master-0 
kubenswrapper[26425]: I0217 15:33:31.470222 26425 scope.go:117] "RemoveContainer" containerID="b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b" Feb 17 15:33:31.470806 master-0 kubenswrapper[26425]: I0217 15:33:31.470748 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b"} err="failed to get container status \"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b\": rpc error: code = NotFound desc = could not find container \"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b\": container with ID starting with b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b not found: ID does not exist" Feb 17 15:33:31.470806 master-0 kubenswrapper[26425]: I0217 15:33:31.470799 26425 scope.go:117] "RemoveContainer" containerID="1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917" Feb 17 15:33:31.471245 master-0 kubenswrapper[26425]: I0217 15:33:31.471213 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917"} err="failed to get container status \"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917\": rpc error: code = NotFound desc = could not find container \"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917\": container with ID starting with 1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917 not found: ID does not exist" Feb 17 15:33:31.471245 master-0 kubenswrapper[26425]: I0217 15:33:31.471236 26425 scope.go:117] "RemoveContainer" containerID="51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0" Feb 17 15:33:31.471659 master-0 kubenswrapper[26425]: I0217 15:33:31.471634 26425 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0"} err="failed to get container status \"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0\": rpc error: code = NotFound desc = could not find container \"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0\": container with ID starting with 51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0 not found: ID does not exist" Feb 17 15:33:31.471806 master-0 kubenswrapper[26425]: I0217 15:33:31.471789 26425 scope.go:117] "RemoveContainer" containerID="716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd" Feb 17 15:33:31.472278 master-0 kubenswrapper[26425]: I0217 15:33:31.472244 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd"} err="failed to get container status \"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd\": rpc error: code = NotFound desc = could not find container \"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd\": container with ID starting with 716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd not found: ID does not exist" Feb 17 15:33:31.472278 master-0 kubenswrapper[26425]: I0217 15:33:31.472276 26425 scope.go:117] "RemoveContainer" containerID="b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e" Feb 17 15:33:31.472945 master-0 kubenswrapper[26425]: I0217 15:33:31.472916 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e"} err="failed to get container status \"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e\": rpc error: code = NotFound desc = could not find container \"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e\": container with ID starting with 
b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e not found: ID does not exist" Feb 17 15:33:31.472945 master-0 kubenswrapper[26425]: I0217 15:33:31.472943 26425 scope.go:117] "RemoveContainer" containerID="1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9" Feb 17 15:33:31.473374 master-0 kubenswrapper[26425]: I0217 15:33:31.473349 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9"} err="failed to get container status \"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9\": rpc error: code = NotFound desc = could not find container \"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9\": container with ID starting with 1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9 not found: ID does not exist" Feb 17 15:33:31.473537 master-0 kubenswrapper[26425]: I0217 15:33:31.473520 26425 scope.go:117] "RemoveContainer" containerID="48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801" Feb 17 15:33:31.473891 master-0 kubenswrapper[26425]: I0217 15:33:31.473861 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801"} err="failed to get container status \"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801\": rpc error: code = NotFound desc = could not find container \"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801\": container with ID starting with 48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801 not found: ID does not exist" Feb 17 15:33:31.473891 master-0 kubenswrapper[26425]: I0217 15:33:31.473886 26425 scope.go:117] "RemoveContainer" containerID="b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b" Feb 17 15:33:31.474279 master-0 kubenswrapper[26425]: I0217 15:33:31.474254 26425 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b"} err="failed to get container status \"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b\": rpc error: code = NotFound desc = could not find container \"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b\": container with ID starting with b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b not found: ID does not exist" Feb 17 15:33:31.474279 master-0 kubenswrapper[26425]: I0217 15:33:31.474274 26425 scope.go:117] "RemoveContainer" containerID="1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917" Feb 17 15:33:31.476081 master-0 kubenswrapper[26425]: I0217 15:33:31.476015 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917"} err="failed to get container status \"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917\": rpc error: code = NotFound desc = could not find container \"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917\": container with ID starting with 1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917 not found: ID does not exist" Feb 17 15:33:31.476081 master-0 kubenswrapper[26425]: I0217 15:33:31.476051 26425 scope.go:117] "RemoveContainer" containerID="51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0" Feb 17 15:33:31.476579 master-0 kubenswrapper[26425]: I0217 15:33:31.476528 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0"} err="failed to get container status \"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0\": rpc error: code = NotFound desc = could not find container \"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0\": container with ID starting 
with 51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0 not found: ID does not exist" Feb 17 15:33:31.476679 master-0 kubenswrapper[26425]: I0217 15:33:31.476592 26425 scope.go:117] "RemoveContainer" containerID="716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd" Feb 17 15:33:31.477006 master-0 kubenswrapper[26425]: I0217 15:33:31.476975 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd"} err="failed to get container status \"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd\": rpc error: code = NotFound desc = could not find container \"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd\": container with ID starting with 716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd not found: ID does not exist" Feb 17 15:33:31.477166 master-0 kubenswrapper[26425]: I0217 15:33:31.477150 26425 scope.go:117] "RemoveContainer" containerID="b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e" Feb 17 15:33:31.477575 master-0 kubenswrapper[26425]: I0217 15:33:31.477529 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e"} err="failed to get container status \"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e\": rpc error: code = NotFound desc = could not find container \"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e\": container with ID starting with b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e not found: ID does not exist" Feb 17 15:33:31.477575 master-0 kubenswrapper[26425]: I0217 15:33:31.477570 26425 scope.go:117] "RemoveContainer" containerID="1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9" Feb 17 15:33:31.478185 master-0 kubenswrapper[26425]: I0217 15:33:31.478121 26425 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9"} err="failed to get container status \"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9\": rpc error: code = NotFound desc = could not find container \"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9\": container with ID starting with 1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9 not found: ID does not exist" Feb 17 15:33:31.478185 master-0 kubenswrapper[26425]: I0217 15:33:31.478172 26425 scope.go:117] "RemoveContainer" containerID="48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801" Feb 17 15:33:31.478712 master-0 kubenswrapper[26425]: I0217 15:33:31.478679 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801"} err="failed to get container status \"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801\": rpc error: code = NotFound desc = could not find container \"48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801\": container with ID starting with 48eefb4c47082e47705982b6d3e23ed0d0fd6d81619e032eaaaf2c26c367e801 not found: ID does not exist" Feb 17 15:33:31.479039 master-0 kubenswrapper[26425]: I0217 15:33:31.478837 26425 scope.go:117] "RemoveContainer" containerID="b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b" Feb 17 15:33:31.479784 master-0 kubenswrapper[26425]: I0217 15:33:31.479760 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b"} err="failed to get container status \"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b\": rpc error: code = NotFound desc = could not find container \"b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b\": 
container with ID starting with b9340cfae46f05bea55370f2378c378ea04dbb16e7dabdee6960fdb56405946b not found: ID does not exist" Feb 17 15:33:31.479924 master-0 kubenswrapper[26425]: I0217 15:33:31.479908 26425 scope.go:117] "RemoveContainer" containerID="1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917" Feb 17 15:33:31.480344 master-0 kubenswrapper[26425]: I0217 15:33:31.480310 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917"} err="failed to get container status \"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917\": rpc error: code = NotFound desc = could not find container \"1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917\": container with ID starting with 1225ec6f862608c487e1fc739c0883960553aa7368eb1407f6beceddb5a55917 not found: ID does not exist" Feb 17 15:33:31.480657 master-0 kubenswrapper[26425]: I0217 15:33:31.480619 26425 scope.go:117] "RemoveContainer" containerID="51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0" Feb 17 15:33:31.482015 master-0 kubenswrapper[26425]: I0217 15:33:31.481987 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0"} err="failed to get container status \"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0\": rpc error: code = NotFound desc = could not find container \"51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0\": container with ID starting with 51bf7c2cc5787731f9a4e056c9d025a75ca18796d70ed53c1754fd589052bff0 not found: ID does not exist" Feb 17 15:33:31.482143 master-0 kubenswrapper[26425]: I0217 15:33:31.482126 26425 scope.go:117] "RemoveContainer" containerID="716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd" Feb 17 15:33:31.482548 master-0 kubenswrapper[26425]: I0217 15:33:31.482518 
26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd"} err="failed to get container status \"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd\": rpc error: code = NotFound desc = could not find container \"716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd\": container with ID starting with 716aab54fa16dddb4c1c062f6fbbe3252031ed76d173360c315c957ed91493cd not found: ID does not exist" Feb 17 15:33:31.482548 master-0 kubenswrapper[26425]: I0217 15:33:31.482542 26425 scope.go:117] "RemoveContainer" containerID="b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e" Feb 17 15:33:31.482978 master-0 kubenswrapper[26425]: I0217 15:33:31.482949 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e"} err="failed to get container status \"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e\": rpc error: code = NotFound desc = could not find container \"b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e\": container with ID starting with b0be91c8243360cd5d86c9079631d5d383ef2bdaf894fed427267cc0ed4ef78e not found: ID does not exist" Feb 17 15:33:31.483104 master-0 kubenswrapper[26425]: I0217 15:33:31.483080 26425 scope.go:117] "RemoveContainer" containerID="1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9" Feb 17 15:33:31.483597 master-0 kubenswrapper[26425]: I0217 15:33:31.483550 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9"} err="failed to get container status \"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9\": rpc error: code = NotFound desc = could not find container 
\"1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9\": container with ID starting with 1f05b09fc33c6329ac0a4af27d77f0b42789d7f21d0bdef18a8db7f03f55d9e9 not found: ID does not exist" Feb 17 15:33:31.524538 master-0 kubenswrapper[26425]: I0217 15:33:31.524484 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.524758 master-0 kubenswrapper[26425]: I0217 15:33:31.524668 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr5hl\" (UniqueName: \"kubernetes.io/projected/0b0014ea-f94d-4153-be2f-7cac6a262f6f-kube-api-access-nr5hl\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.524838 master-0 kubenswrapper[26425]: I0217 15:33:31.524795 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.524838 master-0 kubenswrapper[26425]: I0217 15:33:31.524830 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b0014ea-f94d-4153-be2f-7cac6a262f6f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 
15:33:31.524911 master-0 kubenswrapper[26425]: I0217 15:33:31.524872 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/0b0014ea-f94d-4153-be2f-7cac6a262f6f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.524945 master-0 kubenswrapper[26425]: I0217 15:33:31.524909 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0b0014ea-f94d-4153-be2f-7cac6a262f6f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.524945 master-0 kubenswrapper[26425]: I0217 15:33:31.524940 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.525008 master-0 kubenswrapper[26425]: I0217 15:33:31.524990 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-web-config\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.525067 master-0 kubenswrapper[26425]: I0217 15:33:31.525037 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0b0014ea-f94d-4153-be2f-7cac6a262f6f-tls-assets\") pod \"alertmanager-main-0\" 
(UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.525155 master-0 kubenswrapper[26425]: I0217 15:33:31.525127 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-config-volume\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.525313 master-0 kubenswrapper[26425]: I0217 15:33:31.525297 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.525392 master-0 kubenswrapper[26425]: I0217 15:33:31.525375 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0b0014ea-f94d-4153-be2f-7cac6a262f6f-config-out\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.627506 master-0 kubenswrapper[26425]: I0217 15:33:31.627417 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr5hl\" (UniqueName: \"kubernetes.io/projected/0b0014ea-f94d-4153-be2f-7cac6a262f6f-kube-api-access-nr5hl\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.627938 master-0 kubenswrapper[26425]: I0217 15:33:31.627898 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" 
(UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.628052 master-0 kubenswrapper[26425]: I0217 15:33:31.627956 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b0014ea-f94d-4153-be2f-7cac6a262f6f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.628052 master-0 kubenswrapper[26425]: I0217 15:33:31.627993 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/0b0014ea-f94d-4153-be2f-7cac6a262f6f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.628052 master-0 kubenswrapper[26425]: I0217 15:33:31.628036 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0b0014ea-f94d-4153-be2f-7cac6a262f6f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.628244 master-0 kubenswrapper[26425]: I0217 15:33:31.628077 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.628303 master-0 kubenswrapper[26425]: I0217 15:33:31.628275 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-web-config\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.628362 master-0 kubenswrapper[26425]: I0217 15:33:31.628349 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0b0014ea-f94d-4153-be2f-7cac6a262f6f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.628419 master-0 kubenswrapper[26425]: I0217 15:33:31.628391 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-config-volume\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.628502 master-0 kubenswrapper[26425]: I0217 15:33:31.628428 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.628502 master-0 kubenswrapper[26425]: I0217 15:33:31.628483 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0b0014ea-f94d-4153-be2f-7cac6a262f6f-config-out\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.628995 master-0 kubenswrapper[26425]: I0217 15:33:31.628893 
26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.629974 master-0 kubenswrapper[26425]: I0217 15:33:31.629916 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b0014ea-f94d-4153-be2f-7cac6a262f6f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.632345 master-0 kubenswrapper[26425]: I0217 15:33:31.632305 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0b0014ea-f94d-4153-be2f-7cac6a262f6f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.633694 master-0 kubenswrapper[26425]: I0217 15:33:31.633651 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/0b0014ea-f94d-4153-be2f-7cac6a262f6f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.634009 master-0 kubenswrapper[26425]: I0217 15:33:31.633968 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 
15:33:31.634107 master-0 kubenswrapper[26425]: I0217 15:33:31.634088 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.634270 master-0 kubenswrapper[26425]: I0217 15:33:31.634073 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.635818 master-0 kubenswrapper[26425]: I0217 15:33:31.635744 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-config-volume\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.635976 master-0 kubenswrapper[26425]: I0217 15:33:31.635929 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.637993 master-0 kubenswrapper[26425]: I0217 15:33:31.637919 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0b0014ea-f94d-4153-be2f-7cac6a262f6f-config-out\") pod \"alertmanager-main-0\" (UID: 
\"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.637993 master-0 kubenswrapper[26425]: I0217 15:33:31.637963 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0b0014ea-f94d-4153-be2f-7cac6a262f6f-web-config\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.645183 master-0 kubenswrapper[26425]: I0217 15:33:31.645119 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0b0014ea-f94d-4153-be2f-7cac6a262f6f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.654915 master-0 kubenswrapper[26425]: I0217 15:33:31.654822 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr5hl\" (UniqueName: \"kubernetes.io/projected/0b0014ea-f94d-4153-be2f-7cac6a262f6f-kube-api-access-nr5hl\") pod \"alertmanager-main-0\" (UID: \"0b0014ea-f94d-4153-be2f-7cac6a262f6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:31.718560 master-0 kubenswrapper[26425]: I0217 15:33:31.716999 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 17 15:33:32.121320 master-0 kubenswrapper[26425]: I0217 15:33:32.121242 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 15:33:32.130528 master-0 kubenswrapper[26425]: W0217 15:33:32.130432 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b0014ea_f94d_4153_be2f_7cac6a262f6f.slice/crio-9cae375d8f3763264c1c3641a165b01406f971b8f8ee71316af7c70780dbfc0c WatchSource:0}: Error finding container 9cae375d8f3763264c1c3641a165b01406f971b8f8ee71316af7c70780dbfc0c: Status 404 returned error can't find the container with id 9cae375d8f3763264c1c3641a165b01406f971b8f8ee71316af7c70780dbfc0c Feb 17 15:33:32.196760 master-0 kubenswrapper[26425]: I0217 15:33:32.196706 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b0014ea-f94d-4153-be2f-7cac6a262f6f","Type":"ContainerStarted","Data":"9cae375d8f3763264c1c3641a165b01406f971b8f8ee71316af7c70780dbfc0c"} Feb 17 15:33:32.402536 master-0 kubenswrapper[26425]: I0217 15:33:32.402415 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1115aa66-7b5c-4863-aa91-b28baff7e922" path="/var/lib/kubelet/pods/1115aa66-7b5c-4863-aa91-b28baff7e922/volumes" Feb 17 15:33:33.207295 master-0 kubenswrapper[26425]: I0217 15:33:33.207239 26425 generic.go:334] "Generic (PLEG): container finished" podID="0b0014ea-f94d-4153-be2f-7cac6a262f6f" containerID="30fa4a7b6154cfe6ea20f7de62d9983f1cd86a2fb287f9aabf7c596737a8e53f" exitCode=0 Feb 17 15:33:33.207295 master-0 kubenswrapper[26425]: I0217 15:33:33.207289 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b0014ea-f94d-4153-be2f-7cac6a262f6f","Type":"ContainerDied","Data":"30fa4a7b6154cfe6ea20f7de62d9983f1cd86a2fb287f9aabf7c596737a8e53f"} 
Feb 17 15:33:34.219024 master-0 kubenswrapper[26425]: I0217 15:33:34.218971 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b0014ea-f94d-4153-be2f-7cac6a262f6f","Type":"ContainerStarted","Data":"79df9cb261a9e54a858ca7d6908cf7aa0354dbbca09b11f777f5126de4e349d7"} Feb 17 15:33:34.219024 master-0 kubenswrapper[26425]: I0217 15:33:34.219021 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b0014ea-f94d-4153-be2f-7cac6a262f6f","Type":"ContainerStarted","Data":"12c78fdc4afb3625a4e12e1bbd1f864621f18e9e98002eeb08c6a31c674ca199"} Feb 17 15:33:34.219024 master-0 kubenswrapper[26425]: I0217 15:33:34.219035 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b0014ea-f94d-4153-be2f-7cac6a262f6f","Type":"ContainerStarted","Data":"eb383d4696be202f62dc16e3dd11981d8c16aaa67f842f41b39f2e35d161fd3c"} Feb 17 15:33:34.219723 master-0 kubenswrapper[26425]: I0217 15:33:34.219048 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b0014ea-f94d-4153-be2f-7cac6a262f6f","Type":"ContainerStarted","Data":"03b55da5f8082edb7a8e017e0a3e938e777c97a89ab41b276b39015a66f2aa64"} Feb 17 15:33:34.219723 master-0 kubenswrapper[26425]: I0217 15:33:34.219060 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b0014ea-f94d-4153-be2f-7cac6a262f6f","Type":"ContainerStarted","Data":"ee0324321df5eb86962ade08daa520a781282cc7d99933d3f8f42f3069cb2eb7"} Feb 17 15:33:34.219723 master-0 kubenswrapper[26425]: I0217 15:33:34.219072 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"0b0014ea-f94d-4153-be2f-7cac6a262f6f","Type":"ContainerStarted","Data":"7bdad5314c0882fbb229dd8929e3c5cdbfc287223116ca121236416c8585e7dd"} Feb 17 
15:33:34.258404 master-0 kubenswrapper[26425]: I0217 15:33:34.258321 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.258301257 podStartE2EDuration="3.258301257s" podCreationTimestamp="2026-02-17 15:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:33:34.252759724 +0000 UTC m=+1076.144483572" watchObservedRunningTime="2026-02-17 15:33:34.258301257 +0000 UTC m=+1076.150025085" Feb 17 15:33:34.286820 master-0 kubenswrapper[26425]: I0217 15:33:34.286772 26425 patch_prober.go:28] interesting pod/console-86d4dfb9dd-rz6cj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 17 15:33:34.287053 master-0 kubenswrapper[26425]: I0217 15:33:34.286831 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 17 15:33:34.967084 master-0 kubenswrapper[26425]: I0217 15:33:34.967002 26425 patch_prober.go:28] interesting pod/console-98f66b5dc-p2gxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 17 15:33:34.967084 master-0 kubenswrapper[26425]: I0217 15:33:34.967066 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: 
connection refused" Feb 17 15:33:38.814181 master-0 kubenswrapper[26425]: I0217 15:33:38.812329 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"] Feb 17 15:33:39.911938 master-0 kubenswrapper[26425]: I0217 15:33:39.911862 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-98f66b5dc-p2gxf"] Feb 17 15:33:39.987196 master-0 kubenswrapper[26425]: I0217 15:33:39.987126 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-55495f9f9c-p58l5"] Feb 17 15:33:39.988135 master-0 kubenswrapper[26425]: I0217 15:33:39.988097 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55495f9f9c-p58l5" Feb 17 15:33:40.002048 master-0 kubenswrapper[26425]: I0217 15:33:40.002003 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55495f9f9c-p58l5"] Feb 17 15:33:40.081977 master-0 kubenswrapper[26425]: I0217 15:33:40.081894 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t8cs\" (UniqueName: \"kubernetes.io/projected/25188d19-3aa1-4346-8547-d571600db2f6-kube-api-access-2t8cs\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5" Feb 17 15:33:40.082225 master-0 kubenswrapper[26425]: I0217 15:33:40.082014 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-trusted-ca-bundle\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5" Feb 17 15:33:40.082225 master-0 kubenswrapper[26425]: I0217 15:33:40.082047 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25188d19-3aa1-4346-8547-d571600db2f6-console-oauth-config\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.082225 master-0 kubenswrapper[26425]: I0217 15:33:40.082090 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25188d19-3aa1-4346-8547-d571600db2f6-console-serving-cert\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.082225 master-0 kubenswrapper[26425]: I0217 15:33:40.082116 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-service-ca\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.082225 master-0 kubenswrapper[26425]: I0217 15:33:40.082154 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-console-config\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.082225 master-0 kubenswrapper[26425]: I0217 15:33:40.082190 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-oauth-serving-cert\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.183560 master-0 kubenswrapper[26425]: I0217 15:33:40.183393 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t8cs\" (UniqueName: \"kubernetes.io/projected/25188d19-3aa1-4346-8547-d571600db2f6-kube-api-access-2t8cs\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.183560 master-0 kubenswrapper[26425]: I0217 15:33:40.183541 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-trusted-ca-bundle\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.183822 master-0 kubenswrapper[26425]: I0217 15:33:40.183593 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25188d19-3aa1-4346-8547-d571600db2f6-console-oauth-config\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.183822 master-0 kubenswrapper[26425]: I0217 15:33:40.183627 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25188d19-3aa1-4346-8547-d571600db2f6-console-serving-cert\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.183822 master-0 kubenswrapper[26425]: I0217 15:33:40.183662 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-service-ca\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.183822 master-0 kubenswrapper[26425]: I0217 15:33:40.183693 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-console-config\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.183822 master-0 kubenswrapper[26425]: I0217 15:33:40.183709 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-oauth-serving-cert\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.185207 master-0 kubenswrapper[26425]: I0217 15:33:40.185171 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-oauth-serving-cert\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.185302 master-0 kubenswrapper[26425]: I0217 15:33:40.185173 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-console-config\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.185361 master-0 kubenswrapper[26425]: I0217 15:33:40.185296 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-service-ca\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.186112 master-0 kubenswrapper[26425]: I0217 15:33:40.186070 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-trusted-ca-bundle\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.187629 master-0 kubenswrapper[26425]: I0217 15:33:40.187578 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25188d19-3aa1-4346-8547-d571600db2f6-console-oauth-config\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.187789 master-0 kubenswrapper[26425]: I0217 15:33:40.187753 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25188d19-3aa1-4346-8547-d571600db2f6-console-serving-cert\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.210486 master-0 kubenswrapper[26425]: I0217 15:33:40.209361 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t8cs\" (UniqueName: \"kubernetes.io/projected/25188d19-3aa1-4346-8547-d571600db2f6-kube-api-access-2t8cs\") pod \"console-55495f9f9c-p58l5\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.305072 master-0 kubenswrapper[26425]: I0217 15:33:40.305009 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:40.754810 master-0 kubenswrapper[26425]: I0217 15:33:40.752420 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55495f9f9c-p58l5"]
Feb 17 15:33:40.760192 master-0 kubenswrapper[26425]: W0217 15:33:40.760150 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25188d19_3aa1_4346_8547_d571600db2f6.slice/crio-c919f83e99626c37e5d712791608a69f58ea6e2cafe4520a3a46c722951734b6 WatchSource:0}: Error finding container c919f83e99626c37e5d712791608a69f58ea6e2cafe4520a3a46c722951734b6: Status 404 returned error can't find the container with id c919f83e99626c37e5d712791608a69f58ea6e2cafe4520a3a46c722951734b6
Feb 17 15:33:41.275582 master-0 kubenswrapper[26425]: I0217 15:33:41.275495 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55495f9f9c-p58l5" event={"ID":"25188d19-3aa1-4346-8547-d571600db2f6","Type":"ContainerStarted","Data":"62bc0a47ef7fb54261a0ebfba7d1d86c84145d8edec6583defa98ae636c4a32e"}
Feb 17 15:33:41.275582 master-0 kubenswrapper[26425]: I0217 15:33:41.275550 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55495f9f9c-p58l5" event={"ID":"25188d19-3aa1-4346-8547-d571600db2f6","Type":"ContainerStarted","Data":"c919f83e99626c37e5d712791608a69f58ea6e2cafe4520a3a46c722951734b6"}
Feb 17 15:33:41.310995 master-0 kubenswrapper[26425]: I0217 15:33:41.310900 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-55495f9f9c-p58l5" podStartSLOduration=2.310884045 podStartE2EDuration="2.310884045s" podCreationTimestamp="2026-02-17 15:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:33:41.306134221 +0000 UTC m=+1083.197858059" watchObservedRunningTime="2026-02-17 15:33:41.310884045 +0000 UTC m=+1083.202607863"
Feb 17 15:33:43.839709 master-0 kubenswrapper[26425]: I0217 15:33:43.839207 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 17 15:33:43.839709 master-0 kubenswrapper[26425]: I0217 15:33:43.839645 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="prometheus" containerID="cri-o://4447ceb23c1d4facb08760700abd426c411bbf6b4811632582d89ef957716e66" gracePeriod=600
Feb 17 15:33:43.842524 master-0 kubenswrapper[26425]: I0217 15:33:43.839708 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="kube-rbac-proxy" containerID="cri-o://c17e6e0ffb2100550235ef51822ac385fadd80df618190dad159ce0d25c6aeda" gracePeriod=600
Feb 17 15:33:43.842524 master-0 kubenswrapper[26425]: I0217 15:33:43.839763 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="kube-rbac-proxy-web" containerID="cri-o://30845f09794de19ccb491a056c81a6e3440a61b00911226c4004f95138579471" gracePeriod=600
Feb 17 15:33:43.842524 master-0 kubenswrapper[26425]: I0217 15:33:43.839773 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="config-reloader" containerID="cri-o://5b880952e43c162fdf7249d632e1b7db55215a5ce8dea0be9d7f9249af484e1b" gracePeriod=600
Feb 17 15:33:43.842524 master-0 kubenswrapper[26425]: I0217 15:33:43.839912 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="kube-rbac-proxy-thanos" containerID="cri-o://3bef16d6a5c7c4c3b645d3c355aa1a41faba5d711790e01525694cbdeb738180" gracePeriod=600
Feb 17 15:33:43.842524 master-0 kubenswrapper[26425]: I0217 15:33:43.839714 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="thanos-sidecar" containerID="cri-o://755bcfc2451098b86204efb1064608fc839aaba5498c364378fe3e4492975625" gracePeriod=600
Feb 17 15:33:43.859060 master-0 kubenswrapper[26425]: I0217 15:33:43.858763 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="prometheus" probeResult="failure" output=<
Feb 17 15:33:43.859060 master-0 kubenswrapper[26425]: % Total % Received % Xferd Average Speed Time Time Time Current
Feb 17 15:33:43.859060 master-0 kubenswrapper[26425]: Dload Upload Total Spent Left Speed
Feb 17 15:33:43.859060 master-0 kubenswrapper[26425]: [166B blob data]
Feb 17 15:33:43.859060 master-0 kubenswrapper[26425]: curl: (22) The requested URL returned error: 503
Feb 17 15:33:43.859060 master-0 kubenswrapper[26425]: >
Feb 17 15:33:43.872106 master-0 kubenswrapper[26425]: I0217 15:33:43.871716 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-86d4dfb9dd-rz6cj"]
Feb 17 15:33:43.970420 master-0 kubenswrapper[26425]: I0217 15:33:43.970369 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6f45cc898f-z9tb2"]
Feb 17 15:33:43.971783 master-0 kubenswrapper[26425]: I0217 15:33:43.971732 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:43.984543 master-0 kubenswrapper[26425]: I0217 15:33:43.982962 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f45cc898f-z9tb2"]
Feb 17 15:33:44.171315 master-0 kubenswrapper[26425]: I0217 15:33:44.171262 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-console-config\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.171409 master-0 kubenswrapper[26425]: I0217 15:33:44.171370 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-oauth-serving-cert\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.171446 master-0 kubenswrapper[26425]: I0217 15:33:44.171429 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a38fb686-debe-482b-ae85-3172fd731fba-console-oauth-config\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.171557 master-0 kubenswrapper[26425]: I0217 15:33:44.171474 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-service-ca\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.171557 master-0 kubenswrapper[26425]: I0217 15:33:44.171523 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a38fb686-debe-482b-ae85-3172fd731fba-console-serving-cert\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.171663 master-0 kubenswrapper[26425]: I0217 15:33:44.171574 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54npd\" (UniqueName: \"kubernetes.io/projected/a38fb686-debe-482b-ae85-3172fd731fba-kube-api-access-54npd\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.171663 master-0 kubenswrapper[26425]: I0217 15:33:44.171615 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-trusted-ca-bundle\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.274702 master-0 kubenswrapper[26425]: I0217 15:33:44.274031 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54npd\" (UniqueName: \"kubernetes.io/projected/a38fb686-debe-482b-ae85-3172fd731fba-kube-api-access-54npd\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.274702 master-0 kubenswrapper[26425]: I0217 15:33:44.274090 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-trusted-ca-bundle\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.274702 master-0 kubenswrapper[26425]: I0217 15:33:44.274117 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-console-config\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.274702 master-0 kubenswrapper[26425]: I0217 15:33:44.274162 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-oauth-serving-cert\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.274702 master-0 kubenswrapper[26425]: I0217 15:33:44.274203 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a38fb686-debe-482b-ae85-3172fd731fba-console-oauth-config\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.274702 master-0 kubenswrapper[26425]: I0217 15:33:44.274219 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-service-ca\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.274702 master-0 kubenswrapper[26425]: I0217 15:33:44.274252 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a38fb686-debe-482b-ae85-3172fd731fba-console-serving-cert\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.278089 master-0 kubenswrapper[26425]: I0217 15:33:44.275674 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-console-config\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.278089 master-0 kubenswrapper[26425]: I0217 15:33:44.276803 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-trusted-ca-bundle\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.278089 master-0 kubenswrapper[26425]: I0217 15:33:44.277328 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-oauth-serving-cert\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.278494 master-0 kubenswrapper[26425]: I0217 15:33:44.278444 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-service-ca\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.286271 master-0 kubenswrapper[26425]: I0217 15:33:44.284291 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a38fb686-debe-482b-ae85-3172fd731fba-console-oauth-config\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.286922 master-0 kubenswrapper[26425]: I0217 15:33:44.286854 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a38fb686-debe-482b-ae85-3172fd731fba-console-serving-cert\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.297104 master-0 kubenswrapper[26425]: I0217 15:33:44.295906 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54npd\" (UniqueName: \"kubernetes.io/projected/a38fb686-debe-482b-ae85-3172fd731fba-kube-api-access-54npd\") pod \"console-6f45cc898f-z9tb2\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") " pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.310703 master-0 kubenswrapper[26425]: I0217 15:33:44.310658 26425 generic.go:334] "Generic (PLEG): container finished" podID="7284bcca-864c-40df-b7dc-9aecf470697a" containerID="3bef16d6a5c7c4c3b645d3c355aa1a41faba5d711790e01525694cbdeb738180" exitCode=0
Feb 17 15:33:44.310703 master-0 kubenswrapper[26425]: I0217 15:33:44.310690 26425 generic.go:334] "Generic (PLEG): container finished" podID="7284bcca-864c-40df-b7dc-9aecf470697a" containerID="c17e6e0ffb2100550235ef51822ac385fadd80df618190dad159ce0d25c6aeda" exitCode=0
Feb 17 15:33:44.310703 master-0 kubenswrapper[26425]: I0217 15:33:44.310700 26425 generic.go:334] "Generic (PLEG): container finished" podID="7284bcca-864c-40df-b7dc-9aecf470697a" containerID="30845f09794de19ccb491a056c81a6e3440a61b00911226c4004f95138579471" exitCode=0
Feb 17 15:33:44.310703 master-0 kubenswrapper[26425]: I0217 15:33:44.310709 26425 generic.go:334] "Generic (PLEG): container finished" podID="7284bcca-864c-40df-b7dc-9aecf470697a" containerID="755bcfc2451098b86204efb1064608fc839aaba5498c364378fe3e4492975625" exitCode=0
Feb 17 15:33:44.310703 master-0 kubenswrapper[26425]: I0217 15:33:44.310715 26425 generic.go:334] "Generic (PLEG): container finished" podID="7284bcca-864c-40df-b7dc-9aecf470697a" containerID="5b880952e43c162fdf7249d632e1b7db55215a5ce8dea0be9d7f9249af484e1b" exitCode=0
Feb 17 15:33:44.311172 master-0 kubenswrapper[26425]: I0217 15:33:44.310723 26425 generic.go:334] "Generic (PLEG): container finished" podID="7284bcca-864c-40df-b7dc-9aecf470697a" containerID="4447ceb23c1d4facb08760700abd426c411bbf6b4811632582d89ef957716e66" exitCode=0
Feb 17 15:33:44.311172 master-0 kubenswrapper[26425]: I0217 15:33:44.310742 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerDied","Data":"3bef16d6a5c7c4c3b645d3c355aa1a41faba5d711790e01525694cbdeb738180"}
Feb 17 15:33:44.311172 master-0 kubenswrapper[26425]: I0217 15:33:44.310831 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerDied","Data":"c17e6e0ffb2100550235ef51822ac385fadd80df618190dad159ce0d25c6aeda"}
Feb 17 15:33:44.311172 master-0 kubenswrapper[26425]: I0217 15:33:44.310852 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerDied","Data":"30845f09794de19ccb491a056c81a6e3440a61b00911226c4004f95138579471"}
Feb 17 15:33:44.311172 master-0 kubenswrapper[26425]: I0217 15:33:44.310864 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerDied","Data":"755bcfc2451098b86204efb1064608fc839aaba5498c364378fe3e4492975625"}
Feb 17 15:33:44.311172 master-0 kubenswrapper[26425]: I0217 15:33:44.310873 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerDied","Data":"5b880952e43c162fdf7249d632e1b7db55215a5ce8dea0be9d7f9249af484e1b"}
Feb 17 15:33:44.311172 master-0 kubenswrapper[26425]: I0217 15:33:44.310884 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerDied","Data":"4447ceb23c1d4facb08760700abd426c411bbf6b4811632582d89ef957716e66"}
Feb 17 15:33:44.416619 master-0 kubenswrapper[26425]: I0217 15:33:44.416566 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:44.422212 master-0 kubenswrapper[26425]: I0217 15:33:44.422175 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:44.578133 master-0 kubenswrapper[26425]: I0217 15:33:44.578004 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vsgn\" (UniqueName: \"kubernetes.io/projected/7284bcca-864c-40df-b7dc-9aecf470697a-kube-api-access-8vsgn\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.578133 master-0 kubenswrapper[26425]: I0217 15:33:44.578085 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-metrics-client-ca\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.578278 master-0 kubenswrapper[26425]: I0217 15:33:44.578135 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-thanos-prometheus-http-client-file\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.578278 master-0 kubenswrapper[26425]: I0217 15:33:44.578173 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-metrics-client-certs\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.578278 master-0 kubenswrapper[26425]: I0217 15:33:44.578195 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-config\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.578278 master-0 kubenswrapper[26425]: I0217 15:33:44.578234 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-kube-rbac-proxy\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.578404 master-0 kubenswrapper[26425]: I0217 15:33:44.578291 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.578404 master-0 kubenswrapper[26425]: I0217 15:33:44.578326 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-k8s-db\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.578404 master-0 kubenswrapper[26425]: I0217 15:33:44.578355 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-grpc-tls\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.578404 master-0 kubenswrapper[26425]: I0217 15:33:44.578401 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7284bcca-864c-40df-b7dc-9aecf470697a-tls-assets\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.579962 master-0 kubenswrapper[26425]: I0217 15:33:44.578427 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-k8s-rulefiles-0\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.579962 master-0 kubenswrapper[26425]: I0217 15:33:44.578479 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7284bcca-864c-40df-b7dc-9aecf470697a-config-out\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.579962 master-0 kubenswrapper[26425]: I0217 15:33:44.578541 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.579962 master-0 kubenswrapper[26425]: I0217 15:33:44.578571 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.579962 master-0 kubenswrapper[26425]: I0217 15:33:44.578594 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-serving-certs-ca-bundle\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.579962 master-0 kubenswrapper[26425]: I0217 15:33:44.578618 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.579962 master-0 kubenswrapper[26425]: I0217 15:33:44.578645 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-kubelet-serving-ca-bundle\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.579962 master-0 kubenswrapper[26425]: I0217 15:33:44.578679 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-web-config\") pod \"7284bcca-864c-40df-b7dc-9aecf470697a\" (UID: \"7284bcca-864c-40df-b7dc-9aecf470697a\") "
Feb 17 15:33:44.579962 master-0 kubenswrapper[26425]: I0217 15:33:44.579250 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:33:44.579962 master-0 kubenswrapper[26425]: I0217 15:33:44.579296 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:33:44.580678 master-0 kubenswrapper[26425]: I0217 15:33:44.580633 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:33:44.581758 master-0 kubenswrapper[26425]: I0217 15:33:44.581720 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:33:44.582295 master-0 kubenswrapper[26425]: I0217 15:33:44.582247 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:33:44.583132 master-0 kubenswrapper[26425]: I0217 15:33:44.583103 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:33:44.583782 master-0 kubenswrapper[26425]: I0217 15:33:44.583719 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:33:44.584091 master-0 kubenswrapper[26425]: I0217 15:33:44.584046 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-config" (OuterVolumeSpecName: "config") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:33:44.584683 master-0 kubenswrapper[26425]: I0217 15:33:44.584626 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7284bcca-864c-40df-b7dc-9aecf470697a-kube-api-access-8vsgn" (OuterVolumeSpecName: "kube-api-access-8vsgn") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "kube-api-access-8vsgn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:33:44.587745 master-0 kubenswrapper[26425]: I0217 15:33:44.587641 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7284bcca-864c-40df-b7dc-9aecf470697a-config-out" (OuterVolumeSpecName: "config-out") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:33:44.587745 master-0 kubenswrapper[26425]: I0217 15:33:44.587660 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:33:44.587745 master-0 kubenswrapper[26425]: I0217 15:33:44.587712 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:33:44.587879 master-0 kubenswrapper[26425]: I0217 15:33:44.587737 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:33:44.587879 master-0 kubenswrapper[26425]: I0217 15:33:44.587754 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:33:44.590275 master-0 kubenswrapper[26425]: I0217 15:33:44.589062 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7284bcca-864c-40df-b7dc-9aecf470697a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:33:44.603842 master-0 kubenswrapper[26425]: I0217 15:33:44.603418 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "secret-grpc-tls".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:33:44.606116 master-0 kubenswrapper[26425]: I0217 15:33:44.592820 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:33:44.655757 master-0 kubenswrapper[26425]: I0217 15:33:44.655649 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-web-config" (OuterVolumeSpecName: "web-config") pod "7284bcca-864c-40df-b7dc-9aecf470697a" (UID: "7284bcca-864c-40df-b7dc-9aecf470697a"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.702930 26425 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.702979 26425 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.702995 26425 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-k8s-db\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703009 26425 reconciler_common.go:293] 
"Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-grpc-tls\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703023 26425 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7284bcca-864c-40df-b7dc-9aecf470697a-tls-assets\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703035 26425 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-k8s-rulefiles-0\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703046 26425 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7284bcca-864c-40df-b7dc-9aecf470697a-config-out\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703059 26425 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-prometheus-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703074 26425 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703087 26425 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-serving-certs-ca-bundle\") on node 
\"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703099 26425 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-prometheus-k8s-tls\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703112 26425 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703127 26425 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-web-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703139 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vsgn\" (UniqueName: \"kubernetes.io/projected/7284bcca-864c-40df-b7dc-9aecf470697a-kube-api-access-8vsgn\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703155 26425 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7284bcca-864c-40df-b7dc-9aecf470697a-configmap-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703167 26425 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-thanos-prometheus-http-client-file\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703179 26425 reconciler_common.go:293] "Volume 
detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.703587 master-0 kubenswrapper[26425]: I0217 15:33:44.703190 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7284bcca-864c-40df-b7dc-9aecf470697a-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:33:44.874708 master-0 kubenswrapper[26425]: W0217 15:33:44.874657 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda38fb686_debe_482b_ae85_3172fd731fba.slice/crio-6f0f6eff922253435e77eabb6457430057d3a48a34d9b1826838d4828bdeab04 WatchSource:0}: Error finding container 6f0f6eff922253435e77eabb6457430057d3a48a34d9b1826838d4828bdeab04: Status 404 returned error can't find the container with id 6f0f6eff922253435e77eabb6457430057d3a48a34d9b1826838d4828bdeab04 Feb 17 15:33:44.878884 master-0 kubenswrapper[26425]: I0217 15:33:44.878841 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f45cc898f-z9tb2"] Feb 17 15:33:45.319620 master-0 kubenswrapper[26425]: I0217 15:33:45.319485 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f45cc898f-z9tb2" event={"ID":"a38fb686-debe-482b-ae85-3172fd731fba","Type":"ContainerStarted","Data":"0474b8136e589e950dfdf97972c8099e9d1031f92d766013923cc056ae834926"} Feb 17 15:33:45.319620 master-0 kubenswrapper[26425]: I0217 15:33:45.319540 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f45cc898f-z9tb2" event={"ID":"a38fb686-debe-482b-ae85-3172fd731fba","Type":"ContainerStarted","Data":"6f0f6eff922253435e77eabb6457430057d3a48a34d9b1826838d4828bdeab04"} Feb 17 15:33:45.323780 master-0 kubenswrapper[26425]: I0217 15:33:45.323747 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7284bcca-864c-40df-b7dc-9aecf470697a","Type":"ContainerDied","Data":"5a690802a3c326e6a43b2e97f56648c461496fb55c540faca512821923c9d07c"} Feb 17 15:33:45.323921 master-0 kubenswrapper[26425]: I0217 15:33:45.323786 26425 scope.go:117] "RemoveContainer" containerID="3bef16d6a5c7c4c3b645d3c355aa1a41faba5d711790e01525694cbdeb738180" Feb 17 15:33:45.324155 master-0 kubenswrapper[26425]: I0217 15:33:45.324118 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.341670 master-0 kubenswrapper[26425]: I0217 15:33:45.341629 26425 scope.go:117] "RemoveContainer" containerID="c17e6e0ffb2100550235ef51822ac385fadd80df618190dad159ce0d25c6aeda" Feb 17 15:33:45.359340 master-0 kubenswrapper[26425]: I0217 15:33:45.358782 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6f45cc898f-z9tb2" podStartSLOduration=2.358753878 podStartE2EDuration="2.358753878s" podCreationTimestamp="2026-02-17 15:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:33:45.348067223 +0000 UTC m=+1087.239791041" watchObservedRunningTime="2026-02-17 15:33:45.358753878 +0000 UTC m=+1087.250477726" Feb 17 15:33:45.375361 master-0 kubenswrapper[26425]: I0217 15:33:45.375289 26425 scope.go:117] "RemoveContainer" containerID="30845f09794de19ccb491a056c81a6e3440a61b00911226c4004f95138579471" Feb 17 15:33:45.384537 master-0 kubenswrapper[26425]: I0217 15:33:45.384440 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 17 15:33:45.396778 master-0 kubenswrapper[26425]: I0217 15:33:45.396710 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 17 15:33:45.408063 master-0 kubenswrapper[26425]: I0217 15:33:45.408007 26425 
scope.go:117] "RemoveContainer" containerID="755bcfc2451098b86204efb1064608fc839aaba5498c364378fe3e4492975625" Feb 17 15:33:45.434690 master-0 kubenswrapper[26425]: I0217 15:33:45.434635 26425 scope.go:117] "RemoveContainer" containerID="5b880952e43c162fdf7249d632e1b7db55215a5ce8dea0be9d7f9249af484e1b" Feb 17 15:33:45.438849 master-0 kubenswrapper[26425]: I0217 15:33:45.438785 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 17 15:33:45.439223 master-0 kubenswrapper[26425]: E0217 15:33:45.439182 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="init-config-reloader" Feb 17 15:33:45.439223 master-0 kubenswrapper[26425]: I0217 15:33:45.439211 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="init-config-reloader" Feb 17 15:33:45.439381 master-0 kubenswrapper[26425]: E0217 15:33:45.439238 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="kube-rbac-proxy-web" Feb 17 15:33:45.439381 master-0 kubenswrapper[26425]: I0217 15:33:45.439251 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="kube-rbac-proxy-web" Feb 17 15:33:45.439381 master-0 kubenswrapper[26425]: E0217 15:33:45.439273 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="kube-rbac-proxy" Feb 17 15:33:45.439381 master-0 kubenswrapper[26425]: I0217 15:33:45.439287 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="kube-rbac-proxy" Feb 17 15:33:45.439381 master-0 kubenswrapper[26425]: E0217 15:33:45.439325 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="config-reloader" Feb 17 
15:33:45.439381 master-0 kubenswrapper[26425]: I0217 15:33:45.439339 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="config-reloader" Feb 17 15:33:45.439381 master-0 kubenswrapper[26425]: E0217 15:33:45.439361 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="kube-rbac-proxy-thanos" Feb 17 15:33:45.439381 master-0 kubenswrapper[26425]: I0217 15:33:45.439374 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="kube-rbac-proxy-thanos" Feb 17 15:33:45.439855 master-0 kubenswrapper[26425]: E0217 15:33:45.439409 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="thanos-sidecar" Feb 17 15:33:45.439855 master-0 kubenswrapper[26425]: I0217 15:33:45.439421 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="thanos-sidecar" Feb 17 15:33:45.439855 master-0 kubenswrapper[26425]: E0217 15:33:45.439440 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="prometheus" Feb 17 15:33:45.439855 master-0 kubenswrapper[26425]: I0217 15:33:45.439452 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="prometheus" Feb 17 15:33:45.439855 master-0 kubenswrapper[26425]: I0217 15:33:45.439687 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="config-reloader" Feb 17 15:33:45.439855 master-0 kubenswrapper[26425]: I0217 15:33:45.439728 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="kube-rbac-proxy" Feb 17 15:33:45.439855 master-0 kubenswrapper[26425]: I0217 15:33:45.439755 26425 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="thanos-sidecar" Feb 17 15:33:45.439855 master-0 kubenswrapper[26425]: I0217 15:33:45.439781 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="kube-rbac-proxy-thanos" Feb 17 15:33:45.439855 master-0 kubenswrapper[26425]: I0217 15:33:45.439822 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="kube-rbac-proxy-web" Feb 17 15:33:45.439855 master-0 kubenswrapper[26425]: I0217 15:33:45.439851 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" containerName="prometheus" Feb 17 15:33:45.443380 master-0 kubenswrapper[26425]: I0217 15:33:45.443331 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.446962 master-0 kubenswrapper[26425]: I0217 15:33:45.446912 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 17 15:33:45.448585 master-0 kubenswrapper[26425]: I0217 15:33:45.447666 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 17 15:33:45.448585 master-0 kubenswrapper[26425]: I0217 15:33:45.448391 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 17 15:33:45.454603 master-0 kubenswrapper[26425]: I0217 15:33:45.448619 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-2tsl8" Feb 17 15:33:45.454603 master-0 kubenswrapper[26425]: I0217 15:33:45.449357 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-7d1hat1ob2dke" Feb 17 15:33:45.454603 master-0 
kubenswrapper[26425]: I0217 15:33:45.449379 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 17 15:33:45.454603 master-0 kubenswrapper[26425]: I0217 15:33:45.449889 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 17 15:33:45.454603 master-0 kubenswrapper[26425]: I0217 15:33:45.450006 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 17 15:33:45.454603 master-0 kubenswrapper[26425]: I0217 15:33:45.450608 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 17 15:33:45.454603 master-0 kubenswrapper[26425]: I0217 15:33:45.450736 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 17 15:33:45.454603 master-0 kubenswrapper[26425]: I0217 15:33:45.450902 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 17 15:33:45.459138 master-0 kubenswrapper[26425]: I0217 15:33:45.459072 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 17 15:33:45.464666 master-0 kubenswrapper[26425]: I0217 15:33:45.464597 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 17 15:33:45.467100 master-0 kubenswrapper[26425]: I0217 15:33:45.467047 26425 scope.go:117] "RemoveContainer" containerID="4447ceb23c1d4facb08760700abd426c411bbf6b4811632582d89ef957716e66" Feb 17 15:33:45.468209 master-0 kubenswrapper[26425]: I0217 15:33:45.468150 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 17 15:33:45.494197 master-0 kubenswrapper[26425]: I0217 
15:33:45.494032 26425 scope.go:117] "RemoveContainer" containerID="658ac603e541dd9359651742b5c146fca91edeacc594e7f8c19fa744fb622d49" Feb 17 15:33:45.516858 master-0 kubenswrapper[26425]: I0217 15:33:45.516773 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.516858 master-0 kubenswrapper[26425]: I0217 15:33:45.516842 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.517150 master-0 kubenswrapper[26425]: I0217 15:33:45.516878 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csqcz\" (UniqueName: \"kubernetes.io/projected/9b2b4483-ad7f-4a09-957a-acd0164558b5-kube-api-access-csqcz\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.517150 master-0 kubenswrapper[26425]: I0217 15:33:45.516920 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-web-config\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.517150 master-0 kubenswrapper[26425]: I0217 15:33:45.516952 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9b2b4483-ad7f-4a09-957a-acd0164558b5-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.517150 master-0 kubenswrapper[26425]: I0217 15:33:45.516979 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.517150 master-0 kubenswrapper[26425]: I0217 15:33:45.517018 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.517150 master-0 kubenswrapper[26425]: I0217 15:33:45.517058 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.517150 master-0 kubenswrapper[26425]: I0217 15:33:45.517090 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b2b4483-ad7f-4a09-957a-acd0164558b5-config-out\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.517150 master-0 kubenswrapper[26425]: I0217 15:33:45.517129 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.517150 master-0 kubenswrapper[26425]: I0217 15:33:45.517152 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.518338 master-0 kubenswrapper[26425]: I0217 15:33:45.517175 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.518338 master-0 kubenswrapper[26425]: I0217 15:33:45.517201 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.518338 master-0 kubenswrapper[26425]: I0217 15:33:45.517222 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.518338 master-0 kubenswrapper[26425]: I0217 15:33:45.517250 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-config\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.518338 master-0 kubenswrapper[26425]: I0217 15:33:45.517273 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.518338 master-0 kubenswrapper[26425]: I0217 15:33:45.517295 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.518338 master-0 kubenswrapper[26425]: I0217 15:33:45.517365 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b2b4483-ad7f-4a09-957a-acd0164558b5-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.619038 master-0 kubenswrapper[26425]: I0217 15:33:45.618921 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.619377 master-0 kubenswrapper[26425]: I0217 15:33:45.619131 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b2b4483-ad7f-4a09-957a-acd0164558b5-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.619377 master-0 kubenswrapper[26425]: I0217 15:33:45.619222 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.619663 master-0 kubenswrapper[26425]: I0217 15:33:45.619499 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.619663 master-0 kubenswrapper[26425]: I0217 15:33:45.619577 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csqcz\" (UniqueName: \"kubernetes.io/projected/9b2b4483-ad7f-4a09-957a-acd0164558b5-kube-api-access-csqcz\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:33:45.619663 master-0 kubenswrapper[26425]: 
I0217 15:33:45.619632 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-web-config\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.619944 master-0 kubenswrapper[26425]: I0217 15:33:45.619714 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9b2b4483-ad7f-4a09-957a-acd0164558b5-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.619944 master-0 kubenswrapper[26425]: I0217 15:33:45.619748 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.619944 master-0 kubenswrapper[26425]: I0217 15:33:45.619797 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.619944 master-0 kubenswrapper[26425]: I0217 15:33:45.619839 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.619944 master-0 kubenswrapper[26425]: I0217 15:33:45.619883 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b2b4483-ad7f-4a09-957a-acd0164558b5-config-out\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.619944 master-0 kubenswrapper[26425]: I0217 15:33:45.619942 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.620422 master-0 kubenswrapper[26425]: I0217 15:33:45.619975 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.620422 master-0 kubenswrapper[26425]: I0217 15:33:45.619996 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.620422 master-0 kubenswrapper[26425]: I0217 15:33:45.620238 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.620422 master-0 kubenswrapper[26425]: I0217 15:33:45.620383 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9b2b4483-ad7f-4a09-957a-acd0164558b5-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.621412 master-0 kubenswrapper[26425]: I0217 15:33:45.621348 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.621754 master-0 kubenswrapper[26425]: I0217 15:33:45.621584 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.621754 master-0 kubenswrapper[26425]: I0217 15:33:45.621693 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-config\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.621754 master-0 kubenswrapper[26425]: I0217 15:33:45.621735 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.621754 master-0 kubenswrapper[26425]: I0217 15:33:45.621753 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.623140 master-0 kubenswrapper[26425]: I0217 15:33:45.623088 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.624046 master-0 kubenswrapper[26425]: I0217 15:33:45.623945 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.625490 master-0 kubenswrapper[26425]: I0217 15:33:45.625404 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.626430 master-0 kubenswrapper[26425]: I0217 15:33:45.626367 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.629746 master-0 kubenswrapper[26425]: I0217 15:33:45.627572 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.629957 master-0 kubenswrapper[26425]: I0217 15:33:45.629814 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.630452 master-0 kubenswrapper[26425]: I0217 15:33:45.630353 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.633025 master-0 kubenswrapper[26425]: I0217 15:33:45.631183 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b2b4483-ad7f-4a09-957a-acd0164558b5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.633025 master-0 kubenswrapper[26425]: I0217 15:33:45.631374 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-web-config\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.635084 master-0 kubenswrapper[26425]: I0217 15:33:45.635030 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.635424 master-0 kubenswrapper[26425]: I0217 15:33:45.635381 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-config\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.636575 master-0 kubenswrapper[26425]: I0217 15:33:45.636517 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b2b4483-ad7f-4a09-957a-acd0164558b5-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.637552 master-0 kubenswrapper[26425]: I0217 15:33:45.637388 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9b2b4483-ad7f-4a09-957a-acd0164558b5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.642819 master-0 kubenswrapper[26425]: I0217 15:33:45.642754 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b2b4483-ad7f-4a09-957a-acd0164558b5-config-out\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.654100 master-0 kubenswrapper[26425]: I0217 15:33:45.654030 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csqcz\" (UniqueName: \"kubernetes.io/projected/9b2b4483-ad7f-4a09-957a-acd0164558b5-kube-api-access-csqcz\") pod \"prometheus-k8s-0\" (UID: \"9b2b4483-ad7f-4a09-957a-acd0164558b5\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:45.773425 master-0 kubenswrapper[26425]: I0217 15:33:45.773329 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:46.280234 master-0 kubenswrapper[26425]: I0217 15:33:46.279846 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 17 15:33:46.290483 master-0 kubenswrapper[26425]: W0217 15:33:46.290393 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b2b4483_ad7f_4a09_957a_acd0164558b5.slice/crio-3df755701aba141227e4475b7b61886097de134b87024aa881a157d073a22c87 WatchSource:0}: Error finding container 3df755701aba141227e4475b7b61886097de134b87024aa881a157d073a22c87: Status 404 returned error can't find the container with id 3df755701aba141227e4475b7b61886097de134b87024aa881a157d073a22c87
Feb 17 15:33:46.333923 master-0 kubenswrapper[26425]: I0217 15:33:46.333871 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b2b4483-ad7f-4a09-957a-acd0164558b5","Type":"ContainerStarted","Data":"3df755701aba141227e4475b7b61886097de134b87024aa881a157d073a22c87"}
Feb 17 15:33:46.414149 master-0 kubenswrapper[26425]: I0217 15:33:46.414070 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7284bcca-864c-40df-b7dc-9aecf470697a" path="/var/lib/kubelet/pods/7284bcca-864c-40df-b7dc-9aecf470697a/volumes"
Feb 17 15:33:47.342955 master-0 kubenswrapper[26425]: I0217 15:33:47.342906 26425 generic.go:334] "Generic (PLEG): container finished" podID="9b2b4483-ad7f-4a09-957a-acd0164558b5" containerID="73dec2ca974982c638d163894df7de3734179af7f1be69f9beca5a32199b8e65" exitCode=0
Feb 17 15:33:47.342955 master-0 kubenswrapper[26425]: I0217 15:33:47.342955 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b2b4483-ad7f-4a09-957a-acd0164558b5","Type":"ContainerDied","Data":"73dec2ca974982c638d163894df7de3734179af7f1be69f9beca5a32199b8e65"}
Feb 17 15:33:48.354664 master-0 kubenswrapper[26425]: I0217 15:33:48.354591 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b2b4483-ad7f-4a09-957a-acd0164558b5","Type":"ContainerStarted","Data":"6d0809a69fb6f7eca23f28e0625a468fd051106d2e2e7e05601e24d9854d8170"}
Feb 17 15:33:48.354664 master-0 kubenswrapper[26425]: I0217 15:33:48.354667 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b2b4483-ad7f-4a09-957a-acd0164558b5","Type":"ContainerStarted","Data":"8fd4ac36fa14584565077a50f9a5fad54c49af049c22df0554526ef93c948c02"}
Feb 17 15:33:48.355187 master-0 kubenswrapper[26425]: I0217 15:33:48.354688 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b2b4483-ad7f-4a09-957a-acd0164558b5","Type":"ContainerStarted","Data":"347db9bdac8a71a5de1ff2662a1215f9c1af4fbca76551700035918c5601c501"}
Feb 17 15:33:48.355187 master-0 kubenswrapper[26425]: I0217 15:33:48.354703 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b2b4483-ad7f-4a09-957a-acd0164558b5","Type":"ContainerStarted","Data":"3ef97edcefbdea943e4aa3176a18d3e431d094ae0a844a0ee131f15324997e33"}
Feb 17 15:33:48.355187 master-0 kubenswrapper[26425]: I0217 15:33:48.354719 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b2b4483-ad7f-4a09-957a-acd0164558b5","Type":"ContainerStarted","Data":"ff035a89df0f605fc28a7b28ff889ca5ade3b0b884ad8aed5b551992001e1d06"}
Feb 17 15:33:48.355187 master-0 kubenswrapper[26425]: I0217 15:33:48.354734 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b2b4483-ad7f-4a09-957a-acd0164558b5","Type":"ContainerStarted","Data":"014c8820e2024e8a97f311499e8faeb589058f4605cc98dcee15cd69de347035"}
Feb 17 15:33:48.385790 master-0 kubenswrapper[26425]: I0217 15:33:48.385648 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.385627142 podStartE2EDuration="3.385627142s" podCreationTimestamp="2026-02-17 15:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:33:48.384601037 +0000 UTC m=+1090.276324885" watchObservedRunningTime="2026-02-17 15:33:48.385627142 +0000 UTC m=+1090.277350970"
Feb 17 15:33:50.305700 master-0 kubenswrapper[26425]: I0217 15:33:50.305625 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:50.305700 master-0 kubenswrapper[26425]: I0217 15:33:50.305697 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-55495f9f9c-p58l5"
Feb 17 15:33:50.307729 master-0 kubenswrapper[26425]: I0217 15:33:50.307675 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body=
Feb 17 15:33:50.307813 master-0 kubenswrapper[26425]: I0217 15:33:50.307744 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused"
Feb 17 15:33:50.774554 master-0 kubenswrapper[26425]: I0217 15:33:50.774501 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Feb 17 15:33:54.423384 master-0 kubenswrapper[26425]: I0217 15:33:54.423218 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:54.423384 master-0 kubenswrapper[26425]: I0217 15:33:54.423318 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:33:54.425421 master-0 kubenswrapper[26425]: I0217 15:33:54.425327 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body=
Feb 17 15:33:54.425561 master-0 kubenswrapper[26425]: I0217 15:33:54.425438 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused"
Feb 17 15:34:00.306679 master-0 kubenswrapper[26425]: I0217 15:34:00.306581 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body=
Feb 17 15:34:00.307256 master-0 kubenswrapper[26425]: I0217 15:34:00.306714 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused"
Feb 17 15:34:03.856075 master-0 kubenswrapper[26425]: I0217 15:34:03.855946 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" podUID="d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" containerName="oauth-openshift" containerID="cri-o://40669034a4e6fef198a64ebd6dc2dcf6b58a438a697eb36011f6072d490748a7" gracePeriod=15
Feb 17 15:34:04.406648 master-0 kubenswrapper[26425]: I0217 15:34:04.406527 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:34:04.423880 master-0 kubenswrapper[26425]: I0217 15:34:04.423821 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body=
Feb 17 15:34:04.424172 master-0 kubenswrapper[26425]: I0217 15:34:04.423910 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused"
Feb 17 15:34:04.466846 master-0 kubenswrapper[26425]: I0217 15:34:04.466799 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-56d478877c-mlr8b"]
Feb 17 15:34:04.467358 master-0 kubenswrapper[26425]: E0217 15:34:04.467339 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" containerName="oauth-openshift"
Feb 17 15:34:04.467448 master-0 kubenswrapper[26425]: I0217 15:34:04.467434 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" containerName="oauth-openshift"
Feb 17 15:34:04.467735 master-0 kubenswrapper[26425]: I0217 15:34:04.467718 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" containerName="oauth-openshift"
Feb 17 15:34:04.468649 master-0 kubenswrapper[26425]: I0217 15:34:04.468329 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b"
Feb 17 15:34:04.500339 master-0 kubenswrapper[26425]: I0217 15:34:04.500274 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-trusted-ca-bundle\") pod \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") "
Feb 17 15:34:04.500535 master-0 kubenswrapper[26425]: I0217 15:34:04.500408 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-serving-cert\") pod \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") "
Feb 17 15:34:04.500612 master-0 kubenswrapper[26425]: I0217 15:34:04.500552 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqb67\" (UniqueName: \"kubernetes.io/projected/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-kube-api-access-pqb67\") pod \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") "
Feb 17 15:34:04.500612 master-0 kubenswrapper[26425]: I0217 15:34:04.500600 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-audit-policies\") pod \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") "
Feb 17 15:34:04.500710 master-0 kubenswrapper[26425]: I0217 15:34:04.500668 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-session\") pod \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") "
Feb 17 15:34:04.500755 master-0 kubenswrapper[26425]: I0217 15:34:04.500729 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-audit-dir\") pod \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") "
Feb 17 15:34:04.500836 master-0 kubenswrapper[26425]: I0217 15:34:04.500785 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-provider-selection\") pod \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") "
Feb 17 15:34:04.500895 master-0 kubenswrapper[26425]: I0217 15:34:04.500839 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-error\") pod \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") "
Feb 17 15:34:04.500948 master-0 kubenswrapper[26425]: I0217 15:34:04.500925 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-router-certs\") pod \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") "
Feb 17 15:34:04.501033 master-0 kubenswrapper[26425]: I0217 15:34:04.500998 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-login\") pod \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") "
Feb 17 15:34:04.501096 master-0 kubenswrapper[26425]: I0217 15:34:04.501063 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-service-ca\") pod \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") "
Feb 17 15:34:04.501152 master-0 kubenswrapper[26425]: I0217 15:34:04.501106 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-ocp-branding-template\") pod \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") "
Feb 17 15:34:04.501198 master-0 kubenswrapper[26425]: I0217 15:34:04.501166 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-cliconfig\") pod \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\" (UID: \"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2\") "
Feb 17 15:34:04.501389 master-0 kubenswrapper[26425]: I0217 15:34:04.501302 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" (UID: "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:34:04.501930 master-0 kubenswrapper[26425]: I0217 15:34:04.501881 26425 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:34:04.502103 master-0 kubenswrapper[26425]: I0217 15:34:04.502068 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" (UID: "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:34:04.502229 master-0 kubenswrapper[26425]: I0217 15:34:04.502213 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" (UID: "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:34:04.504640 master-0 kubenswrapper[26425]: I0217 15:34:04.504599 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" (UID: "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:34:04.505568 master-0 kubenswrapper[26425]: I0217 15:34:04.505528 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" (UID: "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:34:04.505723 master-0 kubenswrapper[26425]: I0217 15:34:04.505685 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" (UID: "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:34:04.505953 master-0 kubenswrapper[26425]: I0217 15:34:04.505911 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" (UID: "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:34:04.507330 master-0 kubenswrapper[26425]: I0217 15:34:04.507266 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" (UID: "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:34:04.507897 master-0 kubenswrapper[26425]: I0217 15:34:04.507838 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" (UID: "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:34:04.507997 master-0 kubenswrapper[26425]: I0217 15:34:04.507921 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-kube-api-access-pqb67" (OuterVolumeSpecName: "kube-api-access-pqb67") pod "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" (UID: "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2"). InnerVolumeSpecName "kube-api-access-pqb67". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:34:04.508318 master-0 kubenswrapper[26425]: I0217 15:34:04.508267 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" (UID: "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:34:04.509292 master-0 kubenswrapper[26425]: I0217 15:34:04.509191 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" (UID: "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:34:04.510046 master-0 kubenswrapper[26425]: I0217 15:34:04.509982 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" (UID: "d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:34:04.531392 master-0 kubenswrapper[26425]: I0217 15:34:04.531318 26425 generic.go:334] "Generic (PLEG): container finished" podID="d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" containerID="40669034a4e6fef198a64ebd6dc2dcf6b58a438a697eb36011f6072d490748a7" exitCode=0
Feb 17 15:34:04.531568 master-0 kubenswrapper[26425]: I0217 15:34:04.531400 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" event={"ID":"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2","Type":"ContainerDied","Data":"40669034a4e6fef198a64ebd6dc2dcf6b58a438a697eb36011f6072d490748a7"}
Feb 17 15:34:04.531568 master-0 kubenswrapper[26425]: I0217 15:34:04.531500 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"
Feb 17 15:34:04.531568 master-0 kubenswrapper[26425]: I0217 15:34:04.531533 26425 scope.go:117] "RemoveContainer" containerID="40669034a4e6fef198a64ebd6dc2dcf6b58a438a697eb36011f6072d490748a7"
Feb 17 15:34:04.531698 master-0 kubenswrapper[26425]: I0217 15:34:04.531508 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9" event={"ID":"d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2","Type":"ContainerDied","Data":"771e3b7cf2128460346c20bf4c7f139d8f2d8f3c17bc2a42c92d90885ec8ced1"}
Feb 17 15:34:04.570323 master-0 kubenswrapper[26425]: I0217 15:34:04.570257 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56d478877c-mlr8b"]
Feb 17 15:34:04.602791 master-0 kubenswrapper[26425]: I0217 15:34:04.602687 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx6tj\" (UniqueName: \"kubernetes.io/projected/ab29aaa7-4556-4807-b400-0fab8d3e8196-kube-api-access-wx6tj\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b"
Feb 17 15:34:04.602791 master-0 kubenswrapper[26425]: I0217 15:34:04.602751 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab29aaa7-4556-4807-b400-0fab8d3e8196-audit-policies\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b"
Feb 17 15:34:04.602791 master-0 kubenswrapper[26425]: I0217 15:34:04.602780 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-user-template-error\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b"
Feb 17 15:34:04.603087 master-0 kubenswrapper[26425]: I0217 15:34:04.602820 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b"
Feb 17 15:34:04.603087 master-0 kubenswrapper[26425]: I0217 15:34:04.602853 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b"
Feb 17 15:34:04.603087 master-0 kubenswrapper[26425]: I0217 15:34:04.602878 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab29aaa7-4556-4807-b400-0fab8d3e8196-audit-dir\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b"
Feb 17 15:34:04.603087 master-0 kubenswrapper[26425]: I0217 15:34:04.602902 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-user-template-provider-selection\")
pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.603087 master-0 kubenswrapper[26425]: I0217 15:34:04.602928 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-service-ca\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.603087 master-0 kubenswrapper[26425]: I0217 15:34:04.602952 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-session\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.603087 master-0 kubenswrapper[26425]: I0217 15:34:04.602983 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.603087 master-0 kubenswrapper[26425]: I0217 15:34:04.603067 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " 
pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.603960 master-0 kubenswrapper[26425]: I0217 15:34:04.603098 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-user-template-login\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.603960 master-0 kubenswrapper[26425]: I0217 15:34:04.603128 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-router-certs\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.603960 master-0 kubenswrapper[26425]: I0217 15:34:04.603202 26425 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:04.603960 master-0 kubenswrapper[26425]: I0217 15:34:04.603219 26425 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:04.603960 master-0 kubenswrapper[26425]: I0217 15:34:04.603235 26425 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:04.603960 master-0 kubenswrapper[26425]: 
I0217 15:34:04.603250 26425 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:04.603960 master-0 kubenswrapper[26425]: I0217 15:34:04.603266 26425 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:04.603960 master-0 kubenswrapper[26425]: I0217 15:34:04.603279 26425 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:04.603960 master-0 kubenswrapper[26425]: I0217 15:34:04.603292 26425 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:04.603960 master-0 kubenswrapper[26425]: I0217 15:34:04.603305 26425 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:04.603960 master-0 kubenswrapper[26425]: I0217 15:34:04.603320 26425 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:04.603960 master-0 kubenswrapper[26425]: I0217 15:34:04.603333 26425 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:04.603960 master-0 kubenswrapper[26425]: I0217 15:34:04.603345 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqb67\" (UniqueName: \"kubernetes.io/projected/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-kube-api-access-pqb67\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:04.603960 master-0 kubenswrapper[26425]: I0217 15:34:04.603357 26425 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:04.618354 master-0 kubenswrapper[26425]: I0217 15:34:04.618305 26425 scope.go:117] "RemoveContainer" containerID="40669034a4e6fef198a64ebd6dc2dcf6b58a438a697eb36011f6072d490748a7" Feb 17 15:34:04.644273 master-0 kubenswrapper[26425]: E0217 15:34:04.644222 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40669034a4e6fef198a64ebd6dc2dcf6b58a438a697eb36011f6072d490748a7\": container with ID starting with 40669034a4e6fef198a64ebd6dc2dcf6b58a438a697eb36011f6072d490748a7 not found: ID does not exist" containerID="40669034a4e6fef198a64ebd6dc2dcf6b58a438a697eb36011f6072d490748a7" Feb 17 15:34:04.644568 master-0 kubenswrapper[26425]: I0217 15:34:04.644277 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40669034a4e6fef198a64ebd6dc2dcf6b58a438a697eb36011f6072d490748a7"} err="failed to get container status \"40669034a4e6fef198a64ebd6dc2dcf6b58a438a697eb36011f6072d490748a7\": rpc error: code = NotFound desc = could not find container \"40669034a4e6fef198a64ebd6dc2dcf6b58a438a697eb36011f6072d490748a7\": container with ID starting with 
40669034a4e6fef198a64ebd6dc2dcf6b58a438a697eb36011f6072d490748a7 not found: ID does not exist" Feb 17 15:34:04.672481 master-0 kubenswrapper[26425]: I0217 15:34:04.671574 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"] Feb 17 15:34:04.692936 master-0 kubenswrapper[26425]: I0217 15:34:04.683885 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9"] Feb 17 15:34:04.704590 master-0 kubenswrapper[26425]: I0217 15:34:04.704509 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-session\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.704590 master-0 kubenswrapper[26425]: I0217 15:34:04.704585 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.704942 master-0 kubenswrapper[26425]: I0217 15:34:04.704891 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.704995 master-0 kubenswrapper[26425]: I0217 15:34:04.704944 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-user-template-login\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.704995 master-0 kubenswrapper[26425]: I0217 15:34:04.704975 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-router-certs\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.705093 master-0 kubenswrapper[26425]: I0217 15:34:04.705006 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx6tj\" (UniqueName: \"kubernetes.io/projected/ab29aaa7-4556-4807-b400-0fab8d3e8196-kube-api-access-wx6tj\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.705286 master-0 kubenswrapper[26425]: I0217 15:34:04.705235 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab29aaa7-4556-4807-b400-0fab8d3e8196-audit-policies\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.705366 master-0 kubenswrapper[26425]: I0217 15:34:04.705324 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-user-template-error\") pod 
\"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.705473 master-0 kubenswrapper[26425]: I0217 15:34:04.705423 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.705541 master-0 kubenswrapper[26425]: I0217 15:34:04.705496 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.705541 master-0 kubenswrapper[26425]: I0217 15:34:04.705526 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab29aaa7-4556-4807-b400-0fab8d3e8196-audit-dir\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.705626 master-0 kubenswrapper[26425]: I0217 15:34:04.705545 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.705677 
master-0 kubenswrapper[26425]: I0217 15:34:04.705633 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab29aaa7-4556-4807-b400-0fab8d3e8196-audit-dir\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.705725 master-0 kubenswrapper[26425]: I0217 15:34:04.705685 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-service-ca\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.706199 master-0 kubenswrapper[26425]: I0217 15:34:04.706158 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab29aaa7-4556-4807-b400-0fab8d3e8196-audit-policies\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.706404 master-0 kubenswrapper[26425]: I0217 15:34:04.706373 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-service-ca\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.706543 master-0 kubenswrapper[26425]: I0217 15:34:04.706522 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-cliconfig\") 
pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.706899 master-0 kubenswrapper[26425]: I0217 15:34:04.706867 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.707731 master-0 kubenswrapper[26425]: I0217 15:34:04.707680 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-router-certs\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.707904 master-0 kubenswrapper[26425]: I0217 15:34:04.707872 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-user-template-login\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.708462 master-0 kubenswrapper[26425]: I0217 15:34:04.708386 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-user-template-error\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.708718 master-0 
kubenswrapper[26425]: I0217 15:34:04.708676 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.709565 master-0 kubenswrapper[26425]: I0217 15:34:04.709529 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-session\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.710428 master-0 kubenswrapper[26425]: I0217 15:34:04.710403 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.713795 master-0 kubenswrapper[26425]: I0217 15:34:04.713758 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab29aaa7-4556-4807-b400-0fab8d3e8196-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.719554 master-0 kubenswrapper[26425]: I0217 15:34:04.719528 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx6tj\" (UniqueName: 
\"kubernetes.io/projected/ab29aaa7-4556-4807-b400-0fab8d3e8196-kube-api-access-wx6tj\") pod \"oauth-openshift-56d478877c-mlr8b\" (UID: \"ab29aaa7-4556-4807-b400-0fab8d3e8196\") " pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.805027 master-0 kubenswrapper[26425]: I0217 15:34:04.804958 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:04.988578 master-0 kubenswrapper[26425]: I0217 15:34:04.986135 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-98f66b5dc-p2gxf" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" containerID="cri-o://bba1720f1fd557abbf59f2b01fe3bbf5a7ed240257dd77c34a9908113c6362c9" gracePeriod=15 Feb 17 15:34:05.257476 master-0 kubenswrapper[26425]: I0217 15:34:05.256083 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56d478877c-mlr8b"] Feb 17 15:34:05.425594 master-0 kubenswrapper[26425]: I0217 15:34:05.425480 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-98f66b5dc-p2gxf_2535f316-0ff0-4cca-9736-181406061b4e/console/0.log" Feb 17 15:34:05.425594 master-0 kubenswrapper[26425]: I0217 15:34:05.425542 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-98f66b5dc-p2gxf" Feb 17 15:34:05.515862 master-0 kubenswrapper[26425]: I0217 15:34:05.515808 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-service-ca\") pod \"2535f316-0ff0-4cca-9736-181406061b4e\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " Feb 17 15:34:05.515862 master-0 kubenswrapper[26425]: I0217 15:34:05.515850 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-console-config\") pod \"2535f316-0ff0-4cca-9736-181406061b4e\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " Feb 17 15:34:05.516115 master-0 kubenswrapper[26425]: I0217 15:34:05.515905 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn2mw\" (UniqueName: \"kubernetes.io/projected/2535f316-0ff0-4cca-9736-181406061b4e-kube-api-access-nn2mw\") pod \"2535f316-0ff0-4cca-9736-181406061b4e\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " Feb 17 15:34:05.516115 master-0 kubenswrapper[26425]: I0217 15:34:05.515956 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-oauth-serving-cert\") pod \"2535f316-0ff0-4cca-9736-181406061b4e\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " Feb 17 15:34:05.516115 master-0 kubenswrapper[26425]: I0217 15:34:05.516010 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2535f316-0ff0-4cca-9736-181406061b4e-console-oauth-config\") pod \"2535f316-0ff0-4cca-9736-181406061b4e\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " Feb 17 15:34:05.516115 master-0 kubenswrapper[26425]: I0217 
15:34:05.516047 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2535f316-0ff0-4cca-9736-181406061b4e-console-serving-cert\") pod \"2535f316-0ff0-4cca-9736-181406061b4e\" (UID: \"2535f316-0ff0-4cca-9736-181406061b4e\") " Feb 17 15:34:05.516445 master-0 kubenswrapper[26425]: I0217 15:34:05.516412 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-console-config" (OuterVolumeSpecName: "console-config") pod "2535f316-0ff0-4cca-9736-181406061b4e" (UID: "2535f316-0ff0-4cca-9736-181406061b4e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:34:05.516850 master-0 kubenswrapper[26425]: I0217 15:34:05.516784 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2535f316-0ff0-4cca-9736-181406061b4e" (UID: "2535f316-0ff0-4cca-9736-181406061b4e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:34:05.517091 master-0 kubenswrapper[26425]: I0217 15:34:05.517062 26425 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-console-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:05.517629 master-0 kubenswrapper[26425]: I0217 15:34:05.517595 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-service-ca" (OuterVolumeSpecName: "service-ca") pod "2535f316-0ff0-4cca-9736-181406061b4e" (UID: "2535f316-0ff0-4cca-9736-181406061b4e"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:34:05.518919 master-0 kubenswrapper[26425]: I0217 15:34:05.518865 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2535f316-0ff0-4cca-9736-181406061b4e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2535f316-0ff0-4cca-9736-181406061b4e" (UID: "2535f316-0ff0-4cca-9736-181406061b4e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:34:05.518996 master-0 kubenswrapper[26425]: I0217 15:34:05.518941 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2535f316-0ff0-4cca-9736-181406061b4e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2535f316-0ff0-4cca-9736-181406061b4e" (UID: "2535f316-0ff0-4cca-9736-181406061b4e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:34:05.519788 master-0 kubenswrapper[26425]: I0217 15:34:05.519736 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2535f316-0ff0-4cca-9736-181406061b4e-kube-api-access-nn2mw" (OuterVolumeSpecName: "kube-api-access-nn2mw") pod "2535f316-0ff0-4cca-9736-181406061b4e" (UID: "2535f316-0ff0-4cca-9736-181406061b4e"). InnerVolumeSpecName "kube-api-access-nn2mw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:34:05.547998 master-0 kubenswrapper[26425]: I0217 15:34:05.547932 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" event={"ID":"ab29aaa7-4556-4807-b400-0fab8d3e8196","Type":"ContainerStarted","Data":"df8a315bf2cbf7852b567f28d60247ad1633a8c9b3643c9519baee0e10a2fb4f"} Feb 17 15:34:05.547998 master-0 kubenswrapper[26425]: I0217 15:34:05.548001 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" event={"ID":"ab29aaa7-4556-4807-b400-0fab8d3e8196","Type":"ContainerStarted","Data":"2b82533d5391c9a75662e5bc6e0a927240f3f3b063eb392b3cf7c14d5b5b740e"} Feb 17 15:34:05.548574 master-0 kubenswrapper[26425]: I0217 15:34:05.548477 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:05.550677 master-0 kubenswrapper[26425]: I0217 15:34:05.550622 26425 patch_prober.go:28] interesting pod/oauth-openshift-56d478877c-mlr8b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.110:6443/healthz\": dial tcp 10.128.0.110:6443: connect: connection refused" start-of-body= Feb 17 15:34:05.550791 master-0 kubenswrapper[26425]: I0217 15:34:05.550689 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" podUID="ab29aaa7-4556-4807-b400-0fab8d3e8196" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.110:6443/healthz\": dial tcp 10.128.0.110:6443: connect: connection refused" Feb 17 15:34:05.551751 master-0 kubenswrapper[26425]: I0217 15:34:05.551717 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-98f66b5dc-p2gxf_2535f316-0ff0-4cca-9736-181406061b4e/console/0.log" Feb 17 15:34:05.551838 
master-0 kubenswrapper[26425]: I0217 15:34:05.551772 26425 generic.go:334] "Generic (PLEG): container finished" podID="2535f316-0ff0-4cca-9736-181406061b4e" containerID="bba1720f1fd557abbf59f2b01fe3bbf5a7ed240257dd77c34a9908113c6362c9" exitCode=2 Feb 17 15:34:05.551909 master-0 kubenswrapper[26425]: I0217 15:34:05.551839 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-98f66b5dc-p2gxf" event={"ID":"2535f316-0ff0-4cca-9736-181406061b4e","Type":"ContainerDied","Data":"bba1720f1fd557abbf59f2b01fe3bbf5a7ed240257dd77c34a9908113c6362c9"} Feb 17 15:34:05.551909 master-0 kubenswrapper[26425]: I0217 15:34:05.551871 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-98f66b5dc-p2gxf" event={"ID":"2535f316-0ff0-4cca-9736-181406061b4e","Type":"ContainerDied","Data":"b16741e26cc181e81d7e4f62aa08f75954b27a237d8c892a7e4f56cf6e6a1b53"} Feb 17 15:34:05.551909 master-0 kubenswrapper[26425]: I0217 15:34:05.551888 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-98f66b5dc-p2gxf" Feb 17 15:34:05.552054 master-0 kubenswrapper[26425]: I0217 15:34:05.551893 26425 scope.go:117] "RemoveContainer" containerID="bba1720f1fd557abbf59f2b01fe3bbf5a7ed240257dd77c34a9908113c6362c9" Feb 17 15:34:05.573407 master-0 kubenswrapper[26425]: I0217 15:34:05.573294 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" podStartSLOduration=26.573274782 podStartE2EDuration="26.573274782s" podCreationTimestamp="2026-02-17 15:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:34:05.570737781 +0000 UTC m=+1107.462461649" watchObservedRunningTime="2026-02-17 15:34:05.573274782 +0000 UTC m=+1107.464998600" Feb 17 15:34:05.576011 master-0 kubenswrapper[26425]: I0217 15:34:05.575588 26425 scope.go:117] "RemoveContainer" containerID="bba1720f1fd557abbf59f2b01fe3bbf5a7ed240257dd77c34a9908113c6362c9" Feb 17 15:34:05.576152 master-0 kubenswrapper[26425]: E0217 15:34:05.576092 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba1720f1fd557abbf59f2b01fe3bbf5a7ed240257dd77c34a9908113c6362c9\": container with ID starting with bba1720f1fd557abbf59f2b01fe3bbf5a7ed240257dd77c34a9908113c6362c9 not found: ID does not exist" containerID="bba1720f1fd557abbf59f2b01fe3bbf5a7ed240257dd77c34a9908113c6362c9" Feb 17 15:34:05.576602 master-0 kubenswrapper[26425]: I0217 15:34:05.576156 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba1720f1fd557abbf59f2b01fe3bbf5a7ed240257dd77c34a9908113c6362c9"} err="failed to get container status \"bba1720f1fd557abbf59f2b01fe3bbf5a7ed240257dd77c34a9908113c6362c9\": rpc error: code = NotFound desc = could not find container 
\"bba1720f1fd557abbf59f2b01fe3bbf5a7ed240257dd77c34a9908113c6362c9\": container with ID starting with bba1720f1fd557abbf59f2b01fe3bbf5a7ed240257dd77c34a9908113c6362c9 not found: ID does not exist" Feb 17 15:34:05.608280 master-0 kubenswrapper[26425]: I0217 15:34:05.608223 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-98f66b5dc-p2gxf"] Feb 17 15:34:05.613814 master-0 kubenswrapper[26425]: I0217 15:34:05.613766 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-98f66b5dc-p2gxf"] Feb 17 15:34:05.618676 master-0 kubenswrapper[26425]: I0217 15:34:05.618644 26425 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:05.618676 master-0 kubenswrapper[26425]: I0217 15:34:05.618667 26425 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2535f316-0ff0-4cca-9736-181406061b4e-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:05.618676 master-0 kubenswrapper[26425]: I0217 15:34:05.618678 26425 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2535f316-0ff0-4cca-9736-181406061b4e-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:05.618800 master-0 kubenswrapper[26425]: I0217 15:34:05.618688 26425 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2535f316-0ff0-4cca-9736-181406061b4e-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:05.618800 master-0 kubenswrapper[26425]: I0217 15:34:05.618698 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn2mw\" (UniqueName: \"kubernetes.io/projected/2535f316-0ff0-4cca-9736-181406061b4e-kube-api-access-nn2mw\") on node 
\"master-0\" DevicePath \"\"" Feb 17 15:34:06.411399 master-0 kubenswrapper[26425]: I0217 15:34:06.411077 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2535f316-0ff0-4cca-9736-181406061b4e" path="/var/lib/kubelet/pods/2535f316-0ff0-4cca-9736-181406061b4e/volumes" Feb 17 15:34:06.412170 master-0 kubenswrapper[26425]: I0217 15:34:06.411922 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2" path="/var/lib/kubelet/pods/d9ebaad8-2a28-4fe7-94bd-68a5f82a1fc2/volumes" Feb 17 15:34:06.579550 master-0 kubenswrapper[26425]: I0217 15:34:06.579419 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56d478877c-mlr8b" Feb 17 15:34:08.184491 master-0 kubenswrapper[26425]: I0217 15:34:08.184397 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Feb 17 15:34:08.185351 master-0 kubenswrapper[26425]: E0217 15:34:08.184773 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" Feb 17 15:34:08.185351 master-0 kubenswrapper[26425]: I0217 15:34:08.184791 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" Feb 17 15:34:08.185351 master-0 kubenswrapper[26425]: I0217 15:34:08.185181 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="2535f316-0ff0-4cca-9736-181406061b4e" containerName="console" Feb 17 15:34:08.185940 master-0 kubenswrapper[26425]: I0217 15:34:08.185890 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Feb 17 15:34:08.188717 master-0 kubenswrapper[26425]: I0217 15:34:08.188652 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 17 15:34:08.189035 master-0 kubenswrapper[26425]: I0217 15:34:08.188982 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-qt5n5" Feb 17 15:34:08.201614 master-0 kubenswrapper[26425]: I0217 15:34:08.201526 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Feb 17 15:34:08.275775 master-0 kubenswrapper[26425]: I0217 15:34:08.275667 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1e08089-c2b7-40db-91c7-9bec122b227e-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"a1e08089-c2b7-40db-91c7-9bec122b227e\") " pod="openshift-kube-scheduler/installer-6-master-0" Feb 17 15:34:08.275775 master-0 kubenswrapper[26425]: I0217 15:34:08.275758 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1e08089-c2b7-40db-91c7-9bec122b227e-kube-api-access\") pod \"installer-6-master-0\" (UID: \"a1e08089-c2b7-40db-91c7-9bec122b227e\") " pod="openshift-kube-scheduler/installer-6-master-0" Feb 17 15:34:08.276296 master-0 kubenswrapper[26425]: I0217 15:34:08.276111 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a1e08089-c2b7-40db-91c7-9bec122b227e-var-lock\") pod \"installer-6-master-0\" (UID: \"a1e08089-c2b7-40db-91c7-9bec122b227e\") " pod="openshift-kube-scheduler/installer-6-master-0" Feb 17 15:34:08.378514 master-0 kubenswrapper[26425]: I0217 15:34:08.378384 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1e08089-c2b7-40db-91c7-9bec122b227e-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"a1e08089-c2b7-40db-91c7-9bec122b227e\") " pod="openshift-kube-scheduler/installer-6-master-0" Feb 17 15:34:08.378979 master-0 kubenswrapper[26425]: I0217 15:34:08.378594 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1e08089-c2b7-40db-91c7-9bec122b227e-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"a1e08089-c2b7-40db-91c7-9bec122b227e\") " pod="openshift-kube-scheduler/installer-6-master-0" Feb 17 15:34:08.378979 master-0 kubenswrapper[26425]: I0217 15:34:08.378704 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1e08089-c2b7-40db-91c7-9bec122b227e-kube-api-access\") pod \"installer-6-master-0\" (UID: \"a1e08089-c2b7-40db-91c7-9bec122b227e\") " pod="openshift-kube-scheduler/installer-6-master-0" Feb 17 15:34:08.379155 master-0 kubenswrapper[26425]: I0217 15:34:08.379065 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a1e08089-c2b7-40db-91c7-9bec122b227e-var-lock\") pod \"installer-6-master-0\" (UID: \"a1e08089-c2b7-40db-91c7-9bec122b227e\") " pod="openshift-kube-scheduler/installer-6-master-0" Feb 17 15:34:08.379282 master-0 kubenswrapper[26425]: I0217 15:34:08.379252 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a1e08089-c2b7-40db-91c7-9bec122b227e-var-lock\") pod \"installer-6-master-0\" (UID: \"a1e08089-c2b7-40db-91c7-9bec122b227e\") " pod="openshift-kube-scheduler/installer-6-master-0" Feb 17 15:34:08.424915 master-0 kubenswrapper[26425]: I0217 15:34:08.424799 26425 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1e08089-c2b7-40db-91c7-9bec122b227e-kube-api-access\") pod \"installer-6-master-0\" (UID: \"a1e08089-c2b7-40db-91c7-9bec122b227e\") " pod="openshift-kube-scheduler/installer-6-master-0" Feb 17 15:34:08.531525 master-0 kubenswrapper[26425]: I0217 15:34:08.531373 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Feb 17 15:34:08.786094 master-0 kubenswrapper[26425]: I0217 15:34:08.785954 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:34:08.786276 master-0 kubenswrapper[26425]: E0217 15:34:08.786255 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:34:08.786341 master-0 kubenswrapper[26425]: E0217 15:34:08.786286 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:34:08.786386 master-0 kubenswrapper[26425]: E0217 15:34:08.786373 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:36:10.786350947 +0000 UTC m=+1232.678074775 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:34:08.840107 master-0 kubenswrapper[26425]: I0217 15:34:08.840050 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Feb 17 15:34:08.842168 master-0 kubenswrapper[26425]: I0217 15:34:08.842135 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 17 15:34:08.844235 master-0 kubenswrapper[26425]: I0217 15:34:08.844208 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-crrn4" Feb 17 15:34:08.850386 master-0 kubenswrapper[26425]: I0217 15:34:08.845652 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 15:34:08.850386 master-0 kubenswrapper[26425]: I0217 15:34:08.847971 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Feb 17 15:34:08.912532 master-0 kubenswrapper[26425]: I0217 15:34:08.909749 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-86d4dfb9dd-rz6cj" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" containerID="cri-o://6929c9664044cb06893d99bcea199e0e6eb611076c45dfcb4b0d70a905f76829" gracePeriod=15 Feb 17 15:34:08.989653 master-0 kubenswrapper[26425]: I0217 15:34:08.989577 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-kube-api-access\") pod 
\"installer-5-master-0\" (UID: \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 17 15:34:08.989653 master-0 kubenswrapper[26425]: I0217 15:34:08.989664 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-var-lock\") pod \"installer-5-master-0\" (UID: \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 17 15:34:08.990079 master-0 kubenswrapper[26425]: I0217 15:34:08.989695 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 17 15:34:09.012626 master-0 kubenswrapper[26425]: I0217 15:34:09.012565 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Feb 17 15:34:09.090623 master-0 kubenswrapper[26425]: I0217 15:34:09.090549 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 17 15:34:09.090918 master-0 kubenswrapper[26425]: I0217 15:34:09.090714 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-kube-api-access\") pod \"installer-5-master-0\" (UID: \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 17 
15:34:09.090918 master-0 kubenswrapper[26425]: I0217 15:34:09.090780 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-var-lock\") pod \"installer-5-master-0\" (UID: \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 17 15:34:09.090918 master-0 kubenswrapper[26425]: I0217 15:34:09.090876 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-var-lock\") pod \"installer-5-master-0\" (UID: \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 17 15:34:09.091258 master-0 kubenswrapper[26425]: I0217 15:34:09.090924 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 17 15:34:09.107549 master-0 kubenswrapper[26425]: I0217 15:34:09.106621 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-kube-api-access\") pod \"installer-5-master-0\" (UID: \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 17 15:34:09.175635 master-0 kubenswrapper[26425]: I0217 15:34:09.175509 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 17 15:34:09.338896 master-0 kubenswrapper[26425]: I0217 15:34:09.338843 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-86d4dfb9dd-rz6cj_90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f/console/0.log" Feb 17 15:34:09.339315 master-0 kubenswrapper[26425]: I0217 15:34:09.338911 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86d4dfb9dd-rz6cj" Feb 17 15:34:09.496624 master-0 kubenswrapper[26425]: I0217 15:34:09.496580 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-trusted-ca-bundle\") pod \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " Feb 17 15:34:09.497403 master-0 kubenswrapper[26425]: I0217 15:34:09.497387 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-serving-cert\") pod \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " Feb 17 15:34:09.497537 master-0 kubenswrapper[26425]: I0217 15:34:09.497521 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-oauth-serving-cert\") pod \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " Feb 17 15:34:09.497650 master-0 kubenswrapper[26425]: I0217 15:34:09.497636 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgj4r\" (UniqueName: \"kubernetes.io/projected/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-kube-api-access-xgj4r\") pod \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\" (UID: 
\"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " Feb 17 15:34:09.497757 master-0 kubenswrapper[26425]: I0217 15:34:09.497742 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-oauth-config\") pod \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " Feb 17 15:34:09.497870 master-0 kubenswrapper[26425]: I0217 15:34:09.497854 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-config\") pod \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " Feb 17 15:34:09.498001 master-0 kubenswrapper[26425]: I0217 15:34:09.497987 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-service-ca\") pod \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\" (UID: \"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f\") " Feb 17 15:34:09.499364 master-0 kubenswrapper[26425]: I0217 15:34:09.497320 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" (UID: "90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:34:09.500375 master-0 kubenswrapper[26425]: I0217 15:34:09.500261 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" (UID: "90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:34:09.500375 master-0 kubenswrapper[26425]: I0217 15:34:09.500273 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-service-ca" (OuterVolumeSpecName: "service-ca") pod "90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" (UID: "90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:34:09.500375 master-0 kubenswrapper[26425]: I0217 15:34:09.500331 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-config" (OuterVolumeSpecName: "console-config") pod "90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" (UID: "90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:34:09.502157 master-0 kubenswrapper[26425]: I0217 15:34:09.502126 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" (UID: "90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:34:09.502790 master-0 kubenswrapper[26425]: I0217 15:34:09.502569 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-kube-api-access-xgj4r" (OuterVolumeSpecName: "kube-api-access-xgj4r") pod "90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" (UID: "90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f"). InnerVolumeSpecName "kube-api-access-xgj4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:34:09.503115 master-0 kubenswrapper[26425]: I0217 15:34:09.503095 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" (UID: "90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:34:09.599920 master-0 kubenswrapper[26425]: I0217 15:34:09.599171 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"a1e08089-c2b7-40db-91c7-9bec122b227e","Type":"ContainerStarted","Data":"38692b28783da5faa3440729f9a9564a0e14f831b28788b7cc1c3bb0cf87edcb"} Feb 17 15:34:09.599920 master-0 kubenswrapper[26425]: I0217 15:34:09.599231 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"a1e08089-c2b7-40db-91c7-9bec122b227e","Type":"ContainerStarted","Data":"7007b4f00f07475f9d3ca30ce92bbcbb45c7137b38d528589218abfe7b28d698"} Feb 17 15:34:09.599920 master-0 kubenswrapper[26425]: I0217 15:34:09.599849 26425 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:09.599920 master-0 kubenswrapper[26425]: I0217 15:34:09.599895 26425 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:09.599920 master-0 kubenswrapper[26425]: I0217 15:34:09.599907 26425 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:09.599920 master-0 kubenswrapper[26425]: I0217 15:34:09.599917 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgj4r\" (UniqueName: \"kubernetes.io/projected/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-kube-api-access-xgj4r\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:09.600438 master-0 kubenswrapper[26425]: I0217 15:34:09.599931 26425 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:09.600438 master-0 kubenswrapper[26425]: I0217 15:34:09.599949 26425 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-console-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:09.600438 master-0 kubenswrapper[26425]: I0217 15:34:09.599964 26425 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:09.602008 master-0 kubenswrapper[26425]: I0217 15:34:09.601823 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-86d4dfb9dd-rz6cj_90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f/console/0.log" Feb 17 15:34:09.602008 master-0 kubenswrapper[26425]: I0217 15:34:09.601907 26425 generic.go:334] "Generic (PLEG): container finished" podID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerID="6929c9664044cb06893d99bcea199e0e6eb611076c45dfcb4b0d70a905f76829" exitCode=2 Feb 17 15:34:09.602008 master-0 kubenswrapper[26425]: I0217 15:34:09.601970 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86d4dfb9dd-rz6cj" 
event={"ID":"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f","Type":"ContainerDied","Data":"6929c9664044cb06893d99bcea199e0e6eb611076c45dfcb4b0d70a905f76829"} Feb 17 15:34:09.602008 master-0 kubenswrapper[26425]: I0217 15:34:09.601988 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86d4dfb9dd-rz6cj" Feb 17 15:34:09.602234 master-0 kubenswrapper[26425]: I0217 15:34:09.602023 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86d4dfb9dd-rz6cj" event={"ID":"90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f","Type":"ContainerDied","Data":"e6fe88c0b99e2c4c35d64c324497a6422afd688d4fa9aff82e8e04c1cbc8087b"} Feb 17 15:34:09.602234 master-0 kubenswrapper[26425]: I0217 15:34:09.602047 26425 scope.go:117] "RemoveContainer" containerID="6929c9664044cb06893d99bcea199e0e6eb611076c45dfcb4b0d70a905f76829" Feb 17 15:34:09.618322 master-0 kubenswrapper[26425]: I0217 15:34:09.618229 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-6-master-0" podStartSLOduration=1.618210895 podStartE2EDuration="1.618210895s" podCreationTimestamp="2026-02-17 15:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:34:09.616285368 +0000 UTC m=+1111.508009236" watchObservedRunningTime="2026-02-17 15:34:09.618210895 +0000 UTC m=+1111.509934713" Feb 17 15:34:09.624629 master-0 kubenswrapper[26425]: I0217 15:34:09.624585 26425 scope.go:117] "RemoveContainer" containerID="6929c9664044cb06893d99bcea199e0e6eb611076c45dfcb4b0d70a905f76829" Feb 17 15:34:09.625128 master-0 kubenswrapper[26425]: E0217 15:34:09.625084 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6929c9664044cb06893d99bcea199e0e6eb611076c45dfcb4b0d70a905f76829\": container with ID starting with 
6929c9664044cb06893d99bcea199e0e6eb611076c45dfcb4b0d70a905f76829 not found: ID does not exist" containerID="6929c9664044cb06893d99bcea199e0e6eb611076c45dfcb4b0d70a905f76829" Feb 17 15:34:09.625487 master-0 kubenswrapper[26425]: I0217 15:34:09.625130 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6929c9664044cb06893d99bcea199e0e6eb611076c45dfcb4b0d70a905f76829"} err="failed to get container status \"6929c9664044cb06893d99bcea199e0e6eb611076c45dfcb4b0d70a905f76829\": rpc error: code = NotFound desc = could not find container \"6929c9664044cb06893d99bcea199e0e6eb611076c45dfcb4b0d70a905f76829\": container with ID starting with 6929c9664044cb06893d99bcea199e0e6eb611076c45dfcb4b0d70a905f76829 not found: ID does not exist" Feb 17 15:34:09.638305 master-0 kubenswrapper[26425]: I0217 15:34:09.634943 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Feb 17 15:34:09.640685 master-0 kubenswrapper[26425]: W0217 15:34:09.640623 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3109bbc1_f0f3_4d9a_8438_61ebf59b402b.slice/crio-845b99796c5f85574a54e549d15a638e0c92173cde3cba13a0bc4b76837458a6 WatchSource:0}: Error finding container 845b99796c5f85574a54e549d15a638e0c92173cde3cba13a0bc4b76837458a6: Status 404 returned error can't find the container with id 845b99796c5f85574a54e549d15a638e0c92173cde3cba13a0bc4b76837458a6 Feb 17 15:34:09.650563 master-0 kubenswrapper[26425]: I0217 15:34:09.650488 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-86d4dfb9dd-rz6cj"] Feb 17 15:34:09.657197 master-0 kubenswrapper[26425]: I0217 15:34:09.656546 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-86d4dfb9dd-rz6cj"] Feb 17 15:34:10.306550 master-0 kubenswrapper[26425]: I0217 15:34:10.306304 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 
container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body= Feb 17 15:34:10.306550 master-0 kubenswrapper[26425]: I0217 15:34:10.306396 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" Feb 17 15:34:10.417098 master-0 kubenswrapper[26425]: I0217 15:34:10.417027 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" path="/var/lib/kubelet/pods/90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f/volumes" Feb 17 15:34:10.620239 master-0 kubenswrapper[26425]: I0217 15:34:10.620159 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"3109bbc1-f0f3-4d9a-8438-61ebf59b402b","Type":"ContainerStarted","Data":"46fb529ffb8fb29babb31d5b8fd8c50ccbb69d0ad39c3fc9027ef2dd0962d205"} Feb 17 15:34:10.620239 master-0 kubenswrapper[26425]: I0217 15:34:10.620232 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"3109bbc1-f0f3-4d9a-8438-61ebf59b402b","Type":"ContainerStarted","Data":"845b99796c5f85574a54e549d15a638e0c92173cde3cba13a0bc4b76837458a6"} Feb 17 15:34:10.652232 master-0 kubenswrapper[26425]: I0217 15:34:10.652093 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-0" podStartSLOduration=2.652067261 podStartE2EDuration="2.652067261s" podCreationTimestamp="2026-02-17 15:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 
15:34:10.644002648 +0000 UTC m=+1112.535726556" watchObservedRunningTime="2026-02-17 15:34:10.652067261 +0000 UTC m=+1112.543791119" Feb 17 15:34:14.423539 master-0 kubenswrapper[26425]: I0217 15:34:14.423436 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Feb 17 15:34:14.424658 master-0 kubenswrapper[26425]: I0217 15:34:14.423562 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Feb 17 15:34:17.212006 master-0 kubenswrapper[26425]: I0217 15:34:17.211914 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 17 15:34:17.212683 master-0 kubenswrapper[26425]: E0217 15:34:17.212350 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" Feb 17 15:34:17.212683 master-0 kubenswrapper[26425]: I0217 15:34:17.212372 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" Feb 17 15:34:17.212773 master-0 kubenswrapper[26425]: I0217 15:34:17.212733 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f2de1c-3fe7-4fd4-9f0e-7e1995b8ef7f" containerName="console" Feb 17 15:34:17.215042 master-0 kubenswrapper[26425]: I0217 15:34:17.214981 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 17 15:34:17.220767 master-0 kubenswrapper[26425]: I0217 15:34:17.220690 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-87grw" Feb 17 15:34:17.221259 master-0 kubenswrapper[26425]: I0217 15:34:17.221164 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 15:34:17.229635 master-0 kubenswrapper[26425]: I0217 15:34:17.228313 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 17 15:34:17.344330 master-0 kubenswrapper[26425]: I0217 15:34:17.344249 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ada525a-db93-45c8-bd0b-985245018f61-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"7ada525a-db93-45c8-bd0b-985245018f61\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 17 15:34:17.344611 master-0 kubenswrapper[26425]: I0217 15:34:17.344498 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ada525a-db93-45c8-bd0b-985245018f61-var-lock\") pod \"installer-5-master-0\" (UID: \"7ada525a-db93-45c8-bd0b-985245018f61\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 17 15:34:17.344611 master-0 kubenswrapper[26425]: I0217 15:34:17.344592 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ada525a-db93-45c8-bd0b-985245018f61-kube-api-access\") pod \"installer-5-master-0\" (UID: \"7ada525a-db93-45c8-bd0b-985245018f61\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 17 15:34:17.446195 master-0 kubenswrapper[26425]: I0217 15:34:17.446104 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ada525a-db93-45c8-bd0b-985245018f61-kube-api-access\") pod \"installer-5-master-0\" (UID: \"7ada525a-db93-45c8-bd0b-985245018f61\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 17 15:34:17.446195 master-0 kubenswrapper[26425]: I0217 15:34:17.446204 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ada525a-db93-45c8-bd0b-985245018f61-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"7ada525a-db93-45c8-bd0b-985245018f61\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 17 15:34:17.446692 master-0 kubenswrapper[26425]: I0217 15:34:17.446363 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ada525a-db93-45c8-bd0b-985245018f61-var-lock\") pod \"installer-5-master-0\" (UID: \"7ada525a-db93-45c8-bd0b-985245018f61\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 17 15:34:17.446692 master-0 kubenswrapper[26425]: I0217 15:34:17.446518 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ada525a-db93-45c8-bd0b-985245018f61-var-lock\") pod \"installer-5-master-0\" (UID: \"7ada525a-db93-45c8-bd0b-985245018f61\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 17 15:34:17.446927 master-0 kubenswrapper[26425]: I0217 15:34:17.446708 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ada525a-db93-45c8-bd0b-985245018f61-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"7ada525a-db93-45c8-bd0b-985245018f61\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 17 15:34:17.466894 master-0 kubenswrapper[26425]: I0217 15:34:17.466733 26425 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ada525a-db93-45c8-bd0b-985245018f61-kube-api-access\") pod \"installer-5-master-0\" (UID: \"7ada525a-db93-45c8-bd0b-985245018f61\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 17 15:34:17.543875 master-0 kubenswrapper[26425]: I0217 15:34:17.543783 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 17 15:34:18.080017 master-0 kubenswrapper[26425]: I0217 15:34:18.079959 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 17 15:34:18.087827 master-0 kubenswrapper[26425]: W0217 15:34:18.087767 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7ada525a_db93_45c8_bd0b_985245018f61.slice/crio-55a84038a82cbab0c55e1feb470b790a98a80f860721546bbeefe21b347e67cb WatchSource:0}: Error finding container 55a84038a82cbab0c55e1feb470b790a98a80f860721546bbeefe21b347e67cb: Status 404 returned error can't find the container with id 55a84038a82cbab0c55e1feb470b790a98a80f860721546bbeefe21b347e67cb Feb 17 15:34:18.735602 master-0 kubenswrapper[26425]: I0217 15:34:18.735320 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"7ada525a-db93-45c8-bd0b-985245018f61","Type":"ContainerStarted","Data":"ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61"} Feb 17 15:34:18.735602 master-0 kubenswrapper[26425]: I0217 15:34:18.735378 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"7ada525a-db93-45c8-bd0b-985245018f61","Type":"ContainerStarted","Data":"55a84038a82cbab0c55e1feb470b790a98a80f860721546bbeefe21b347e67cb"} Feb 17 15:34:18.762383 master-0 kubenswrapper[26425]: I0217 15:34:18.761951 26425 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=1.7619218760000002 podStartE2EDuration="1.761921876s" podCreationTimestamp="2026-02-17 15:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:34:18.761817993 +0000 UTC m=+1120.653541841" watchObservedRunningTime="2026-02-17 15:34:18.761921876 +0000 UTC m=+1120.653645764" Feb 17 15:34:20.306295 master-0 kubenswrapper[26425]: I0217 15:34:20.306183 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body= Feb 17 15:34:20.306295 master-0 kubenswrapper[26425]: I0217 15:34:20.306269 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" Feb 17 15:34:22.613781 master-0 kubenswrapper[26425]: I0217 15:34:22.613691 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 17 15:34:22.614745 master-0 kubenswrapper[26425]: I0217 15:34:22.614008 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-5-master-0" podUID="7ada525a-db93-45c8-bd0b-985245018f61" containerName="installer" containerID="cri-o://ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61" gracePeriod=30 Feb 17 15:34:24.424245 master-0 kubenswrapper[26425]: I0217 15:34:24.424182 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Feb 17 15:34:24.424791 master-0 kubenswrapper[26425]: I0217 15:34:24.424258 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Feb 17 15:34:25.809920 master-0 kubenswrapper[26425]: I0217 15:34:25.809807 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 17 15:34:25.812405 master-0 kubenswrapper[26425]: I0217 15:34:25.811840 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Feb 17 15:34:25.845921 master-0 kubenswrapper[26425]: I0217 15:34:25.845829 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 17 15:34:25.896194 master-0 kubenswrapper[26425]: I0217 15:34:25.896070 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a34b86e7-e7af-492c-86d6-95fc9155d958-kube-api-access\") pod \"installer-6-master-0\" (UID: \"a34b86e7-e7af-492c-86d6-95fc9155d958\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 17 15:34:25.896727 master-0 kubenswrapper[26425]: I0217 15:34:25.896671 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a34b86e7-e7af-492c-86d6-95fc9155d958-var-lock\") pod \"installer-6-master-0\" (UID: \"a34b86e7-e7af-492c-86d6-95fc9155d958\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 17 15:34:25.897043 master-0 kubenswrapper[26425]: I0217 15:34:25.896938 26425 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a34b86e7-e7af-492c-86d6-95fc9155d958-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"a34b86e7-e7af-492c-86d6-95fc9155d958\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 17 15:34:25.999343 master-0 kubenswrapper[26425]: I0217 15:34:25.999190 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a34b86e7-e7af-492c-86d6-95fc9155d958-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"a34b86e7-e7af-492c-86d6-95fc9155d958\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 17 15:34:25.999712 master-0 kubenswrapper[26425]: I0217 15:34:25.999359 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a34b86e7-e7af-492c-86d6-95fc9155d958-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"a34b86e7-e7af-492c-86d6-95fc9155d958\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 17 15:34:25.999712 master-0 kubenswrapper[26425]: I0217 15:34:25.999408 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a34b86e7-e7af-492c-86d6-95fc9155d958-kube-api-access\") pod \"installer-6-master-0\" (UID: \"a34b86e7-e7af-492c-86d6-95fc9155d958\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 17 15:34:25.999712 master-0 kubenswrapper[26425]: I0217 15:34:25.999625 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a34b86e7-e7af-492c-86d6-95fc9155d958-var-lock\") pod \"installer-6-master-0\" (UID: \"a34b86e7-e7af-492c-86d6-95fc9155d958\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 17 15:34:25.999996 master-0 kubenswrapper[26425]: I0217 15:34:25.999784 26425 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a34b86e7-e7af-492c-86d6-95fc9155d958-var-lock\") pod \"installer-6-master-0\" (UID: \"a34b86e7-e7af-492c-86d6-95fc9155d958\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 17 15:34:26.030076 master-0 kubenswrapper[26425]: I0217 15:34:26.029983 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a34b86e7-e7af-492c-86d6-95fc9155d958-kube-api-access\") pod \"installer-6-master-0\" (UID: \"a34b86e7-e7af-492c-86d6-95fc9155d958\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 17 15:34:26.174557 master-0 kubenswrapper[26425]: I0217 15:34:26.174371 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Feb 17 15:34:26.694199 master-0 kubenswrapper[26425]: I0217 15:34:26.694139 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 17 15:34:26.832331 master-0 kubenswrapper[26425]: I0217 15:34:26.832220 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"a34b86e7-e7af-492c-86d6-95fc9155d958","Type":"ContainerStarted","Data":"1312f80b907b6a6578225b78957503b5e0d262b74c08ff0c26d3c261eb860767"} Feb 17 15:34:27.844829 master-0 kubenswrapper[26425]: I0217 15:34:27.844691 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"a34b86e7-e7af-492c-86d6-95fc9155d958","Type":"ContainerStarted","Data":"317109f7b69d5435c410ad9bff4b0cfd044f78c87fa10d0cd8df62649fb6d9f4"} Feb 17 15:34:27.877801 master-0 kubenswrapper[26425]: I0217 15:34:27.877639 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=2.877607607 
podStartE2EDuration="2.877607607s" podCreationTimestamp="2026-02-17 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:34:27.872914414 +0000 UTC m=+1129.764638292" watchObservedRunningTime="2026-02-17 15:34:27.877607607 +0000 UTC m=+1129.769331465" Feb 17 15:34:30.306264 master-0 kubenswrapper[26425]: I0217 15:34:30.306197 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body= Feb 17 15:34:30.307598 master-0 kubenswrapper[26425]: I0217 15:34:30.307545 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" Feb 17 15:34:34.423516 master-0 kubenswrapper[26425]: I0217 15:34:34.423440 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Feb 17 15:34:34.424300 master-0 kubenswrapper[26425]: I0217 15:34:34.423518 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Feb 17 15:34:40.305919 master-0 kubenswrapper[26425]: I0217 15:34:40.305835 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Startup 
probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body= Feb 17 15:34:40.307138 master-0 kubenswrapper[26425]: I0217 15:34:40.305970 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" Feb 17 15:34:40.414156 master-0 kubenswrapper[26425]: I0217 15:34:40.414084 26425 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 17 15:34:40.414773 master-0 kubenswrapper[26425]: I0217 15:34:40.414529 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler-cert-syncer" containerID="cri-o://f916d77fcaa30da997b385ef7ac42b673154c0b050a34bbee0b669498d494e0d" gracePeriod=30 Feb 17 15:34:40.414890 master-0 kubenswrapper[26425]: I0217 15:34:40.414769 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler-recovery-controller" containerID="cri-o://921f7978b36344d181f60d972f8df809901542b7b9ed6db91856803fe316a449" gracePeriod=30 Feb 17 15:34:40.414890 master-0 kubenswrapper[26425]: I0217 15:34:40.414816 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" containerID="cri-o://ae582cbd98ce8c9218d682341ba37ebf3194e1792a8c40deb902fb2cc032961b" gracePeriod=30 Feb 17 15:34:40.417205 master-0 kubenswrapper[26425]: I0217 
15:34:40.417101 26425 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 17 15:34:40.417552 master-0 kubenswrapper[26425]: E0217 15:34:40.417514 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler-recovery-controller" Feb 17 15:34:40.417552 master-0 kubenswrapper[26425]: I0217 15:34:40.417544 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler-recovery-controller" Feb 17 15:34:40.417865 master-0 kubenswrapper[26425]: E0217 15:34:40.417568 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" Feb 17 15:34:40.417865 master-0 kubenswrapper[26425]: I0217 15:34:40.417581 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" Feb 17 15:34:40.417865 master-0 kubenswrapper[26425]: E0217 15:34:40.417603 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler-recovery-controller" Feb 17 15:34:40.417865 master-0 kubenswrapper[26425]: I0217 15:34:40.417617 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler-recovery-controller" Feb 17 15:34:40.417865 master-0 kubenswrapper[26425]: E0217 15:34:40.417656 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952766c3a88fd12345a552f1277199f9" containerName="wait-for-host-port" Feb 17 15:34:40.417865 master-0 kubenswrapper[26425]: I0217 15:34:40.417668 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="952766c3a88fd12345a552f1277199f9" containerName="wait-for-host-port" Feb 17 15:34:40.417865 master-0 kubenswrapper[26425]: E0217 15:34:40.417712 26425 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler-cert-syncer" Feb 17 15:34:40.417865 master-0 kubenswrapper[26425]: I0217 15:34:40.417725 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler-cert-syncer" Feb 17 15:34:40.418539 master-0 kubenswrapper[26425]: I0217 15:34:40.417983 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler-recovery-controller" Feb 17 15:34:40.418539 master-0 kubenswrapper[26425]: I0217 15:34:40.418016 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler-cert-syncer" Feb 17 15:34:40.418539 master-0 kubenswrapper[26425]: I0217 15:34:40.418039 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" Feb 17 15:34:40.418539 master-0 kubenswrapper[26425]: E0217 15:34:40.418352 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" Feb 17 15:34:40.418539 master-0 kubenswrapper[26425]: I0217 15:34:40.418381 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" Feb 17 15:34:40.418884 master-0 kubenswrapper[26425]: I0217 15:34:40.418745 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler-recovery-controller" Feb 17 15:34:40.418884 master-0 kubenswrapper[26425]: I0217 15:34:40.418777 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="952766c3a88fd12345a552f1277199f9" containerName="kube-scheduler" Feb 17 15:34:40.605140 master-0 kubenswrapper[26425]: I0217 15:34:40.605040 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18675e97311741112924c894ff03f2b2-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"18675e97311741112924c894ff03f2b2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:34:40.605410 master-0 kubenswrapper[26425]: I0217 15:34:40.605326 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18675e97311741112924c894ff03f2b2-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"18675e97311741112924c894ff03f2b2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:34:40.610038 master-0 kubenswrapper[26425]: I0217 15:34:40.609981 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_952766c3a88fd12345a552f1277199f9/kube-scheduler-cert-syncer/0.log" Feb 17 15:34:40.611184 master-0 kubenswrapper[26425]: I0217 15:34:40.611129 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_952766c3a88fd12345a552f1277199f9/kube-scheduler/0.log" Feb 17 15:34:40.612721 master-0 kubenswrapper[26425]: I0217 15:34:40.612079 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:34:40.619771 master-0 kubenswrapper[26425]: I0217 15:34:40.619708 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="952766c3a88fd12345a552f1277199f9" podUID="18675e97311741112924c894ff03f2b2" Feb 17 15:34:40.706387 master-0 kubenswrapper[26425]: I0217 15:34:40.706322 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"952766c3a88fd12345a552f1277199f9\" (UID: \"952766c3a88fd12345a552f1277199f9\") " Feb 17 15:34:40.706711 master-0 kubenswrapper[26425]: I0217 15:34:40.706519 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"952766c3a88fd12345a552f1277199f9\" (UID: \"952766c3a88fd12345a552f1277199f9\") " Feb 17 15:34:40.706711 master-0 kubenswrapper[26425]: I0217 15:34:40.706561 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "952766c3a88fd12345a552f1277199f9" (UID: "952766c3a88fd12345a552f1277199f9"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:34:40.706890 master-0 kubenswrapper[26425]: I0217 15:34:40.706705 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "952766c3a88fd12345a552f1277199f9" (UID: "952766c3a88fd12345a552f1277199f9"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:34:40.707215 master-0 kubenswrapper[26425]: I0217 15:34:40.707155 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18675e97311741112924c894ff03f2b2-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"18675e97311741112924c894ff03f2b2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:34:40.707312 master-0 kubenswrapper[26425]: I0217 15:34:40.707270 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18675e97311741112924c894ff03f2b2-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"18675e97311741112924c894ff03f2b2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:34:40.707388 master-0 kubenswrapper[26425]: I0217 15:34:40.707347 26425 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:40.707388 master-0 kubenswrapper[26425]: I0217 15:34:40.707268 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18675e97311741112924c894ff03f2b2-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"18675e97311741112924c894ff03f2b2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:34:40.707570 master-0 kubenswrapper[26425]: I0217 15:34:40.707372 26425 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:40.707570 master-0 kubenswrapper[26425]: I0217 15:34:40.707371 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18675e97311741112924c894ff03f2b2-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"18675e97311741112924c894ff03f2b2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:34:40.990196 master-0 kubenswrapper[26425]: I0217 15:34:40.990136 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_952766c3a88fd12345a552f1277199f9/kube-scheduler-cert-syncer/0.log" Feb 17 15:34:40.991109 master-0 kubenswrapper[26425]: I0217 15:34:40.991072 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_952766c3a88fd12345a552f1277199f9/kube-scheduler/0.log" Feb 17 15:34:40.991895 master-0 kubenswrapper[26425]: I0217 15:34:40.991846 26425 generic.go:334] "Generic (PLEG): container finished" podID="952766c3a88fd12345a552f1277199f9" containerID="921f7978b36344d181f60d972f8df809901542b7b9ed6db91856803fe316a449" exitCode=0 Feb 17 15:34:40.991895 master-0 kubenswrapper[26425]: I0217 15:34:40.991896 26425 generic.go:334] "Generic (PLEG): container finished" podID="952766c3a88fd12345a552f1277199f9" containerID="ae582cbd98ce8c9218d682341ba37ebf3194e1792a8c40deb902fb2cc032961b" exitCode=0 Feb 17 15:34:40.992082 master-0 kubenswrapper[26425]: I0217 15:34:40.991915 26425 generic.go:334] "Generic (PLEG): container finished" podID="952766c3a88fd12345a552f1277199f9" containerID="f916d77fcaa30da997b385ef7ac42b673154c0b050a34bbee0b669498d494e0d" exitCode=2 Feb 17 15:34:40.992082 master-0 kubenswrapper[26425]: I0217 15:34:40.991972 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:34:40.992082 master-0 kubenswrapper[26425]: I0217 15:34:40.992034 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5835c841de8851cc594c071b21f8e95885283a9272de7eff7fcffb6067e8c9a" Feb 17 15:34:40.992278 master-0 kubenswrapper[26425]: I0217 15:34:40.992088 26425 scope.go:117] "RemoveContainer" containerID="5591dc378b699313a005026d26c38a2b4e16d14b25114eea56b910683dfe3933" Feb 17 15:34:40.995687 master-0 kubenswrapper[26425]: I0217 15:34:40.995636 26425 generic.go:334] "Generic (PLEG): container finished" podID="a1e08089-c2b7-40db-91c7-9bec122b227e" containerID="38692b28783da5faa3440729f9a9564a0e14f831b28788b7cc1c3bb0cf87edcb" exitCode=0 Feb 17 15:34:40.995854 master-0 kubenswrapper[26425]: I0217 15:34:40.995721 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"a1e08089-c2b7-40db-91c7-9bec122b227e","Type":"ContainerDied","Data":"38692b28783da5faa3440729f9a9564a0e14f831b28788b7cc1c3bb0cf87edcb"} Feb 17 15:34:40.996382 master-0 kubenswrapper[26425]: I0217 15:34:40.996332 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="952766c3a88fd12345a552f1277199f9" podUID="18675e97311741112924c894ff03f2b2" Feb 17 15:34:41.039549 master-0 kubenswrapper[26425]: I0217 15:34:41.039487 26425 scope.go:117] "RemoveContainer" containerID="21c7989a4696fed50634740602b415534cf6eda5f4caedd9c5df524bd3173387" Feb 17 15:34:41.096176 master-0 kubenswrapper[26425]: I0217 15:34:41.096109 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="952766c3a88fd12345a552f1277199f9" podUID="18675e97311741112924c894ff03f2b2" Feb 17 15:34:42.018394 master-0 
kubenswrapper[26425]: I0217 15:34:42.018309 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_952766c3a88fd12345a552f1277199f9/kube-scheduler-cert-syncer/0.log" Feb 17 15:34:42.404228 master-0 kubenswrapper[26425]: I0217 15:34:42.404156 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="952766c3a88fd12345a552f1277199f9" path="/var/lib/kubelet/pods/952766c3a88fd12345a552f1277199f9/volumes" Feb 17 15:34:42.469758 master-0 kubenswrapper[26425]: I0217 15:34:42.469693 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Feb 17 15:34:42.549632 master-0 kubenswrapper[26425]: I0217 15:34:42.549506 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1e08089-c2b7-40db-91c7-9bec122b227e-kube-api-access\") pod \"a1e08089-c2b7-40db-91c7-9bec122b227e\" (UID: \"a1e08089-c2b7-40db-91c7-9bec122b227e\") " Feb 17 15:34:42.549632 master-0 kubenswrapper[26425]: I0217 15:34:42.549584 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a1e08089-c2b7-40db-91c7-9bec122b227e-var-lock\") pod \"a1e08089-c2b7-40db-91c7-9bec122b227e\" (UID: \"a1e08089-c2b7-40db-91c7-9bec122b227e\") " Feb 17 15:34:42.549927 master-0 kubenswrapper[26425]: I0217 15:34:42.549717 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1e08089-c2b7-40db-91c7-9bec122b227e-kubelet-dir\") pod \"a1e08089-c2b7-40db-91c7-9bec122b227e\" (UID: \"a1e08089-c2b7-40db-91c7-9bec122b227e\") " Feb 17 15:34:42.550110 master-0 kubenswrapper[26425]: I0217 15:34:42.550067 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/a1e08089-c2b7-40db-91c7-9bec122b227e-var-lock" (OuterVolumeSpecName: "var-lock") pod "a1e08089-c2b7-40db-91c7-9bec122b227e" (UID: "a1e08089-c2b7-40db-91c7-9bec122b227e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:34:42.550226 master-0 kubenswrapper[26425]: I0217 15:34:42.550161 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1e08089-c2b7-40db-91c7-9bec122b227e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a1e08089-c2b7-40db-91c7-9bec122b227e" (UID: "a1e08089-c2b7-40db-91c7-9bec122b227e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:34:42.550604 master-0 kubenswrapper[26425]: I0217 15:34:42.550556 26425 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1e08089-c2b7-40db-91c7-9bec122b227e-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:42.550604 master-0 kubenswrapper[26425]: I0217 15:34:42.550596 26425 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a1e08089-c2b7-40db-91c7-9bec122b227e-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:42.556664 master-0 kubenswrapper[26425]: I0217 15:34:42.553098 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e08089-c2b7-40db-91c7-9bec122b227e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a1e08089-c2b7-40db-91c7-9bec122b227e" (UID: "a1e08089-c2b7-40db-91c7-9bec122b227e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:34:42.652607 master-0 kubenswrapper[26425]: I0217 15:34:42.652413 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1e08089-c2b7-40db-91c7-9bec122b227e-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:42.724381 master-0 kubenswrapper[26425]: I0217 15:34:42.724299 26425 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 17 15:34:42.724790 master-0 kubenswrapper[26425]: I0217 15:34:42.724733 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="kube-controller-manager" containerID="cri-o://a250c04983f3b0106f36a27030f78302d8c17ec6de5b6e5cded32664184f0f6e" gracePeriod=30 Feb 17 15:34:42.724949 master-0 kubenswrapper[26425]: I0217 15:34:42.724901 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="cluster-policy-controller" containerID="cri-o://2e1ff511db2c69486a763112ab46f8b9eb94ac1ab354236201ab57c41c24770d" gracePeriod=30 Feb 17 15:34:42.725040 master-0 kubenswrapper[26425]: I0217 15:34:42.724924 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://a55d7f0507bd3d765056a8a318a8966408ed2fc8a1c30292db147835ef568009" gracePeriod=30 Feb 17 15:34:42.725299 master-0 kubenswrapper[26425]: I0217 15:34:42.725196 26425 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://83a7605533fa5b7aa413240443eee3c9aad88818eb25ab4aba4528a9db5327b6" gracePeriod=30 Feb 17 15:34:42.726204 master-0 kubenswrapper[26425]: I0217 15:34:42.726177 26425 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 17 15:34:42.726771 master-0 kubenswrapper[26425]: E0217 15:34:42.726715 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="kube-controller-manager-recovery-controller" Feb 17 15:34:42.726963 master-0 kubenswrapper[26425]: I0217 15:34:42.726767 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="kube-controller-manager-recovery-controller" Feb 17 15:34:42.726963 master-0 kubenswrapper[26425]: E0217 15:34:42.726804 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="cluster-policy-controller" Feb 17 15:34:42.726963 master-0 kubenswrapper[26425]: I0217 15:34:42.726822 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="cluster-policy-controller" Feb 17 15:34:42.726963 master-0 kubenswrapper[26425]: E0217 15:34:42.726853 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="kube-controller-manager-cert-syncer" Feb 17 15:34:42.726963 master-0 kubenswrapper[26425]: I0217 15:34:42.726871 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="kube-controller-manager-cert-syncer" Feb 17 15:34:42.726963 master-0 kubenswrapper[26425]: E0217 15:34:42.726907 26425 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a1e08089-c2b7-40db-91c7-9bec122b227e" containerName="installer" Feb 17 15:34:42.726963 master-0 kubenswrapper[26425]: I0217 15:34:42.726924 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e08089-c2b7-40db-91c7-9bec122b227e" containerName="installer" Feb 17 15:34:42.727835 master-0 kubenswrapper[26425]: E0217 15:34:42.727016 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="kube-controller-manager" Feb 17 15:34:42.727835 master-0 kubenswrapper[26425]: I0217 15:34:42.727036 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="kube-controller-manager" Feb 17 15:34:42.727835 master-0 kubenswrapper[26425]: I0217 15:34:42.727311 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e08089-c2b7-40db-91c7-9bec122b227e" containerName="installer" Feb 17 15:34:42.727835 master-0 kubenswrapper[26425]: I0217 15:34:42.727350 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="kube-controller-manager-recovery-controller" Feb 17 15:34:42.727835 master-0 kubenswrapper[26425]: I0217 15:34:42.727408 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="cluster-policy-controller" Feb 17 15:34:42.727835 master-0 kubenswrapper[26425]: I0217 15:34:42.727435 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="kube-controller-manager" Feb 17 15:34:42.727835 master-0 kubenswrapper[26425]: I0217 15:34:42.727489 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22abb517ba13d9db4b0c15e80ada3fe" containerName="kube-controller-manager-cert-syncer" Feb 17 15:34:42.857175 master-0 kubenswrapper[26425]: I0217 15:34:42.857088 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eaff449c5bcc0e8cb13ed26ccbcdd311-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"eaff449c5bcc0e8cb13ed26ccbcdd311\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:34:42.857886 master-0 kubenswrapper[26425]: I0217 15:34:42.857790 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eaff449c5bcc0e8cb13ed26ccbcdd311-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"eaff449c5bcc0e8cb13ed26ccbcdd311\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:34:42.959595 master-0 kubenswrapper[26425]: I0217 15:34:42.959489 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eaff449c5bcc0e8cb13ed26ccbcdd311-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"eaff449c5bcc0e8cb13ed26ccbcdd311\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:34:42.959794 master-0 kubenswrapper[26425]: I0217 15:34:42.959750 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eaff449c5bcc0e8cb13ed26ccbcdd311-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"eaff449c5bcc0e8cb13ed26ccbcdd311\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:34:42.959794 master-0 kubenswrapper[26425]: I0217 15:34:42.959759 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eaff449c5bcc0e8cb13ed26ccbcdd311-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"eaff449c5bcc0e8cb13ed26ccbcdd311\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:34:42.960065 master-0 
kubenswrapper[26425]: I0217 15:34:42.959851 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eaff449c5bcc0e8cb13ed26ccbcdd311-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"eaff449c5bcc0e8cb13ed26ccbcdd311\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:34:43.017786 master-0 kubenswrapper[26425]: I0217 15:34:43.017709 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c22abb517ba13d9db4b0c15e80ada3fe/kube-controller-manager-cert-syncer/0.log" Feb 17 15:34:43.019306 master-0 kubenswrapper[26425]: I0217 15:34:43.019253 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:34:43.024773 master-0 kubenswrapper[26425]: I0217 15:34:43.024705 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="c22abb517ba13d9db4b0c15e80ada3fe" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" Feb 17 15:34:43.036773 master-0 kubenswrapper[26425]: I0217 15:34:43.036697 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c22abb517ba13d9db4b0c15e80ada3fe/kube-controller-manager-cert-syncer/0.log" Feb 17 15:34:43.038573 master-0 kubenswrapper[26425]: I0217 15:34:43.038514 26425 generic.go:334] "Generic (PLEG): container finished" podID="c22abb517ba13d9db4b0c15e80ada3fe" containerID="83a7605533fa5b7aa413240443eee3c9aad88818eb25ab4aba4528a9db5327b6" exitCode=0 Feb 17 15:34:43.038663 master-0 kubenswrapper[26425]: I0217 15:34:43.038615 26425 generic.go:334] "Generic (PLEG): container finished" podID="c22abb517ba13d9db4b0c15e80ada3fe" 
containerID="a55d7f0507bd3d765056a8a318a8966408ed2fc8a1c30292db147835ef568009" exitCode=2 Feb 17 15:34:43.038717 master-0 kubenswrapper[26425]: I0217 15:34:43.038646 26425 generic.go:334] "Generic (PLEG): container finished" podID="c22abb517ba13d9db4b0c15e80ada3fe" containerID="2e1ff511db2c69486a763112ab46f8b9eb94ac1ab354236201ab57c41c24770d" exitCode=0 Feb 17 15:34:43.038762 master-0 kubenswrapper[26425]: I0217 15:34:43.038714 26425 generic.go:334] "Generic (PLEG): container finished" podID="c22abb517ba13d9db4b0c15e80ada3fe" containerID="a250c04983f3b0106f36a27030f78302d8c17ec6de5b6e5cded32664184f0f6e" exitCode=0 Feb 17 15:34:43.038805 master-0 kubenswrapper[26425]: I0217 15:34:43.038618 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:34:43.038971 master-0 kubenswrapper[26425]: I0217 15:34:43.038927 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4451646ea134e0528ffd06753e38b0852a747e94c43489277e3649ac76a1cbd" Feb 17 15:34:43.043907 master-0 kubenswrapper[26425]: I0217 15:34:43.043793 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="c22abb517ba13d9db4b0c15e80ada3fe" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" Feb 17 15:34:43.044047 master-0 kubenswrapper[26425]: I0217 15:34:43.043832 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"a1e08089-c2b7-40db-91c7-9bec122b227e","Type":"ContainerDied","Data":"7007b4f00f07475f9d3ca30ce92bbcbb45c7137b38d528589218abfe7b28d698"} Feb 17 15:34:43.044047 master-0 kubenswrapper[26425]: I0217 15:34:43.043965 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7007b4f00f07475f9d3ca30ce92bbcbb45c7137b38d528589218abfe7b28d698" Feb 17 15:34:43.044213 
master-0 kubenswrapper[26425]: I0217 15:34:43.043915 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Feb 17 15:34:43.048810 master-0 kubenswrapper[26425]: I0217 15:34:43.048744 26425 generic.go:334] "Generic (PLEG): container finished" podID="3109bbc1-f0f3-4d9a-8438-61ebf59b402b" containerID="46fb529ffb8fb29babb31d5b8fd8c50ccbb69d0ad39c3fc9027ef2dd0962d205" exitCode=0 Feb 17 15:34:43.048940 master-0 kubenswrapper[26425]: I0217 15:34:43.048814 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"3109bbc1-f0f3-4d9a-8438-61ebf59b402b","Type":"ContainerDied","Data":"46fb529ffb8fb29babb31d5b8fd8c50ccbb69d0ad39c3fc9027ef2dd0962d205"} Feb 17 15:34:43.162877 master-0 kubenswrapper[26425]: I0217 15:34:43.162780 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c22abb517ba13d9db4b0c15e80ada3fe-resource-dir\") pod \"c22abb517ba13d9db4b0c15e80ada3fe\" (UID: \"c22abb517ba13d9db4b0c15e80ada3fe\") " Feb 17 15:34:43.163136 master-0 kubenswrapper[26425]: I0217 15:34:43.163024 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c22abb517ba13d9db4b0c15e80ada3fe-cert-dir\") pod \"c22abb517ba13d9db4b0c15e80ada3fe\" (UID: \"c22abb517ba13d9db4b0c15e80ada3fe\") " Feb 17 15:34:43.163210 master-0 kubenswrapper[26425]: I0217 15:34:43.163159 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c22abb517ba13d9db4b0c15e80ada3fe-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "c22abb517ba13d9db4b0c15e80ada3fe" (UID: "c22abb517ba13d9db4b0c15e80ada3fe"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:34:43.163319 master-0 kubenswrapper[26425]: I0217 15:34:43.163258 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c22abb517ba13d9db4b0c15e80ada3fe-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "c22abb517ba13d9db4b0c15e80ada3fe" (UID: "c22abb517ba13d9db4b0c15e80ada3fe"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:34:43.163721 master-0 kubenswrapper[26425]: I0217 15:34:43.163674 26425 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c22abb517ba13d9db4b0c15e80ada3fe-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:43.163721 master-0 kubenswrapper[26425]: I0217 15:34:43.163712 26425 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c22abb517ba13d9db4b0c15e80ada3fe-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:43.372571 master-0 kubenswrapper[26425]: I0217 15:34:43.372495 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="c22abb517ba13d9db4b0c15e80ada3fe" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" Feb 17 15:34:44.409551 master-0 kubenswrapper[26425]: I0217 15:34:44.409429 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c22abb517ba13d9db4b0c15e80ada3fe" path="/var/lib/kubelet/pods/c22abb517ba13d9db4b0c15e80ada3fe/volumes" Feb 17 15:34:44.423649 master-0 kubenswrapper[26425]: I0217 15:34:44.423576 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Feb 17 15:34:44.423847 master-0 
kubenswrapper[26425]: I0217 15:34:44.423661 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Feb 17 15:34:44.506020 master-0 kubenswrapper[26425]: I0217 15:34:44.505582 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 17 15:34:44.597752 master-0 kubenswrapper[26425]: I0217 15:34:44.597689 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-kubelet-dir\") pod \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\" (UID: \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\") " Feb 17 15:34:44.597986 master-0 kubenswrapper[26425]: I0217 15:34:44.597794 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-var-lock\") pod \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\" (UID: \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\") " Feb 17 15:34:44.597986 master-0 kubenswrapper[26425]: I0217 15:34:44.597863 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-kube-api-access\") pod \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\" (UID: \"3109bbc1-f0f3-4d9a-8438-61ebf59b402b\") " Feb 17 15:34:44.598108 master-0 kubenswrapper[26425]: I0217 15:34:44.597851 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3109bbc1-f0f3-4d9a-8438-61ebf59b402b" (UID: 
"3109bbc1-f0f3-4d9a-8438-61ebf59b402b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:34:44.598212 master-0 kubenswrapper[26425]: I0217 15:34:44.597879 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-var-lock" (OuterVolumeSpecName: "var-lock") pod "3109bbc1-f0f3-4d9a-8438-61ebf59b402b" (UID: "3109bbc1-f0f3-4d9a-8438-61ebf59b402b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:34:44.598539 master-0 kubenswrapper[26425]: I0217 15:34:44.598441 26425 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:44.598590 master-0 kubenswrapper[26425]: I0217 15:34:44.598542 26425 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:44.600619 master-0 kubenswrapper[26425]: I0217 15:34:44.600558 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3109bbc1-f0f3-4d9a-8438-61ebf59b402b" (UID: "3109bbc1-f0f3-4d9a-8438-61ebf59b402b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:34:44.700279 master-0 kubenswrapper[26425]: I0217 15:34:44.700160 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3109bbc1-f0f3-4d9a-8438-61ebf59b402b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:45.074300 master-0 kubenswrapper[26425]: I0217 15:34:45.074099 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"3109bbc1-f0f3-4d9a-8438-61ebf59b402b","Type":"ContainerDied","Data":"845b99796c5f85574a54e549d15a638e0c92173cde3cba13a0bc4b76837458a6"} Feb 17 15:34:45.074300 master-0 kubenswrapper[26425]: I0217 15:34:45.074162 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="845b99796c5f85574a54e549d15a638e0c92173cde3cba13a0bc4b76837458a6" Feb 17 15:34:45.074300 master-0 kubenswrapper[26425]: I0217 15:34:45.074217 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 17 15:34:45.774199 master-0 kubenswrapper[26425]: I0217 15:34:45.774138 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:34:45.829774 master-0 kubenswrapper[26425]: I0217 15:34:45.829638 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:34:46.127139 master-0 kubenswrapper[26425]: I0217 15:34:46.127101 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 15:34:49.726843 master-0 kubenswrapper[26425]: E0217 15:34:49.726698 26425 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22abb517ba13d9db4b0c15e80ada3fe.slice/crio-83a7605533fa5b7aa413240443eee3c9aad88818eb25ab4aba4528a9db5327b6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-poda1e08089_c2b7_40db_91c7_9bec122b227e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod7ada525a_db93_45c8_bd0b_985245018f61.slice/crio-ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod3109bbc1_f0f3_4d9a_8438_61ebf59b402b.slice/crio-conmon-46fb529ffb8fb29babb31d5b8fd8c50ccbb69d0ad39c3fc9027ef2dd0962d205.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod7ada525a_db93_45c8_bd0b_985245018f61.slice/crio-conmon-ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod952766c3a88fd12345a552f1277199f9.slice/crio-conmon-f916d77fcaa30da997b385ef7ac42b673154c0b050a34bbee0b669498d494e0d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22abb517ba13d9db4b0c15e80ada3fe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod3109bbc1_f0f3_4d9a_8438_61ebf59b402b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod952766c3a88fd12345a552f1277199f9.slice/crio-ae582cbd98ce8c9218d682341ba37ebf3194e1792a8c40deb902fb2cc032961b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-poda1e08089_c2b7_40db_91c7_9bec122b227e.slice/crio-7007b4f00f07475f9d3ca30ce92bbcbb45c7137b38d528589218abfe7b28d698\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22abb517ba13d9db4b0c15e80ada3fe.slice/crio-conmon-2e1ff511db2c69486a763112ab46f8b9eb94ac1ab354236201ab57c41c24770d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod952766c3a88fd12345a552f1277199f9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod952766c3a88fd12345a552f1277199f9.slice/crio-f916d77fcaa30da997b385ef7ac42b673154c0b050a34bbee0b669498d494e0d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-poda1e08089_c2b7_40db_91c7_9bec122b227e.slice/crio-38692b28783da5faa3440729f9a9564a0e14f831b28788b7cc1c3bb0cf87edcb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22abb517ba13d9db4b0c15e80ada3fe.slice/crio-conmon-a250c04983f3b0106f36a27030f78302d8c17ec6de5b6e5cded32664184f0f6e.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22abb517ba13d9db4b0c15e80ada3fe.slice/crio-f4451646ea134e0528ffd06753e38b0852a747e94c43489277e3649ac76a1cbd\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod952766c3a88fd12345a552f1277199f9.slice/crio-c5835c841de8851cc594c071b21f8e95885283a9272de7eff7fcffb6067e8c9a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22abb517ba13d9db4b0c15e80ada3fe.slice/crio-conmon-83a7605533fa5b7aa413240443eee3c9aad88818eb25ab4aba4528a9db5327b6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22abb517ba13d9db4b0c15e80ada3fe.slice/crio-a55d7f0507bd3d765056a8a318a8966408ed2fc8a1c30292db147835ef568009.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22abb517ba13d9db4b0c15e80ada3fe.slice/crio-2e1ff511db2c69486a763112ab46f8b9eb94ac1ab354236201ab57c41c24770d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod3109bbc1_f0f3_4d9a_8438_61ebf59b402b.slice/crio-845b99796c5f85574a54e549d15a638e0c92173cde3cba13a0bc4b76837458a6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod952766c3a88fd12345a552f1277199f9.slice/crio-conmon-921f7978b36344d181f60d972f8df809901542b7b9ed6db91856803fe316a449.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22abb517ba13d9db4b0c15e80ada3fe.slice/crio-a250c04983f3b0106f36a27030f78302d8c17ec6de5b6e5cded32664184f0f6e.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-pod3109bbc1_f0f3_4d9a_8438_61ebf59b402b.slice/crio-46fb529ffb8fb29babb31d5b8fd8c50ccbb69d0ad39c3fc9027ef2dd0962d205.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-poda1e08089_c2b7_40db_91c7_9bec122b227e.slice/crio-conmon-38692b28783da5faa3440729f9a9564a0e14f831b28788b7cc1c3bb0cf87edcb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod952766c3a88fd12345a552f1277199f9.slice/crio-conmon-ae582cbd98ce8c9218d682341ba37ebf3194e1792a8c40deb902fb2cc032961b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22abb517ba13d9db4b0c15e80ada3fe.slice/crio-conmon-a55d7f0507bd3d765056a8a318a8966408ed2fc8a1c30292db147835ef568009.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod952766c3a88fd12345a552f1277199f9.slice/crio-921f7978b36344d181f60d972f8df809901542b7b9ed6db91856803fe316a449.scope\": RecentStats: unable to find data in memory cache]" Feb 17 15:34:50.028477 master-0 kubenswrapper[26425]: I0217 15:34:50.028401 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_7ada525a-db93-45c8-bd0b-985245018f61/installer/0.log" Feb 17 15:34:50.028676 master-0 kubenswrapper[26425]: I0217 15:34:50.028491 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 17 15:34:50.117015 master-0 kubenswrapper[26425]: I0217 15:34:50.116913 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ada525a-db93-45c8-bd0b-985245018f61-kube-api-access\") pod \"7ada525a-db93-45c8-bd0b-985245018f61\" (UID: \"7ada525a-db93-45c8-bd0b-985245018f61\") " Feb 17 15:34:50.117319 master-0 kubenswrapper[26425]: I0217 15:34:50.117133 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ada525a-db93-45c8-bd0b-985245018f61-kubelet-dir\") pod \"7ada525a-db93-45c8-bd0b-985245018f61\" (UID: \"7ada525a-db93-45c8-bd0b-985245018f61\") " Feb 17 15:34:50.117319 master-0 kubenswrapper[26425]: I0217 15:34:50.117235 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ada525a-db93-45c8-bd0b-985245018f61-var-lock\") pod \"7ada525a-db93-45c8-bd0b-985245018f61\" (UID: \"7ada525a-db93-45c8-bd0b-985245018f61\") " Feb 17 15:34:50.117574 master-0 kubenswrapper[26425]: I0217 15:34:50.117359 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ada525a-db93-45c8-bd0b-985245018f61-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7ada525a-db93-45c8-bd0b-985245018f61" (UID: "7ada525a-db93-45c8-bd0b-985245018f61"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:34:50.117574 master-0 kubenswrapper[26425]: I0217 15:34:50.117411 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ada525a-db93-45c8-bd0b-985245018f61-var-lock" (OuterVolumeSpecName: "var-lock") pod "7ada525a-db93-45c8-bd0b-985245018f61" (UID: "7ada525a-db93-45c8-bd0b-985245018f61"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:34:50.117798 master-0 kubenswrapper[26425]: I0217 15:34:50.117778 26425 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ada525a-db93-45c8-bd0b-985245018f61-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:50.117899 master-0 kubenswrapper[26425]: I0217 15:34:50.117805 26425 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ada525a-db93-45c8-bd0b-985245018f61-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:34:50.122299 master-0 kubenswrapper[26425]: I0217 15:34:50.121987 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ada525a-db93-45c8-bd0b-985245018f61-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7ada525a-db93-45c8-bd0b-985245018f61" (UID: "7ada525a-db93-45c8-bd0b-985245018f61"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:34:50.124344 master-0 kubenswrapper[26425]: I0217 15:34:50.124288 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_7ada525a-db93-45c8-bd0b-985245018f61/installer/0.log" Feb 17 15:34:50.124551 master-0 kubenswrapper[26425]: I0217 15:34:50.124354 26425 generic.go:334] "Generic (PLEG): container finished" podID="7ada525a-db93-45c8-bd0b-985245018f61" containerID="ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61" exitCode=1 Feb 17 15:34:50.124551 master-0 kubenswrapper[26425]: I0217 15:34:50.124385 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"7ada525a-db93-45c8-bd0b-985245018f61","Type":"ContainerDied","Data":"ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61"} Feb 17 15:34:50.124551 master-0 kubenswrapper[26425]: I0217 15:34:50.124422 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"7ada525a-db93-45c8-bd0b-985245018f61","Type":"ContainerDied","Data":"55a84038a82cbab0c55e1feb470b790a98a80f860721546bbeefe21b347e67cb"} Feb 17 15:34:50.124551 master-0 kubenswrapper[26425]: I0217 15:34:50.124443 26425 scope.go:117] "RemoveContainer" containerID="ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61" Feb 17 15:34:50.124551 master-0 kubenswrapper[26425]: I0217 15:34:50.124522 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 17 15:34:50.157043 master-0 kubenswrapper[26425]: I0217 15:34:50.156978 26425 scope.go:117] "RemoveContainer" containerID="ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61" Feb 17 15:34:50.157686 master-0 kubenswrapper[26425]: E0217 15:34:50.157608 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61\": container with ID starting with ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61 not found: ID does not exist" containerID="ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61" Feb 17 15:34:50.157686 master-0 kubenswrapper[26425]: I0217 15:34:50.157671 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61"} err="failed to get container status \"ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61\": rpc error: code = NotFound desc = could not find container \"ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61\": container with ID starting with ade762fc9a810512927ba55a77df230ca252979c608895e09d40719c3c81fb61 not found: ID does not exist" Feb 17 15:34:50.187570 master-0 kubenswrapper[26425]: I0217 15:34:50.187496 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 17 15:34:50.199303 master-0 kubenswrapper[26425]: I0217 15:34:50.199239 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 17 15:34:50.219429 master-0 kubenswrapper[26425]: I0217 15:34:50.219269 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ada525a-db93-45c8-bd0b-985245018f61-kube-api-access\") on node \"master-0\" 
DevicePath \"\"" Feb 17 15:34:50.305670 master-0 kubenswrapper[26425]: I0217 15:34:50.305577 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body= Feb 17 15:34:50.305670 master-0 kubenswrapper[26425]: I0217 15:34:50.305655 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" Feb 17 15:34:50.410503 master-0 kubenswrapper[26425]: I0217 15:34:50.410386 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ada525a-db93-45c8-bd0b-985245018f61" path="/var/lib/kubelet/pods/7ada525a-db93-45c8-bd0b-985245018f61/volumes" Feb 17 15:34:51.395137 master-0 kubenswrapper[26425]: I0217 15:34:51.395058 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:34:51.431518 master-0 kubenswrapper[26425]: I0217 15:34:51.431417 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="3eed3544-3d2b-49d2-8e48-76abbda73b93" Feb 17 15:34:51.431518 master-0 kubenswrapper[26425]: I0217 15:34:51.431505 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="3eed3544-3d2b-49d2-8e48-76abbda73b93" Feb 17 15:34:51.452961 master-0 kubenswrapper[26425]: I0217 15:34:51.452885 26425 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:34:51.459353 master-0 kubenswrapper[26425]: I0217 15:34:51.459249 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 17 15:34:51.472976 master-0 kubenswrapper[26425]: I0217 15:34:51.472915 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:34:51.473716 master-0 kubenswrapper[26425]: I0217 15:34:51.473612 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 17 15:34:51.485347 master-0 kubenswrapper[26425]: I0217 15:34:51.485279 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 17 15:34:51.508095 master-0 kubenswrapper[26425]: W0217 15:34:51.507812 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18675e97311741112924c894ff03f2b2.slice/crio-df815fee1fdec12e9e509155573ea5fdc5da524775ca30659563c217def54855 WatchSource:0}: Error finding container df815fee1fdec12e9e509155573ea5fdc5da524775ca30659563c217def54855: Status 404 returned error can't find the container with id df815fee1fdec12e9e509155573ea5fdc5da524775ca30659563c217def54855 Feb 17 15:34:52.152224 master-0 kubenswrapper[26425]: I0217 15:34:52.152181 26425 generic.go:334] "Generic (PLEG): container finished" podID="18675e97311741112924c894ff03f2b2" containerID="154a62ba8556599985b0023ceadaa75f0bafe7d5744d7c5ab46fe8c1e5a556a5" exitCode=0 Feb 17 15:34:52.152488 master-0 kubenswrapper[26425]: I0217 15:34:52.152255 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"18675e97311741112924c894ff03f2b2","Type":"ContainerDied","Data":"154a62ba8556599985b0023ceadaa75f0bafe7d5744d7c5ab46fe8c1e5a556a5"} Feb 17 15:34:52.152592 master-0 kubenswrapper[26425]: I0217 15:34:52.152574 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"18675e97311741112924c894ff03f2b2","Type":"ContainerStarted","Data":"df815fee1fdec12e9e509155573ea5fdc5da524775ca30659563c217def54855"} Feb 17 
15:34:53.163138 master-0 kubenswrapper[26425]: I0217 15:34:53.163076 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"18675e97311741112924c894ff03f2b2","Type":"ContainerStarted","Data":"4b82c4ca69db1f0b25aca5b4396cb6041aa25649b7ae1aeec98e35257381c358"} Feb 17 15:34:53.163138 master-0 kubenswrapper[26425]: I0217 15:34:53.163129 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"18675e97311741112924c894ff03f2b2","Type":"ContainerStarted","Data":"8561d42b26d597aedeba215fc4ab904da9d4d06b603150dd78a820e35a182f48"} Feb 17 15:34:53.163138 master-0 kubenswrapper[26425]: I0217 15:34:53.163143 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"18675e97311741112924c894ff03f2b2","Type":"ContainerStarted","Data":"ce221514dc332caead3f14bd496fafce275f41dc71f210c4e32da0c03299ffe1"} Feb 17 15:34:53.163792 master-0 kubenswrapper[26425]: I0217 15:34:53.163765 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 17 15:34:53.189350 master-0 kubenswrapper[26425]: I0217 15:34:53.189277 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.189259254 podStartE2EDuration="2.189259254s" podCreationTimestamp="2026-02-17 15:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:34:53.184971032 +0000 UTC m=+1155.076694860" watchObservedRunningTime="2026-02-17 15:34:53.189259254 +0000 UTC m=+1155.080983072" Feb 17 15:34:54.423664 master-0 kubenswrapper[26425]: I0217 15:34:54.423547 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 
container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Feb 17 15:34:54.423664 master-0 kubenswrapper[26425]: I0217 15:34:54.423653 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Feb 17 15:34:57.395133 master-0 kubenswrapper[26425]: I0217 15:34:57.395033 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:34:57.430061 master-0 kubenswrapper[26425]: I0217 15:34:57.429978 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5405abb4-957b-4028-b760-ad0e0f5d6110" Feb 17 15:34:57.430061 master-0 kubenswrapper[26425]: I0217 15:34:57.430044 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5405abb4-957b-4028-b760-ad0e0f5d6110" Feb 17 15:34:57.455776 master-0 kubenswrapper[26425]: I0217 15:34:57.455576 26425 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:34:57.457770 master-0 kubenswrapper[26425]: I0217 15:34:57.457712 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 17 15:34:57.470177 master-0 kubenswrapper[26425]: I0217 15:34:57.469935 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 17 15:34:57.475895 master-0 kubenswrapper[26425]: I0217 15:34:57.475848 
26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:34:57.486921 master-0 kubenswrapper[26425]: I0217 15:34:57.486849 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 17 15:34:57.509998 master-0 kubenswrapper[26425]: W0217 15:34:57.509910 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaff449c5bcc0e8cb13ed26ccbcdd311.slice/crio-7b777f11905a08a05c666a26ddcdbc52049719c010842fcb74dcbf097b130693 WatchSource:0}: Error finding container 7b777f11905a08a05c666a26ddcdbc52049719c010842fcb74dcbf097b130693: Status 404 returned error can't find the container with id 7b777f11905a08a05c666a26ddcdbc52049719c010842fcb74dcbf097b130693 Feb 17 15:34:58.216043 master-0 kubenswrapper[26425]: I0217 15:34:58.215956 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"eaff449c5bcc0e8cb13ed26ccbcdd311","Type":"ContainerStarted","Data":"57141bfc1a0a1d8e52afad3e9b378c7a4dd9c37db878ece93dd489f7a847dcce"} Feb 17 15:34:58.216043 master-0 kubenswrapper[26425]: I0217 15:34:58.216008 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"eaff449c5bcc0e8cb13ed26ccbcdd311","Type":"ContainerStarted","Data":"7b777f11905a08a05c666a26ddcdbc52049719c010842fcb74dcbf097b130693"} Feb 17 15:34:59.229477 master-0 kubenswrapper[26425]: I0217 15:34:59.229372 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"eaff449c5bcc0e8cb13ed26ccbcdd311","Type":"ContainerStarted","Data":"d7c12fb1b92d28ef7ba81926d7b090d49d50669135d83d19da43eab3563fbe49"} Feb 17 15:34:59.229477 master-0 
kubenswrapper[26425]: I0217 15:34:59.229433 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"eaff449c5bcc0e8cb13ed26ccbcdd311","Type":"ContainerStarted","Data":"e9aecde5e6438f850dbad5ae273e3c99bc8982f855499ceec4aa52f9bb199b51"} Feb 17 15:34:59.229477 master-0 kubenswrapper[26425]: I0217 15:34:59.229447 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"eaff449c5bcc0e8cb13ed26ccbcdd311","Type":"ContainerStarted","Data":"7bd7a427fdfea568f9e25f8ac1dfa94717d2fe4a7b16f61327856994d3fecf37"} Feb 17 15:34:59.266152 master-0 kubenswrapper[26425]: I0217 15:34:59.266045 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.266015573 podStartE2EDuration="2.266015573s" podCreationTimestamp="2026-02-17 15:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:34:59.255263105 +0000 UTC m=+1161.146986953" watchObservedRunningTime="2026-02-17 15:34:59.266015573 +0000 UTC m=+1161.157739421" Feb 17 15:35:00.306286 master-0 kubenswrapper[26425]: I0217 15:35:00.306176 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body= Feb 17 15:35:00.306286 master-0 kubenswrapper[26425]: I0217 15:35:00.306276 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection 
refused" Feb 17 15:35:04.424031 master-0 kubenswrapper[26425]: I0217 15:35:04.423925 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Feb 17 15:35:04.424031 master-0 kubenswrapper[26425]: I0217 15:35:04.424030 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Feb 17 15:35:07.476954 master-0 kubenswrapper[26425]: I0217 15:35:07.476866 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:35:07.476954 master-0 kubenswrapper[26425]: I0217 15:35:07.476916 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:35:07.478126 master-0 kubenswrapper[26425]: I0217 15:35:07.477029 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:35:07.478126 master-0 kubenswrapper[26425]: I0217 15:35:07.477487 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:35:07.484371 master-0 kubenswrapper[26425]: I0217 15:35:07.484303 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:35:07.485063 master-0 kubenswrapper[26425]: I0217 15:35:07.485016 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:35:08.331944 master-0 kubenswrapper[26425]: I0217 15:35:08.331860 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:35:08.333862 master-0 kubenswrapper[26425]: I0217 15:35:08.333788 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:35:10.306841 master-0 kubenswrapper[26425]: I0217 15:35:10.306731 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body= Feb 17 15:35:10.307985 master-0 kubenswrapper[26425]: I0217 15:35:10.306865 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" Feb 17 15:35:14.424223 master-0 kubenswrapper[26425]: I0217 15:35:14.424149 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Feb 17 15:35:14.427356 master-0 kubenswrapper[26425]: I0217 15:35:14.424235 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Feb 17 15:35:20.306566 master-0 
kubenswrapper[26425]: I0217 15:35:20.306440 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body= Feb 17 15:35:20.307518 master-0 kubenswrapper[26425]: I0217 15:35:20.306584 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" Feb 17 15:35:24.423871 master-0 kubenswrapper[26425]: I0217 15:35:24.423522 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Feb 17 15:35:24.423871 master-0 kubenswrapper[26425]: I0217 15:35:24.423621 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Feb 17 15:35:30.306504 master-0 kubenswrapper[26425]: I0217 15:35:30.306382 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body= Feb 17 15:35:30.307835 master-0 kubenswrapper[26425]: I0217 15:35:30.306588 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" 
containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" Feb 17 15:35:34.423632 master-0 kubenswrapper[26425]: I0217 15:35:34.423552 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Feb 17 15:35:34.424588 master-0 kubenswrapper[26425]: I0217 15:35:34.423640 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Feb 17 15:35:35.292923 master-0 kubenswrapper[26425]: I0217 15:35:35.292826 26425 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 17 15:35:35.293421 master-0 kubenswrapper[26425]: E0217 15:35:35.293382 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ada525a-db93-45c8-bd0b-985245018f61" containerName="installer" Feb 17 15:35:35.293624 master-0 kubenswrapper[26425]: I0217 15:35:35.293425 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ada525a-db93-45c8-bd0b-985245018f61" containerName="installer" Feb 17 15:35:35.293624 master-0 kubenswrapper[26425]: E0217 15:35:35.293448 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3109bbc1-f0f3-4d9a-8438-61ebf59b402b" containerName="installer" Feb 17 15:35:35.293624 master-0 kubenswrapper[26425]: I0217 15:35:35.293500 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="3109bbc1-f0f3-4d9a-8438-61ebf59b402b" containerName="installer" Feb 17 15:35:35.294099 master-0 kubenswrapper[26425]: I0217 15:35:35.293848 26425 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3109bbc1-f0f3-4d9a-8438-61ebf59b402b" containerName="installer" Feb 17 15:35:35.294099 master-0 kubenswrapper[26425]: I0217 15:35:35.293884 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ada525a-db93-45c8-bd0b-985245018f61" containerName="installer" Feb 17 15:35:35.294894 master-0 kubenswrapper[26425]: I0217 15:35:35.294840 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:35:35.297351 master-0 kubenswrapper[26425]: I0217 15:35:35.297285 26425 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 17 15:35:35.298771 master-0 kubenswrapper[26425]: I0217 15:35:35.298709 26425 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 17 15:35:35.303146 master-0 kubenswrapper[26425]: I0217 15:35:35.303070 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver" containerID="cri-o://e896f0cd2ed3d0e86ed77cce11c1033db71e0bf17ad78817b504fa3ddcb04f6d" gracePeriod=15 Feb 17 15:35:35.303279 master-0 kubenswrapper[26425]: I0217 15:35:35.303154 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b5d5696dd1897e54944d344a621406afc619c583887362dc4a155d478636777f" gracePeriod=15 Feb 17 15:35:35.303536 master-0 kubenswrapper[26425]: I0217 15:35:35.303196 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d43202e9521a0c0d08be81fff34aab56f078dff691bc6f9a47112bcf98619bdb" gracePeriod=15 Feb 17 15:35:35.303536 master-0 kubenswrapper[26425]: I0217 15:35:35.303297 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-syncer" containerID="cri-o://76e0fed49ae43713de3841f5293cc58d4c348cf94b6b8fec8752cf45315d468a" gracePeriod=15 Feb 17 15:35:35.303536 master-0 kubenswrapper[26425]: I0217 15:35:35.303409 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-check-endpoints" containerID="cri-o://980a4781c30d3d1cd924fe8657c728215029ca0b12e5156e2bc1d98aaa22a49b" gracePeriod=15 Feb 17 15:35:35.303754 master-0 kubenswrapper[26425]: E0217 15:35:35.303592 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 15:35:35.303754 master-0 kubenswrapper[26425]: I0217 15:35:35.303626 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 15:35:35.303754 master-0 kubenswrapper[26425]: E0217 15:35:35.303663 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-check-endpoints" Feb 17 15:35:35.303754 master-0 kubenswrapper[26425]: I0217 15:35:35.303677 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-check-endpoints" Feb 17 15:35:35.303754 master-0 kubenswrapper[26425]: E0217 15:35:35.303714 26425 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver"
Feb 17 15:35:35.303754 master-0 kubenswrapper[26425]: I0217 15:35:35.303728 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver"
Feb 17 15:35:35.303754 master-0 kubenswrapper[26425]: E0217 15:35:35.303759 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="setup"
Feb 17 15:35:35.305266 master-0 kubenswrapper[26425]: I0217 15:35:35.303772 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="setup"
Feb 17 15:35:35.305266 master-0 kubenswrapper[26425]: E0217 15:35:35.303798 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-syncer"
Feb 17 15:35:35.305266 master-0 kubenswrapper[26425]: I0217 15:35:35.303811 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-syncer"
Feb 17 15:35:35.305266 master-0 kubenswrapper[26425]: E0217 15:35:35.303839 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-insecure-readyz"
Feb 17 15:35:35.305266 master-0 kubenswrapper[26425]: I0217 15:35:35.303851 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-insecure-readyz"
Feb 17 15:35:35.305266 master-0 kubenswrapper[26425]: I0217 15:35:35.304076 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-check-endpoints"
Feb 17 15:35:35.305266 master-0 kubenswrapper[26425]: I0217 15:35:35.304106 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-insecure-readyz"
Feb 17 15:35:35.305266 master-0 kubenswrapper[26425]: I0217 15:35:35.304150 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-regeneration-controller"
Feb 17 15:35:35.305266 master-0 kubenswrapper[26425]: I0217 15:35:35.304170 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-syncer"
Feb 17 15:35:35.305266 master-0 kubenswrapper[26425]: I0217 15:35:35.304192 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver"
Feb 17 15:35:35.391889 master-0 kubenswrapper[26425]: I0217 15:35:35.391821 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.392047 master-0 kubenswrapper[26425]: I0217 15:35:35.391897 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.392047 master-0 kubenswrapper[26425]: I0217 15:35:35.391927 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:35:35.392047 master-0 kubenswrapper[26425]: I0217 15:35:35.391968 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:35:35.392047 master-0 kubenswrapper[26425]: I0217 15:35:35.391987 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:35:35.392047 master-0 kubenswrapper[26425]: I0217 15:35:35.392013 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.392047 master-0 kubenswrapper[26425]: I0217 15:35:35.392050 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.392566 master-0 kubenswrapper[26425]: I0217 15:35:35.392090 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.493761 master-0 kubenswrapper[26425]: I0217 15:35:35.493629 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.493761 master-0 kubenswrapper[26425]: I0217 15:35:35.493723 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.493761 master-0 kubenswrapper[26425]: I0217 15:35:35.493757 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.495063 master-0 kubenswrapper[26425]: I0217 15:35:35.493855 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.495063 master-0 kubenswrapper[26425]: I0217 15:35:35.493903 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.495063 master-0 kubenswrapper[26425]: I0217 15:35:35.493931 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.495063 master-0 kubenswrapper[26425]: I0217 15:35:35.493925 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.495063 master-0 kubenswrapper[26425]: I0217 15:35:35.493936 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:35:35.495063 master-0 kubenswrapper[26425]: I0217 15:35:35.493972 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.495063 master-0 kubenswrapper[26425]: I0217 15:35:35.494003 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:35:35.495063 master-0 kubenswrapper[26425]: I0217 15:35:35.494049 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:35:35.495063 master-0 kubenswrapper[26425]: I0217 15:35:35.494080 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.495063 master-0 kubenswrapper[26425]: I0217 15:35:35.494163 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:35.495063 master-0 kubenswrapper[26425]: I0217 15:35:35.494191 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:35:35.495063 master-0 kubenswrapper[26425]: I0217 15:35:35.494198 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:35:35.495063 master-0 kubenswrapper[26425]: I0217 15:35:35.494229 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:35:35.606296 master-0 kubenswrapper[26425]: I0217 15:35:35.606104 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_10e298020284b0e8ffa6a0bc184059d9/kube-apiserver-cert-syncer/0.log"
Feb 17 15:35:35.607264 master-0 kubenswrapper[26425]: I0217 15:35:35.607181 26425 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="980a4781c30d3d1cd924fe8657c728215029ca0b12e5156e2bc1d98aaa22a49b" exitCode=0
Feb 17 15:35:35.607264 master-0 kubenswrapper[26425]: I0217 15:35:35.607250 26425 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="b5d5696dd1897e54944d344a621406afc619c583887362dc4a155d478636777f" exitCode=0
Feb 17 15:35:35.607264 master-0 kubenswrapper[26425]: I0217 15:35:35.607265 26425 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="d43202e9521a0c0d08be81fff34aab56f078dff691bc6f9a47112bcf98619bdb" exitCode=0
Feb 17 15:35:35.607538 master-0 kubenswrapper[26425]: I0217 15:35:35.607278 26425 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="76e0fed49ae43713de3841f5293cc58d4c348cf94b6b8fec8752cf45315d468a" exitCode=2
Feb 17 15:35:37.678411 master-0 kubenswrapper[26425]: E0217 15:35:37.678349 26425 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10e298020284b0e8ffa6a0bc184059d9.slice/crio-conmon-e896f0cd2ed3d0e86ed77cce11c1033db71e0bf17ad78817b504fa3ddcb04f6d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10e298020284b0e8ffa6a0bc184059d9.slice/crio-e896f0cd2ed3d0e86ed77cce11c1033db71e0bf17ad78817b504fa3ddcb04f6d.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 15:35:37.790663 master-0 kubenswrapper[26425]: I0217 15:35:37.790610 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_10e298020284b0e8ffa6a0bc184059d9/kube-apiserver-cert-syncer/0.log"
Feb 17 15:35:37.791922 master-0 kubenswrapper[26425]: I0217 15:35:37.791869 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:35:37.793058 master-0 kubenswrapper[26425]: I0217 15:35:37.792979 26425 status_manager.go:851] "Failed to get status for pod" podUID="10e298020284b0e8ffa6a0bc184059d9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 17 15:35:37.947708 master-0 kubenswrapper[26425]: I0217 15:35:37.947521 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"10e298020284b0e8ffa6a0bc184059d9\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") "
Feb 17 15:35:37.948638 master-0 kubenswrapper[26425]: I0217 15:35:37.947727 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "10e298020284b0e8ffa6a0bc184059d9" (UID: "10e298020284b0e8ffa6a0bc184059d9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:35:37.948638 master-0 kubenswrapper[26425]: I0217 15:35:37.947782 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"10e298020284b0e8ffa6a0bc184059d9\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") "
Feb 17 15:35:37.948638 master-0 kubenswrapper[26425]: I0217 15:35:37.947869 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"10e298020284b0e8ffa6a0bc184059d9\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") "
Feb 17 15:35:37.948638 master-0 kubenswrapper[26425]: I0217 15:35:37.947878 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "10e298020284b0e8ffa6a0bc184059d9" (UID: "10e298020284b0e8ffa6a0bc184059d9"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:35:37.948638 master-0 kubenswrapper[26425]: I0217 15:35:37.948054 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "10e298020284b0e8ffa6a0bc184059d9" (UID: "10e298020284b0e8ffa6a0bc184059d9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:35:37.948638 master-0 kubenswrapper[26425]: I0217 15:35:37.948433 26425 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:35:37.948638 master-0 kubenswrapper[26425]: I0217 15:35:37.948519 26425 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:35:37.948638 master-0 kubenswrapper[26425]: I0217 15:35:37.948538 26425 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:35:38.408782 master-0 kubenswrapper[26425]: I0217 15:35:38.406968 26425 status_manager.go:851] "Failed to get status for pod" podUID="10e298020284b0e8ffa6a0bc184059d9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 17 15:35:38.412645 master-0 kubenswrapper[26425]: I0217 15:35:38.412581 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10e298020284b0e8ffa6a0bc184059d9" path="/var/lib/kubelet/pods/10e298020284b0e8ffa6a0bc184059d9/volumes"
Feb 17 15:35:38.647602 master-0 kubenswrapper[26425]: I0217 15:35:38.647556 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_10e298020284b0e8ffa6a0bc184059d9/kube-apiserver-cert-syncer/0.log"
Feb 17 15:35:38.649132 master-0 kubenswrapper[26425]: I0217 15:35:38.649015 26425 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="e896f0cd2ed3d0e86ed77cce11c1033db71e0bf17ad78817b504fa3ddcb04f6d" exitCode=0
Feb 17 15:35:38.649132 master-0 kubenswrapper[26425]: I0217 15:35:38.649130 26425 scope.go:117] "RemoveContainer" containerID="980a4781c30d3d1cd924fe8657c728215029ca0b12e5156e2bc1d98aaa22a49b"
Feb 17 15:35:38.649432 master-0 kubenswrapper[26425]: I0217 15:35:38.649391 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:35:38.651447 master-0 kubenswrapper[26425]: I0217 15:35:38.650668 26425 status_manager.go:851] "Failed to get status for pod" podUID="10e298020284b0e8ffa6a0bc184059d9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 17 15:35:38.657426 master-0 kubenswrapper[26425]: I0217 15:35:38.657354 26425 status_manager.go:851] "Failed to get status for pod" podUID="10e298020284b0e8ffa6a0bc184059d9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 17 15:35:38.684015 master-0 kubenswrapper[26425]: I0217 15:35:38.683978 26425 scope.go:117] "RemoveContainer" containerID="b5d5696dd1897e54944d344a621406afc619c583887362dc4a155d478636777f"
Feb 17 15:35:38.711489 master-0 kubenswrapper[26425]: I0217 15:35:38.711396 26425 scope.go:117] "RemoveContainer" containerID="d43202e9521a0c0d08be81fff34aab56f078dff691bc6f9a47112bcf98619bdb"
Feb 17 15:35:38.739554 master-0 kubenswrapper[26425]: I0217 15:35:38.739377 26425 scope.go:117] "RemoveContainer" containerID="76e0fed49ae43713de3841f5293cc58d4c348cf94b6b8fec8752cf45315d468a"
Feb 17 15:35:38.780327 master-0 kubenswrapper[26425]: I0217 15:35:38.780260 26425 scope.go:117] "RemoveContainer" containerID="e896f0cd2ed3d0e86ed77cce11c1033db71e0bf17ad78817b504fa3ddcb04f6d"
Feb 17 15:35:38.801790 master-0 kubenswrapper[26425]: I0217 15:35:38.801380 26425 scope.go:117] "RemoveContainer" containerID="8e8c3e75d705a4556742a2ccd4c9c153d9b87df6d2bac06c76299dd80b3e66cd"
Feb 17 15:35:38.838106 master-0 kubenswrapper[26425]: I0217 15:35:38.838021 26425 scope.go:117] "RemoveContainer" containerID="980a4781c30d3d1cd924fe8657c728215029ca0b12e5156e2bc1d98aaa22a49b"
Feb 17 15:35:38.838860 master-0 kubenswrapper[26425]: E0217 15:35:38.838798 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"980a4781c30d3d1cd924fe8657c728215029ca0b12e5156e2bc1d98aaa22a49b\": container with ID starting with 980a4781c30d3d1cd924fe8657c728215029ca0b12e5156e2bc1d98aaa22a49b not found: ID does not exist" containerID="980a4781c30d3d1cd924fe8657c728215029ca0b12e5156e2bc1d98aaa22a49b"
Feb 17 15:35:38.838945 master-0 kubenswrapper[26425]: I0217 15:35:38.838875 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"980a4781c30d3d1cd924fe8657c728215029ca0b12e5156e2bc1d98aaa22a49b"} err="failed to get container status \"980a4781c30d3d1cd924fe8657c728215029ca0b12e5156e2bc1d98aaa22a49b\": rpc error: code = NotFound desc = could not find container \"980a4781c30d3d1cd924fe8657c728215029ca0b12e5156e2bc1d98aaa22a49b\": container with ID starting with 980a4781c30d3d1cd924fe8657c728215029ca0b12e5156e2bc1d98aaa22a49b not found: ID does not exist"
Feb 17 15:35:38.838945 master-0 kubenswrapper[26425]: I0217 15:35:38.838919 26425 scope.go:117] "RemoveContainer" containerID="b5d5696dd1897e54944d344a621406afc619c583887362dc4a155d478636777f"
Feb 17 15:35:38.839522 master-0 kubenswrapper[26425]: E0217 15:35:38.839427 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d5696dd1897e54944d344a621406afc619c583887362dc4a155d478636777f\": container with ID starting with b5d5696dd1897e54944d344a621406afc619c583887362dc4a155d478636777f not found: ID does not exist" containerID="b5d5696dd1897e54944d344a621406afc619c583887362dc4a155d478636777f"
Feb 17 15:35:38.839522 master-0 kubenswrapper[26425]: I0217 15:35:38.839499 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d5696dd1897e54944d344a621406afc619c583887362dc4a155d478636777f"} err="failed to get container status \"b5d5696dd1897e54944d344a621406afc619c583887362dc4a155d478636777f\": rpc error: code = NotFound desc = could not find container \"b5d5696dd1897e54944d344a621406afc619c583887362dc4a155d478636777f\": container with ID starting with b5d5696dd1897e54944d344a621406afc619c583887362dc4a155d478636777f not found: ID does not exist"
Feb 17 15:35:38.840003 master-0 kubenswrapper[26425]: I0217 15:35:38.839533 26425 scope.go:117] "RemoveContainer" containerID="d43202e9521a0c0d08be81fff34aab56f078dff691bc6f9a47112bcf98619bdb"
Feb 17 15:35:38.840916 master-0 kubenswrapper[26425]: E0217 15:35:38.840879 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d43202e9521a0c0d08be81fff34aab56f078dff691bc6f9a47112bcf98619bdb\": container with ID starting with d43202e9521a0c0d08be81fff34aab56f078dff691bc6f9a47112bcf98619bdb not found: ID does not exist" containerID="d43202e9521a0c0d08be81fff34aab56f078dff691bc6f9a47112bcf98619bdb"
Feb 17 15:35:38.841097 master-0 kubenswrapper[26425]: I0217 15:35:38.841063 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d43202e9521a0c0d08be81fff34aab56f078dff691bc6f9a47112bcf98619bdb"} err="failed to get container status \"d43202e9521a0c0d08be81fff34aab56f078dff691bc6f9a47112bcf98619bdb\": rpc error: code = NotFound desc = could not find container \"d43202e9521a0c0d08be81fff34aab56f078dff691bc6f9a47112bcf98619bdb\": container with ID starting with d43202e9521a0c0d08be81fff34aab56f078dff691bc6f9a47112bcf98619bdb not found: ID does not exist"
Feb 17 15:35:38.841185 master-0 kubenswrapper[26425]: I0217 15:35:38.841170 26425 scope.go:117] "RemoveContainer" containerID="76e0fed49ae43713de3841f5293cc58d4c348cf94b6b8fec8752cf45315d468a"
Feb 17 15:35:38.841910 master-0 kubenswrapper[26425]: E0217 15:35:38.841859 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76e0fed49ae43713de3841f5293cc58d4c348cf94b6b8fec8752cf45315d468a\": container with ID starting with 76e0fed49ae43713de3841f5293cc58d4c348cf94b6b8fec8752cf45315d468a not found: ID does not exist" containerID="76e0fed49ae43713de3841f5293cc58d4c348cf94b6b8fec8752cf45315d468a"
Feb 17 15:35:38.842079 master-0 kubenswrapper[26425]: I0217 15:35:38.841921 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76e0fed49ae43713de3841f5293cc58d4c348cf94b6b8fec8752cf45315d468a"} err="failed to get container status \"76e0fed49ae43713de3841f5293cc58d4c348cf94b6b8fec8752cf45315d468a\": rpc error: code = NotFound desc = could not find container \"76e0fed49ae43713de3841f5293cc58d4c348cf94b6b8fec8752cf45315d468a\": container with ID starting with 76e0fed49ae43713de3841f5293cc58d4c348cf94b6b8fec8752cf45315d468a not found: ID does not exist"
Feb 17 15:35:38.842079 master-0 kubenswrapper[26425]: I0217 15:35:38.841953 26425 scope.go:117] "RemoveContainer" containerID="e896f0cd2ed3d0e86ed77cce11c1033db71e0bf17ad78817b504fa3ddcb04f6d"
Feb 17 15:35:38.842331 master-0 kubenswrapper[26425]: E0217 15:35:38.842294 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e896f0cd2ed3d0e86ed77cce11c1033db71e0bf17ad78817b504fa3ddcb04f6d\": container with ID starting with e896f0cd2ed3d0e86ed77cce11c1033db71e0bf17ad78817b504fa3ddcb04f6d not found: ID does not exist" containerID="e896f0cd2ed3d0e86ed77cce11c1033db71e0bf17ad78817b504fa3ddcb04f6d"
Feb 17 15:35:38.842421 master-0 kubenswrapper[26425]: I0217 15:35:38.842339 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e896f0cd2ed3d0e86ed77cce11c1033db71e0bf17ad78817b504fa3ddcb04f6d"} err="failed to get container status \"e896f0cd2ed3d0e86ed77cce11c1033db71e0bf17ad78817b504fa3ddcb04f6d\": rpc error: code = NotFound desc = could not find container \"e896f0cd2ed3d0e86ed77cce11c1033db71e0bf17ad78817b504fa3ddcb04f6d\": container with ID starting with e896f0cd2ed3d0e86ed77cce11c1033db71e0bf17ad78817b504fa3ddcb04f6d not found: ID does not exist"
Feb 17 15:35:38.842421 master-0 kubenswrapper[26425]: I0217 15:35:38.842372 26425 scope.go:117] "RemoveContainer" containerID="8e8c3e75d705a4556742a2ccd4c9c153d9b87df6d2bac06c76299dd80b3e66cd"
Feb 17 15:35:38.842722 master-0 kubenswrapper[26425]: E0217 15:35:38.842697 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e8c3e75d705a4556742a2ccd4c9c153d9b87df6d2bac06c76299dd80b3e66cd\": container with ID starting with 8e8c3e75d705a4556742a2ccd4c9c153d9b87df6d2bac06c76299dd80b3e66cd not found: ID does not exist" containerID="8e8c3e75d705a4556742a2ccd4c9c153d9b87df6d2bac06c76299dd80b3e66cd"
Feb 17 15:35:38.842722 master-0 kubenswrapper[26425]: I0217 15:35:38.842720 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8c3e75d705a4556742a2ccd4c9c153d9b87df6d2bac06c76299dd80b3e66cd"} err="failed to get container status \"8e8c3e75d705a4556742a2ccd4c9c153d9b87df6d2bac06c76299dd80b3e66cd\": rpc error: code = NotFound desc = could not find container \"8e8c3e75d705a4556742a2ccd4c9c153d9b87df6d2bac06c76299dd80b3e66cd\": container with ID starting with 8e8c3e75d705a4556742a2ccd4c9c153d9b87df6d2bac06c76299dd80b3e66cd not found: ID does not exist"
Feb 17 15:35:40.306253 master-0 kubenswrapper[26425]: I0217 15:35:40.306196 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body=
Feb 17 15:35:40.307136 master-0 kubenswrapper[26425]: I0217 15:35:40.306272 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused"
Feb 17 15:35:40.396980 master-0 kubenswrapper[26425]: E0217 15:35:40.396834 26425 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:40.397721 master-0 kubenswrapper[26425]: I0217 15:35:40.397657 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:40.438864 master-0 kubenswrapper[26425]: W0217 15:35:40.438787 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9a2b3a37af32e5d570b82bfd956f250.slice/crio-8b519e0063ac7383efbea75ca83a56453c2c00c70dd69209da641ec9eb19702b WatchSource:0}: Error finding container 8b519e0063ac7383efbea75ca83a56453c2c00c70dd69209da641ec9eb19702b: Status 404 returned error can't find the container with id 8b519e0063ac7383efbea75ca83a56453c2c00c70dd69209da641ec9eb19702b
Feb 17 15:35:40.443889 master-0 kubenswrapper[26425]: E0217 15:35:40.443666 26425 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189512a7d292c6f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:a9a2b3a37af32e5d570b82bfd956f250,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:35:40.44183116 +0000 UTC m=+1202.333555008,LastTimestamp:2026-02-17 15:35:40.44183116 +0000 UTC m=+1202.333555008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 17 15:35:40.481359 master-0 kubenswrapper[26425]: I0217 15:35:40.481282 26425 scope.go:117] "RemoveContainer" containerID="921f7978b36344d181f60d972f8df809901542b7b9ed6db91856803fe316a449"
Feb 17 15:35:40.647566 master-0 kubenswrapper[26425]: I0217 15:35:40.647537 26425 scope.go:117] "RemoveContainer" containerID="ae582cbd98ce8c9218d682341ba37ebf3194e1792a8c40deb902fb2cc032961b"
Feb 17 15:35:40.670273 master-0 kubenswrapper[26425]: I0217 15:35:40.670226 26425 scope.go:117] "RemoveContainer" containerID="091e8f02d5aa015a7796a6787006d66729863d826124745811b4e05f467eb821"
Feb 17 15:35:40.672895 master-0 kubenswrapper[26425]: I0217 15:35:40.672865 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_952766c3a88fd12345a552f1277199f9/kube-scheduler-cert-syncer/0.log"
Feb 17 15:35:40.675903 master-0 kubenswrapper[26425]: I0217 15:35:40.675871 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a9a2b3a37af32e5d570b82bfd956f250","Type":"ContainerStarted","Data":"8b519e0063ac7383efbea75ca83a56453c2c00c70dd69209da641ec9eb19702b"}
Feb 17 15:35:40.677587 master-0 kubenswrapper[26425]: I0217 15:35:40.677554 26425 generic.go:334] "Generic (PLEG): container finished" podID="a34b86e7-e7af-492c-86d6-95fc9155d958" containerID="317109f7b69d5435c410ad9bff4b0cfd044f78c87fa10d0cd8df62649fb6d9f4" exitCode=0
Feb 17 15:35:40.677587 master-0 kubenswrapper[26425]: I0217 15:35:40.677582 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"a34b86e7-e7af-492c-86d6-95fc9155d958","Type":"ContainerDied","Data":"317109f7b69d5435c410ad9bff4b0cfd044f78c87fa10d0cd8df62649fb6d9f4"}
Feb 17 15:35:40.678494 master-0 kubenswrapper[26425]: I0217 15:35:40.678424 26425 status_manager.go:851] "Failed to get status for pod" podUID="a34b86e7-e7af-492c-86d6-95fc9155d958" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 17 15:35:40.699885 master-0 kubenswrapper[26425]: I0217 15:35:40.699844 26425 scope.go:117] "RemoveContainer" containerID="f916d77fcaa30da997b385ef7ac42b673154c0b050a34bbee0b669498d494e0d"
Feb 17 15:35:41.484350 master-0 kubenswrapper[26425]: I0217 15:35:41.484270 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 17 15:35:41.487024 master-0 kubenswrapper[26425]: I0217 15:35:41.486482 26425 status_manager.go:851] "Failed to get status for pod" podUID="18675e97311741112924c894ff03f2b2" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 17 15:35:41.488091 master-0 kubenswrapper[26425]: I0217 15:35:41.488012 26425 status_manager.go:851] "Failed to get status for pod" podUID="a34b86e7-e7af-492c-86d6-95fc9155d958" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 17 15:35:41.691866 master-0 kubenswrapper[26425]: I0217 15:35:41.691757 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a9a2b3a37af32e5d570b82bfd956f250","Type":"ContainerStarted","Data":"57f48d420864783db4edfc9ba02b2310d3831fce9444e0d9d3ef25b5546d0f41"}
Feb 17 15:35:41.693565 master-0 kubenswrapper[26425]: I0217 15:35:41.693509 26425 status_manager.go:851] "Failed to get status for pod" podUID="18675e97311741112924c894ff03f2b2" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 17 15:35:41.693664 master-0 kubenswrapper[26425]: E0217 15:35:41.693587 26425 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 17 15:35:41.694519 master-0 kubenswrapper[26425]: I0217 15:35:41.694367 26425 status_manager.go:851] "Failed to get status for pod" podUID="a34b86e7-e7af-492c-86d6-95fc9155d958" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 17 15:35:42.167005 master-0 kubenswrapper[26425]: I0217 15:35:42.166970 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Feb 17 15:35:42.168245 master-0 kubenswrapper[26425]: I0217 15:35:42.168209 26425 status_manager.go:851] "Failed to get status for pod" podUID="18675e97311741112924c894ff03f2b2" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 17 15:35:42.169577 master-0 kubenswrapper[26425]: I0217 15:35:42.169428 26425 status_manager.go:851] "Failed to get status for pod" podUID="a34b86e7-e7af-492c-86d6-95fc9155d958" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 17 15:35:42.243619 master-0 kubenswrapper[26425]: I0217 15:35:42.243551 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a34b86e7-e7af-492c-86d6-95fc9155d958-kube-api-access\") pod \"a34b86e7-e7af-492c-86d6-95fc9155d958\" (UID: \"a34b86e7-e7af-492c-86d6-95fc9155d958\") "
Feb 17 15:35:42.243619 master-0 kubenswrapper[26425]: I0217 15:35:42.243628 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a34b86e7-e7af-492c-86d6-95fc9155d958-kubelet-dir\") pod \"a34b86e7-e7af-492c-86d6-95fc9155d958\" (UID: \"a34b86e7-e7af-492c-86d6-95fc9155d958\") "
Feb 17 15:35:42.243891 master-0 kubenswrapper[26425]: I0217 15:35:42.243757 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a34b86e7-e7af-492c-86d6-95fc9155d958-var-lock\") pod \"a34b86e7-e7af-492c-86d6-95fc9155d958\" (UID:
\"a34b86e7-e7af-492c-86d6-95fc9155d958\") " Feb 17 15:35:42.243891 master-0 kubenswrapper[26425]: I0217 15:35:42.243840 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a34b86e7-e7af-492c-86d6-95fc9155d958-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a34b86e7-e7af-492c-86d6-95fc9155d958" (UID: "a34b86e7-e7af-492c-86d6-95fc9155d958"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:35:42.244071 master-0 kubenswrapper[26425]: I0217 15:35:42.243993 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a34b86e7-e7af-492c-86d6-95fc9155d958-var-lock" (OuterVolumeSpecName: "var-lock") pod "a34b86e7-e7af-492c-86d6-95fc9155d958" (UID: "a34b86e7-e7af-492c-86d6-95fc9155d958"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:35:42.244508 master-0 kubenswrapper[26425]: I0217 15:35:42.244477 26425 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a34b86e7-e7af-492c-86d6-95fc9155d958-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:35:42.244595 master-0 kubenswrapper[26425]: I0217 15:35:42.244512 26425 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a34b86e7-e7af-492c-86d6-95fc9155d958-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:35:42.251047 master-0 kubenswrapper[26425]: I0217 15:35:42.250989 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a34b86e7-e7af-492c-86d6-95fc9155d958-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a34b86e7-e7af-492c-86d6-95fc9155d958" (UID: "a34b86e7-e7af-492c-86d6-95fc9155d958"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:35:42.346835 master-0 kubenswrapper[26425]: I0217 15:35:42.346698 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a34b86e7-e7af-492c-86d6-95fc9155d958-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 17 15:35:42.705504 master-0 kubenswrapper[26425]: I0217 15:35:42.705358 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"a34b86e7-e7af-492c-86d6-95fc9155d958","Type":"ContainerDied","Data":"1312f80b907b6a6578225b78957503b5e0d262b74c08ff0c26d3c261eb860767"} Feb 17 15:35:42.705504 master-0 kubenswrapper[26425]: I0217 15:35:42.705447 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1312f80b907b6a6578225b78957503b5e0d262b74c08ff0c26d3c261eb860767" Feb 17 15:35:42.706591 master-0 kubenswrapper[26425]: I0217 15:35:42.705695 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Feb 17 15:35:42.708345 master-0 kubenswrapper[26425]: E0217 15:35:42.708165 26425 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:35:42.713972 master-0 kubenswrapper[26425]: I0217 15:35:42.713896 26425 status_manager.go:851] "Failed to get status for pod" podUID="18675e97311741112924c894ff03f2b2" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:42.715220 master-0 kubenswrapper[26425]: I0217 15:35:42.714872 26425 status_manager.go:851] "Failed to get status for pod" podUID="a34b86e7-e7af-492c-86d6-95fc9155d958" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:44.423682 master-0 kubenswrapper[26425]: I0217 15:35:44.423592 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Feb 17 15:35:44.424545 master-0 kubenswrapper[26425]: I0217 15:35:44.423692 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: 
connection refused" Feb 17 15:35:44.515317 master-0 kubenswrapper[26425]: E0217 15:35:44.515240 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:44.516343 master-0 kubenswrapper[26425]: E0217 15:35:44.516273 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:44.518798 master-0 kubenswrapper[26425]: E0217 15:35:44.518505 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:44.520159 master-0 kubenswrapper[26425]: E0217 15:35:44.520091 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:44.521038 master-0 kubenswrapper[26425]: E0217 15:35:44.520968 26425 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:44.521189 master-0 kubenswrapper[26425]: I0217 15:35:44.521026 26425 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 17 15:35:44.522560 master-0 kubenswrapper[26425]: E0217 15:35:44.522500 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 17 15:35:44.723873 master-0 kubenswrapper[26425]: E0217 15:35:44.723693 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 17 15:35:45.124664 master-0 kubenswrapper[26425]: E0217 15:35:45.124607 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 17 15:35:45.926688 master-0 kubenswrapper[26425]: E0217 15:35:45.926595 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 17 15:35:45.995800 master-0 kubenswrapper[26425]: E0217 15:35:45.995572 26425 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189512a7d292c6f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:a9a2b3a37af32e5d570b82bfd956f250,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-17 15:35:40.44183116 +0000 UTC m=+1202.333555008,LastTimestamp:2026-02-17 15:35:40.44183116 +0000 UTC m=+1202.333555008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 17 15:35:47.528603 master-0 kubenswrapper[26425]: E0217 15:35:47.528414 26425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 17 15:35:48.395422 master-0 kubenswrapper[26425]: I0217 15:35:48.395334 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:35:48.405562 master-0 kubenswrapper[26425]: I0217 15:35:48.405440 26425 status_manager.go:851] "Failed to get status for pod" podUID="18675e97311741112924c894ff03f2b2" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:48.406575 master-0 kubenswrapper[26425]: I0217 15:35:48.406496 26425 status_manager.go:851] "Failed to get status for pod" podUID="a34b86e7-e7af-492c-86d6-95fc9155d958" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:48.407594 master-0 kubenswrapper[26425]: I0217 15:35:48.407526 26425 status_manager.go:851] "Failed to get status for pod" podUID="18675e97311741112924c894ff03f2b2" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:48.410162 master-0 kubenswrapper[26425]: I0217 15:35:48.410088 26425 status_manager.go:851] "Failed to get status for pod" podUID="a34b86e7-e7af-492c-86d6-95fc9155d958" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:48.436938 master-0 kubenswrapper[26425]: I0217 15:35:48.436863 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
podUID="bda1ba3d-2d22-4649-960d-cedcfe10f75c" Feb 17 15:35:48.436938 master-0 kubenswrapper[26425]: I0217 15:35:48.436923 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bda1ba3d-2d22-4649-960d-cedcfe10f75c" Feb 17 15:35:48.438243 master-0 kubenswrapper[26425]: E0217 15:35:48.438151 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:35:48.439122 master-0 kubenswrapper[26425]: I0217 15:35:48.439068 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:35:48.478067 master-0 kubenswrapper[26425]: W0217 15:35:48.477991 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafa8ee25cec0b37c40dad37c52b89d42.slice/crio-2b40c6438ab1fd9947c5b7099dec81d47823e31ccdb7d3c6522c904177c07b19 WatchSource:0}: Error finding container 2b40c6438ab1fd9947c5b7099dec81d47823e31ccdb7d3c6522c904177c07b19: Status 404 returned error can't find the container with id 2b40c6438ab1fd9947c5b7099dec81d47823e31ccdb7d3c6522c904177c07b19 Feb 17 15:35:48.769631 master-0 kubenswrapper[26425]: I0217 15:35:48.769550 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"2b40c6438ab1fd9947c5b7099dec81d47823e31ccdb7d3c6522c904177c07b19"} Feb 17 15:35:48.775432 master-0 kubenswrapper[26425]: I0217 15:35:48.775390 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_eaff449c5bcc0e8cb13ed26ccbcdd311/kube-controller-manager/0.log" Feb 17 15:35:48.775568 master-0 kubenswrapper[26425]: I0217 15:35:48.775529 26425 generic.go:334] "Generic (PLEG): container finished" podID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerID="57141bfc1a0a1d8e52afad3e9b378c7a4dd9c37db878ece93dd489f7a847dcce" exitCode=1 Feb 17 15:35:48.775646 master-0 kubenswrapper[26425]: I0217 15:35:48.775590 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"eaff449c5bcc0e8cb13ed26ccbcdd311","Type":"ContainerDied","Data":"57141bfc1a0a1d8e52afad3e9b378c7a4dd9c37db878ece93dd489f7a847dcce"} Feb 17 15:35:48.776743 master-0 kubenswrapper[26425]: I0217 15:35:48.776672 26425 scope.go:117] "RemoveContainer" containerID="57141bfc1a0a1d8e52afad3e9b378c7a4dd9c37db878ece93dd489f7a847dcce" Feb 17 15:35:48.777658 master-0 kubenswrapper[26425]: I0217 15:35:48.777557 26425 status_manager.go:851] "Failed to get status for pod" podUID="a34b86e7-e7af-492c-86d6-95fc9155d958" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:48.778653 master-0 kubenswrapper[26425]: I0217 15:35:48.778617 26425 status_manager.go:851] "Failed to get status for pod" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:48.779358 master-0 kubenswrapper[26425]: I0217 15:35:48.779315 26425 status_manager.go:851] "Failed to get status for pod" 
podUID="18675e97311741112924c894ff03f2b2" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:49.797423 master-0 kubenswrapper[26425]: I0217 15:35:49.797363 26425 generic.go:334] "Generic (PLEG): container finished" podID="afa8ee25cec0b37c40dad37c52b89d42" containerID="2f1305736afd649ecf38f8faf68974e326218808281f3a883d59bf7137ac8445" exitCode=0 Feb 17 15:35:49.798347 master-0 kubenswrapper[26425]: I0217 15:35:49.797485 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerDied","Data":"2f1305736afd649ecf38f8faf68974e326218808281f3a883d59bf7137ac8445"} Feb 17 15:35:49.798608 master-0 kubenswrapper[26425]: I0217 15:35:49.797931 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bda1ba3d-2d22-4649-960d-cedcfe10f75c" Feb 17 15:35:49.798882 master-0 kubenswrapper[26425]: I0217 15:35:49.798822 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bda1ba3d-2d22-4649-960d-cedcfe10f75c" Feb 17 15:35:49.799265 master-0 kubenswrapper[26425]: I0217 15:35:49.799200 26425 status_manager.go:851] "Failed to get status for pod" podUID="a34b86e7-e7af-492c-86d6-95fc9155d958" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:49.800867 master-0 kubenswrapper[26425]: I0217 15:35:49.800733 26425 status_manager.go:851] "Failed to get status for pod" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:49.801021 master-0 kubenswrapper[26425]: E0217 15:35:49.800900 26425 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:35:49.803938 master-0 kubenswrapper[26425]: I0217 15:35:49.801851 26425 status_manager.go:851] "Failed to get status for pod" podUID="18675e97311741112924c894ff03f2b2" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:49.803938 master-0 kubenswrapper[26425]: I0217 15:35:49.803856 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_eaff449c5bcc0e8cb13ed26ccbcdd311/kube-controller-manager/0.log" Feb 17 15:35:49.804132 master-0 kubenswrapper[26425]: I0217 15:35:49.803965 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"eaff449c5bcc0e8cb13ed26ccbcdd311","Type":"ContainerStarted","Data":"a1bf1a7e1900bf2718fe7ec35df9cdfd995d49924e5c050fc18a197ec60d89c3"} Feb 17 15:35:49.811039 master-0 kubenswrapper[26425]: I0217 15:35:49.806608 26425 status_manager.go:851] "Failed to get status for pod" podUID="18675e97311741112924c894ff03f2b2" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:49.811832 master-0 kubenswrapper[26425]: I0217 15:35:49.811353 26425 status_manager.go:851] "Failed to get status for pod" podUID="a34b86e7-e7af-492c-86d6-95fc9155d958" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:49.812641 master-0 kubenswrapper[26425]: I0217 15:35:49.812153 26425 status_manager.go:851] "Failed to get status for pod" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 17 15:35:50.306747 master-0 kubenswrapper[26425]: I0217 15:35:50.306650 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body= Feb 17 15:35:50.306747 master-0 kubenswrapper[26425]: I0217 15:35:50.306733 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" Feb 17 15:35:50.821967 master-0 kubenswrapper[26425]: I0217 15:35:50.821792 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"8bbd0364541c663a4c991819796fb5ccb7471a8cda82a2def32fcf4eb79b2000"} Feb 17 15:35:50.825737 master-0 kubenswrapper[26425]: I0217 15:35:50.821983 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"9fd4f5aeb95d7b0cf8a4fa834c09a895d25b25f4607df9ac3657aaf5ef7d3948"} Feb 17 15:35:51.846887 master-0 kubenswrapper[26425]: I0217 15:35:51.846734 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"d94ae51c35f3aed23ec2befa10c4ac63f65914686d84cdf658672b5ce5ae3ea2"} Feb 17 15:35:51.846887 master-0 kubenswrapper[26425]: I0217 15:35:51.846790 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"294c39bec1605154bf01c0f2925f61424d6dcac44c68acaaa74a5192b7d350fd"} Feb 17 15:35:51.846887 master-0 kubenswrapper[26425]: I0217 15:35:51.846803 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"66ded4e26390ac82141a0fa3ad9085b4adaea77a77b02dc91086e22efb31b284"} Feb 17 15:35:51.847959 master-0 kubenswrapper[26425]: I0217 15:35:51.847860 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:35:51.847959 master-0 kubenswrapper[26425]: I0217 15:35:51.847892 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bda1ba3d-2d22-4649-960d-cedcfe10f75c" Feb 17 15:35:51.847959 master-0 kubenswrapper[26425]: I0217 15:35:51.847926 26425 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bda1ba3d-2d22-4649-960d-cedcfe10f75c" Feb 17 15:35:53.439320 master-0 kubenswrapper[26425]: I0217 15:35:53.439225 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:35:53.440380 master-0 kubenswrapper[26425]: I0217 15:35:53.439848 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:35:53.450280 master-0 kubenswrapper[26425]: I0217 15:35:53.450224 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:35:54.423406 master-0 kubenswrapper[26425]: I0217 15:35:54.423298 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Feb 17 15:35:54.423756 master-0 kubenswrapper[26425]: I0217 15:35:54.423407 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Feb 17 15:35:57.009145 master-0 kubenswrapper[26425]: I0217 15:35:57.008525 26425 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:35:57.115198 master-0 kubenswrapper[26425]: I0217 15:35:57.114825 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="afa8ee25cec0b37c40dad37c52b89d42" podUID="2925a4f9-0ce1-4c55-b6b4-bf396e340c6a" Feb 17 
15:35:57.477492 master-0 kubenswrapper[26425]: I0217 15:35:57.477393 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:35:57.480637 master-0 kubenswrapper[26425]: I0217 15:35:57.478135 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:35:57.489485 master-0 kubenswrapper[26425]: I0217 15:35:57.488567 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:35:57.908837 master-0 kubenswrapper[26425]: I0217 15:35:57.908743 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bda1ba3d-2d22-4649-960d-cedcfe10f75c"
Feb 17 15:35:57.910313 master-0 kubenswrapper[26425]: I0217 15:35:57.910258 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bda1ba3d-2d22-4649-960d-cedcfe10f75c"
Feb 17 15:35:57.918543 master-0 kubenswrapper[26425]: I0217 15:35:57.918501 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 17 15:35:58.443657 master-0 kubenswrapper[26425]: I0217 15:35:58.443576 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="afa8ee25cec0b37c40dad37c52b89d42" podUID="2925a4f9-0ce1-4c55-b6b4-bf396e340c6a"
Feb 17 15:35:58.916504 master-0 kubenswrapper[26425]: I0217 15:35:58.916381 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bda1ba3d-2d22-4649-960d-cedcfe10f75c"
Feb 17 15:35:58.916504 master-0 kubenswrapper[26425]: I0217 15:35:58.916431 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bda1ba3d-2d22-4649-960d-cedcfe10f75c"
Feb 17 15:35:58.920922 master-0 kubenswrapper[26425]: I0217 15:35:58.920819 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="afa8ee25cec0b37c40dad37c52b89d42" podUID="2925a4f9-0ce1-4c55-b6b4-bf396e340c6a"
Feb 17 15:36:00.305998 master-0 kubenswrapper[26425]: I0217 15:36:00.305902 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body=
Feb 17 15:36:00.305998 master-0 kubenswrapper[26425]: I0217 15:36:00.305998 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused"
Feb 17 15:36:04.424365 master-0 kubenswrapper[26425]: I0217 15:36:04.424260 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body=
Feb 17 15:36:04.424365 master-0 kubenswrapper[26425]: I0217 15:36:04.424346 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused"
Feb 17 15:36:06.571381 master-0 kubenswrapper[26425]: I0217 15:36:06.571297 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 17 15:36:06.820080 master-0 kubenswrapper[26425]: I0217 15:36:06.820015 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 17 15:36:07.246136 master-0 kubenswrapper[26425]: I0217 15:36:07.246033 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 17 15:36:07.304394 master-0 kubenswrapper[26425]: I0217 15:36:07.304296 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 17 15:36:07.350734 master-0 kubenswrapper[26425]: I0217 15:36:07.350678 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 17 15:36:07.388335 master-0 kubenswrapper[26425]: I0217 15:36:07.388250 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 17 15:36:07.449805 master-0 kubenswrapper[26425]: I0217 15:36:07.449719 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 17 15:36:07.483917 master-0 kubenswrapper[26425]: I0217 15:36:07.483867 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:36:07.491227 master-0 kubenswrapper[26425]: I0217 15:36:07.491165 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 17 15:36:07.723955 master-0 kubenswrapper[26425]: I0217 15:36:07.723874 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 15:36:07.806560 master-0 kubenswrapper[26425]: I0217 15:36:07.806237 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-eu11557dmf9qt"
Feb 17 15:36:07.854209 master-0 kubenswrapper[26425]: I0217 15:36:07.854140 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 17 15:36:07.998097 master-0 kubenswrapper[26425]: I0217 15:36:07.997920 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 17 15:36:08.014650 master-0 kubenswrapper[26425]: I0217 15:36:08.014556 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 17 15:36:08.027030 master-0 kubenswrapper[26425]: I0217 15:36:08.026969 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-7hvks"
Feb 17 15:36:08.111644 master-0 kubenswrapper[26425]: I0217 15:36:08.111564 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 17 15:36:08.112055 master-0 kubenswrapper[26425]: I0217 15:36:08.111565 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 17 15:36:08.355575 master-0 kubenswrapper[26425]: I0217 15:36:08.355392 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 17 15:36:08.406856 master-0 kubenswrapper[26425]: I0217 15:36:08.406793 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 17 15:36:08.600198 master-0 kubenswrapper[26425]: I0217 15:36:08.600070 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 17 15:36:08.667387 master-0 kubenswrapper[26425]: I0217 15:36:08.667337 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 17 15:36:08.691517 master-0 kubenswrapper[26425]: I0217 15:36:08.691433 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 17 15:36:08.728835 master-0 kubenswrapper[26425]: I0217 15:36:08.728759 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 17 15:36:08.862296 master-0 kubenswrapper[26425]: I0217 15:36:08.862225 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 17 15:36:08.888003 master-0 kubenswrapper[26425]: I0217 15:36:08.887924 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-qjmzn"
Feb 17 15:36:08.908922 master-0 kubenswrapper[26425]: I0217 15:36:08.908847 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 17 15:36:08.930647 master-0 kubenswrapper[26425]: I0217 15:36:08.930518 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 17 15:36:08.991783 master-0 kubenswrapper[26425]: I0217 15:36:08.991726 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-dz667"
Feb 17 15:36:09.050566 master-0 kubenswrapper[26425]: I0217 15:36:09.050501 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:36:09.102548 master-0 kubenswrapper[26425]: I0217 15:36:09.096679 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-4cctd"
Feb 17 15:36:09.121996 master-0 kubenswrapper[26425]: I0217 15:36:09.121933 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 17 15:36:09.333537 master-0 kubenswrapper[26425]: I0217 15:36:09.333353 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 17 15:36:09.376481 master-0 kubenswrapper[26425]: I0217 15:36:09.376384 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 17 15:36:09.432272 master-0 kubenswrapper[26425]: I0217 15:36:09.432176 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 17 15:36:09.436631 master-0 kubenswrapper[26425]: I0217 15:36:09.436569 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 17 15:36:09.442221 master-0 kubenswrapper[26425]: I0217 15:36:09.442161 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 17 15:36:09.445066 master-0 kubenswrapper[26425]: I0217 15:36:09.445000 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Feb 17 15:36:09.662409 master-0 kubenswrapper[26425]: I0217 15:36:09.662327 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 17 15:36:09.736003 master-0 kubenswrapper[26425]: I0217 15:36:09.735920 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 17 15:36:09.758157 master-0 kubenswrapper[26425]: I0217 15:36:09.758083 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 17 15:36:09.885815 master-0 kubenswrapper[26425]: I0217 15:36:09.885726 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7f2w9"
Feb 17 15:36:09.946402 master-0 kubenswrapper[26425]: I0217 15:36:09.946206 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 17 15:36:10.005863 master-0 kubenswrapper[26425]: I0217 15:36:10.005781 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 17 15:36:10.066548 master-0 kubenswrapper[26425]: I0217 15:36:10.066431 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 17 15:36:10.096019 master-0 kubenswrapper[26425]: I0217 15:36:10.095944 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 17 15:36:10.162998 master-0 kubenswrapper[26425]: I0217 15:36:10.162917 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 17 15:36:10.262384 master-0 kubenswrapper[26425]: I0217 15:36:10.261674 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-nsg9z"
Feb 17 15:36:10.305861 master-0 kubenswrapper[26425]: I0217 15:36:10.305781 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body=
Feb 17 15:36:10.305861 master-0 kubenswrapper[26425]: I0217 15:36:10.305851 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused"
Feb 17 15:36:10.427126 master-0 kubenswrapper[26425]: I0217 15:36:10.427032 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 17 15:36:10.602010 master-0 kubenswrapper[26425]: I0217 15:36:10.601847 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 17 15:36:10.655443 master-0 kubenswrapper[26425]: I0217 15:36:10.655349 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 17 15:36:10.735878 master-0 kubenswrapper[26425]: I0217 15:36:10.735791 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 17 15:36:10.759701 master-0 kubenswrapper[26425]: I0217 15:36:10.759578 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 17 15:36:10.843535 master-0 kubenswrapper[26425]: I0217 15:36:10.843444 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:36:10.844065 master-0 kubenswrapper[26425]: E0217 15:36:10.843692 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:36:10.844290 master-0 kubenswrapper[26425]: E0217 15:36:10.844254 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:36:10.844615 master-0 kubenswrapper[26425]: E0217 15:36:10.844578 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:38:12.844541198 +0000 UTC m=+1354.736265046 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:36:10.896944 master-0 kubenswrapper[26425]: I0217 15:36:10.896846 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Feb 17 15:36:10.976426 master-0 kubenswrapper[26425]: I0217 15:36:10.976298 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 17 15:36:10.980850 master-0 kubenswrapper[26425]: I0217 15:36:10.980750 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 17 15:36:11.062806 master-0 kubenswrapper[26425]: I0217 15:36:11.062735 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 17 15:36:11.065334 master-0 kubenswrapper[26425]: I0217 15:36:11.065297 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 17 15:36:11.084957 master-0 kubenswrapper[26425]: I0217 15:36:11.084897 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 17 15:36:11.129700 master-0 kubenswrapper[26425]: I0217 15:36:11.129590 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-v49sf"
Feb 17 15:36:11.161756 master-0 kubenswrapper[26425]: I0217 15:36:11.161623 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:36:11.183168 master-0 kubenswrapper[26425]: I0217 15:36:11.183119 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-bw92c"
Feb 17 15:36:11.195651 master-0 kubenswrapper[26425]: I0217 15:36:11.195579 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 17 15:36:11.222798 master-0 kubenswrapper[26425]: I0217 15:36:11.222703 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Feb 17 15:36:11.389830 master-0 kubenswrapper[26425]: I0217 15:36:11.389755 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 17 15:36:11.391347 master-0 kubenswrapper[26425]: I0217 15:36:11.391280 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 17 15:36:11.400867 master-0 kubenswrapper[26425]: I0217 15:36:11.400827 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 17 15:36:11.532763 master-0 kubenswrapper[26425]: I0217 15:36:11.532627 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-8l4dg"
Feb 17 15:36:11.537327 master-0 kubenswrapper[26425]: I0217 15:36:11.537267 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 17 15:36:11.547270 master-0 kubenswrapper[26425]: I0217 15:36:11.547225 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Feb 17 15:36:11.604006 master-0 kubenswrapper[26425]: I0217 15:36:11.603960 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 17 15:36:11.604628 master-0 kubenswrapper[26425]: I0217 15:36:11.604538 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 17 15:36:11.634740 master-0 kubenswrapper[26425]: I0217 15:36:11.634677 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 17 15:36:11.650072 master-0 kubenswrapper[26425]: I0217 15:36:11.649995 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 17 15:36:11.656056 master-0 kubenswrapper[26425]: I0217 15:36:11.655993 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 17 15:36:11.658261 master-0 kubenswrapper[26425]: I0217 15:36:11.658207 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 17 15:36:11.707522 master-0 kubenswrapper[26425]: I0217 15:36:11.707479 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-jd7jr"
Feb 17 15:36:11.730857 master-0 kubenswrapper[26425]: I0217 15:36:11.730830 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-r65rc"
Feb 17 15:36:11.765714 master-0 kubenswrapper[26425]: I0217 15:36:11.765685 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 17 15:36:11.775579 master-0 kubenswrapper[26425]: I0217 15:36:11.775436 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 17 15:36:11.820930 master-0 kubenswrapper[26425]: I0217 15:36:11.820535 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 17 15:36:12.025038 master-0 kubenswrapper[26425]: I0217 15:36:12.024969 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 17 15:36:12.041684 master-0 kubenswrapper[26425]: I0217 15:36:12.041592 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 15:36:12.105704 master-0 kubenswrapper[26425]: I0217 15:36:12.105443 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 17 15:36:12.128980 master-0 kubenswrapper[26425]: I0217 15:36:12.128887 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 17 15:36:12.243198 master-0 kubenswrapper[26425]: I0217 15:36:12.243112 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-tkkqz"
Feb 17 15:36:12.308975 master-0 kubenswrapper[26425]: I0217 15:36:12.308836 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 17 15:36:12.465739 master-0 kubenswrapper[26425]: I0217 15:36:12.463524 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 17 15:36:12.465739 master-0 kubenswrapper[26425]: I0217 15:36:12.463998 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 17 15:36:12.466113 master-0 kubenswrapper[26425]: I0217 15:36:12.465509 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 17 15:36:12.475754 master-0 kubenswrapper[26425]: I0217 15:36:12.475678 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 17 15:36:12.526416 master-0 kubenswrapper[26425]: I0217 15:36:12.526316 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-gbdz4"
Feb 17 15:36:12.599387 master-0 kubenswrapper[26425]: I0217 15:36:12.599300 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 15:36:12.613309 master-0 kubenswrapper[26425]: I0217 15:36:12.613205 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 17 15:36:12.620659 master-0 kubenswrapper[26425]: I0217 15:36:12.620605 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 17 15:36:12.663603 master-0 kubenswrapper[26425]: I0217 15:36:12.663522 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 17 15:36:12.714910 master-0 kubenswrapper[26425]: I0217 15:36:12.714794 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 15:36:12.721536 master-0 kubenswrapper[26425]: I0217 15:36:12.721385 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 17 15:36:12.763756 master-0 kubenswrapper[26425]: I0217 15:36:12.763669 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 17 15:36:12.782809 master-0 kubenswrapper[26425]: I0217 15:36:12.782712 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 17 15:36:12.790865 master-0 kubenswrapper[26425]: I0217 15:36:12.790815 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 17 15:36:12.839803 master-0 kubenswrapper[26425]: I0217 15:36:12.839751 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 15:36:12.879001 master-0 kubenswrapper[26425]: I0217 15:36:12.878946 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 17 15:36:12.882325 master-0 kubenswrapper[26425]: I0217 15:36:12.882287 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 17 15:36:12.901470 master-0 kubenswrapper[26425]: I0217 15:36:12.900502 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 17 15:36:12.918633 master-0 kubenswrapper[26425]: I0217 15:36:12.918568 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 17 15:36:12.922347 master-0 kubenswrapper[26425]: I0217 15:36:12.922276 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 17 15:36:13.034614 master-0 kubenswrapper[26425]: I0217 15:36:13.033065 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 17 15:36:13.070077 master-0 kubenswrapper[26425]: I0217 15:36:13.070017 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 17 15:36:13.072485 master-0 kubenswrapper[26425]: I0217 15:36:13.072405 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-dxkwv"
Feb 17 15:36:13.094355 master-0 kubenswrapper[26425]: I0217 15:36:13.093724 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 17 15:36:13.143184 master-0 kubenswrapper[26425]: I0217 15:36:13.143106 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 17 15:36:13.238795 master-0 kubenswrapper[26425]: I0217 15:36:13.238717 26425 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 17 15:36:13.371036 master-0 kubenswrapper[26425]: I0217 15:36:13.370984 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 17 15:36:13.395994 master-0 kubenswrapper[26425]: I0217 15:36:13.395939 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 17 15:36:13.432065 master-0 kubenswrapper[26425]: I0217 15:36:13.432010 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-4zhjq"
Feb 17 15:36:13.475805 master-0 kubenswrapper[26425]: I0217 15:36:13.475756 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 17 15:36:13.531975 master-0 kubenswrapper[26425]: I0217 15:36:13.531936 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 17 15:36:13.556729 master-0 kubenswrapper[26425]: I0217 15:36:13.556670 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 17 15:36:13.576263 master-0 kubenswrapper[26425]: I0217 15:36:13.576211 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 17 15:36:13.699428 master-0 kubenswrapper[26425]: I0217 15:36:13.699322 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 17 15:36:13.716089 master-0 kubenswrapper[26425]: I0217 15:36:13.716041 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 17 15:36:13.804607 master-0 kubenswrapper[26425]: I0217 15:36:13.804543 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 17 15:36:13.864231 master-0 kubenswrapper[26425]: I0217 15:36:13.864134 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 17 15:36:13.970067 master-0 kubenswrapper[26425]: I0217 15:36:13.969930 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 17 15:36:13.997604 master-0 kubenswrapper[26425]: I0217 15:36:13.997543 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 17 15:36:14.067597 master-0 kubenswrapper[26425]: I0217 15:36:14.067538 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 17 15:36:14.210336 master-0 kubenswrapper[26425]: I0217 15:36:14.210273 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 17 15:36:14.213917 master-0 kubenswrapper[26425]: I0217 15:36:14.213858 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 17 15:36:14.214640 master-0 kubenswrapper[26425]: I0217 15:36:14.214595 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 17 15:36:14.217781 master-0 kubenswrapper[26425]: I0217 15:36:14.217732 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 17 15:36:14.259372 master-0 kubenswrapper[26425]: I0217 15:36:14.259219 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Feb 17 15:36:14.313844 master-0 kubenswrapper[26425]: I0217 15:36:14.313780 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 17 15:36:14.332040 master-0 kubenswrapper[26425]: I0217 15:36:14.331995 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Feb 17 15:36:14.404425 master-0 kubenswrapper[26425]: I0217 15:36:14.404362 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 17 15:36:14.423348 master-0 kubenswrapper[26425]: I0217 15:36:14.423242 26425 patch_prober.go:28] interesting pod/console-6f45cc898f-z9tb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body=
Feb 17 15:36:14.423348 master-0 kubenswrapper[26425]: I0217 15:36:14.423302 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused"
Feb 17 15:36:14.516339 master-0 kubenswrapper[26425]: I0217 15:36:14.516174 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 17 15:36:14.593828 master-0 kubenswrapper[26425]: I0217 15:36:14.593756 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 17 15:36:14.608929 master-0 kubenswrapper[26425]: I0217 15:36:14.608849 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 17 15:36:14.654620 master-0 kubenswrapper[26425]: I0217 15:36:14.654542 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 17 15:36:14.659929 master-0 kubenswrapper[26425]: I0217 15:36:14.659867 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 17 15:36:14.803099 master-0 kubenswrapper[26425]: I0217 15:36:14.802890 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-kcv7p"
Feb 17 15:36:14.852331 master-0 kubenswrapper[26425]: I0217 15:36:14.852218 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 17 15:36:14.886970 master-0 kubenswrapper[26425]: I0217 15:36:14.886891 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 17 15:36:14.893117 master-0 kubenswrapper[26425]: I0217 15:36:14.893055 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 17 15:36:14.995768 master-0 kubenswrapper[26425]: I0217 15:36:14.995654 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Feb 17 15:36:15.000303 master-0 kubenswrapper[26425]: I0217 15:36:15.000222 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 17 15:36:15.090523 master-0 kubenswrapper[26425]: I0217 15:36:15.090310 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Feb 17 15:36:15.102105 master-0 kubenswrapper[26425]: I0217 15:36:15.102041 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 17 15:36:15.113317 master-0 kubenswrapper[26425]: I0217 15:36:15.113252 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 17 15:36:15.156187 master-0 kubenswrapper[26425]: I0217 15:36:15.156116 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 17 15:36:15.162544 master-0 kubenswrapper[26425]: I0217 15:36:15.162085 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 17 15:36:15.186047 master-0 kubenswrapper[26425]: I0217 15:36:15.185985 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 17 15:36:15.196914 master-0 kubenswrapper[26425]: I0217 15:36:15.196865 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 17 15:36:15.281891 master-0 kubenswrapper[26425]: I0217 15:36:15.281794 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 17 15:36:15.325733 master-0 kubenswrapper[26425]: I0217 15:36:15.325653 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 17 15:36:15.454374 master-0 kubenswrapper[26425]: I0217 15:36:15.454318 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-kjdkm"
Feb 17 15:36:15.454952 master-0 kubenswrapper[26425]: I0217 15:36:15.454903 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-mv24c"
Feb 17 15:36:15.463895 master-0 kubenswrapper[26425]: I0217 15:36:15.463853 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-t5n74"
Feb 17 15:36:15.465434 master-0 kubenswrapper[26425]: I0217 15:36:15.465391 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 17 15:36:15.567779 master-0 kubenswrapper[26425]: I0217 15:36:15.567693 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 17 15:36:15.595218 master-0 kubenswrapper[26425]: I0217 15:36:15.595150 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 17 15:36:15.601292 master-0 kubenswrapper[26425]: I0217 15:36:15.601242 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-7d1hat1ob2dke"
Feb 17 15:36:15.659502 master-0 kubenswrapper[26425]: I0217 15:36:15.659413 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 17 15:36:15.864570 master-0 kubenswrapper[26425]: I0217 15:36:15.864442 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 17 15:36:15.876017 master-0 kubenswrapper[26425]: I0217 15:36:15.875958 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 17 15:36:16.040530 master-0 kubenswrapper[26425]: I0217 15:36:16.040435 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 17 15:36:16.086927 master-0 kubenswrapper[26425]: I0217 15:36:16.086834 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 17 15:36:16.178484 master-0 kubenswrapper[26425]: I0217 15:36:16.178354 26425 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 17 15:36:16.202190 master-0 kubenswrapper[26425]: I0217 15:36:16.202105 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 17 15:36:16.202414 master-0 kubenswrapper[26425]: I0217 15:36:16.202215 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 17 15:36:16.202829 master-0 kubenswrapper[26425]: I0217 15:36:16.202773 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bda1ba3d-2d22-4649-960d-cedcfe10f75c"
Feb 17 15:36:16.202829 master-0 kubenswrapper[26425]: I0217 15:36:16.202820 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bda1ba3d-2d22-4649-960d-cedcfe10f75c"
Feb 17 15:36:16.209384 master-0 kubenswrapper[26425]: I0217 15:36:16.209332 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 17 15:36:16.234807 master-0 kubenswrapper[26425]: I0217 15:36:16.234711 26425
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=19.234687642 podStartE2EDuration="19.234687642s" podCreationTimestamp="2026-02-17 15:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:36:16.226829804 +0000 UTC m=+1238.118553632" watchObservedRunningTime="2026-02-17 15:36:16.234687642 +0000 UTC m=+1238.126411470" Feb 17 15:36:16.258951 master-0 kubenswrapper[26425]: I0217 15:36:16.258888 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 17 15:36:16.294313 master-0 kubenswrapper[26425]: I0217 15:36:16.294261 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 17 15:36:16.339712 master-0 kubenswrapper[26425]: I0217 15:36:16.339624 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 15:36:16.408790 master-0 kubenswrapper[26425]: I0217 15:36:16.408717 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 17 15:36:16.456838 master-0 kubenswrapper[26425]: I0217 15:36:16.456695 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 17 15:36:16.475902 master-0 kubenswrapper[26425]: I0217 15:36:16.475813 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 17 15:36:16.516691 master-0 kubenswrapper[26425]: I0217 15:36:16.516640 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 15:36:16.629974 master-0 kubenswrapper[26425]: I0217 15:36:16.629930 26425 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 17 15:36:16.675269 master-0 kubenswrapper[26425]: I0217 15:36:16.675181 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 15:36:16.763932 master-0 kubenswrapper[26425]: I0217 15:36:16.763821 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 17 15:36:16.769023 master-0 kubenswrapper[26425]: I0217 15:36:16.768977 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 17 15:36:16.835148 master-0 kubenswrapper[26425]: I0217 15:36:16.835115 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 15:36:16.865562 master-0 kubenswrapper[26425]: I0217 15:36:16.865500 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 17 15:36:16.871440 master-0 kubenswrapper[26425]: I0217 15:36:16.871391 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 15:36:16.887244 master-0 kubenswrapper[26425]: I0217 15:36:16.887200 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 15:36:16.888591 master-0 kubenswrapper[26425]: I0217 15:36:16.888543 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 15:36:16.950218 master-0 kubenswrapper[26425]: I0217 15:36:16.950170 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 15:36:16.954834 master-0 kubenswrapper[26425]: I0217 15:36:16.954780 26425 reflector.go:368] Caches populated for *v1.RuntimeClass from 
k8s.io/client-go/informers/factory.go:160 Feb 17 15:36:16.986354 master-0 kubenswrapper[26425]: I0217 15:36:16.986236 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 15:36:17.038952 master-0 kubenswrapper[26425]: I0217 15:36:17.038806 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 15:36:17.049067 master-0 kubenswrapper[26425]: I0217 15:36:17.049006 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 15:36:17.063354 master-0 kubenswrapper[26425]: I0217 15:36:17.063283 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 15:36:17.115823 master-0 kubenswrapper[26425]: I0217 15:36:17.115766 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 17 15:36:17.155273 master-0 kubenswrapper[26425]: I0217 15:36:17.155186 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 15:36:17.160756 master-0 kubenswrapper[26425]: I0217 15:36:17.160709 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 15:36:17.252605 master-0 kubenswrapper[26425]: I0217 15:36:17.252546 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 15:36:17.324504 master-0 kubenswrapper[26425]: I0217 15:36:17.324375 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 17 15:36:17.331350 master-0 kubenswrapper[26425]: I0217 15:36:17.331310 26425 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"node-exporter-tls" Feb 17 15:36:17.341646 master-0 kubenswrapper[26425]: I0217 15:36:17.341530 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 15:36:17.349519 master-0 kubenswrapper[26425]: I0217 15:36:17.349452 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 15:36:17.354525 master-0 kubenswrapper[26425]: I0217 15:36:17.354482 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 15:36:17.455218 master-0 kubenswrapper[26425]: I0217 15:36:17.455152 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 17 15:36:17.522803 master-0 kubenswrapper[26425]: I0217 15:36:17.522729 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 15:36:17.534847 master-0 kubenswrapper[26425]: I0217 15:36:17.534806 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 15:36:17.544638 master-0 kubenswrapper[26425]: I0217 15:36:17.544563 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 15:36:17.546482 master-0 kubenswrapper[26425]: I0217 15:36:17.546398 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 15:36:17.569367 master-0 kubenswrapper[26425]: I0217 15:36:17.569282 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 15:36:17.596260 master-0 kubenswrapper[26425]: I0217 15:36:17.596073 26425 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-8lvkh" Feb 17 15:36:17.659437 master-0 kubenswrapper[26425]: I0217 15:36:17.659330 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 17 15:36:17.729160 master-0 kubenswrapper[26425]: I0217 15:36:17.729086 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 15:36:17.734049 master-0 kubenswrapper[26425]: I0217 15:36:17.734010 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 15:36:17.751935 master-0 kubenswrapper[26425]: I0217 15:36:17.751858 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 15:36:17.781206 master-0 kubenswrapper[26425]: I0217 15:36:17.781122 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 15:36:17.788911 master-0 kubenswrapper[26425]: I0217 15:36:17.788852 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-c8lzf" Feb 17 15:36:17.855496 master-0 kubenswrapper[26425]: I0217 15:36:17.855336 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 15:36:17.868740 master-0 kubenswrapper[26425]: I0217 15:36:17.868636 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 17 15:36:17.899595 master-0 kubenswrapper[26425]: I0217 15:36:17.899517 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 15:36:17.904252 master-0 kubenswrapper[26425]: I0217 15:36:17.904192 26425 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 15:36:17.968867 master-0 kubenswrapper[26425]: I0217 15:36:17.966696 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-q9xjb" Feb 17 15:36:17.979080 master-0 kubenswrapper[26425]: I0217 15:36:17.979019 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 15:36:17.988489 master-0 kubenswrapper[26425]: I0217 15:36:17.987127 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 17 15:36:18.036390 master-0 kubenswrapper[26425]: I0217 15:36:18.036297 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 15:36:18.083841 master-0 kubenswrapper[26425]: I0217 15:36:18.083761 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 17 15:36:18.107788 master-0 kubenswrapper[26425]: I0217 15:36:18.107665 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 15:36:18.147590 master-0 kubenswrapper[26425]: I0217 15:36:18.147517 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-lgxgp" Feb 17 15:36:18.212613 master-0 kubenswrapper[26425]: I0217 15:36:18.212517 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 15:36:18.214057 master-0 kubenswrapper[26425]: I0217 15:36:18.214015 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 17 15:36:18.305275 master-0 
kubenswrapper[26425]: I0217 15:36:18.305201 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 17 15:36:18.312276 master-0 kubenswrapper[26425]: I0217 15:36:18.312224 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 15:36:18.325981 master-0 kubenswrapper[26425]: I0217 15:36:18.325942 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 15:36:18.338204 master-0 kubenswrapper[26425]: I0217 15:36:18.338166 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-fg558" Feb 17 15:36:18.343158 master-0 kubenswrapper[26425]: I0217 15:36:18.343092 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 15:36:18.436151 master-0 kubenswrapper[26425]: I0217 15:36:18.436069 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 15:36:18.447565 master-0 kubenswrapper[26425]: I0217 15:36:18.447451 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 15:36:18.505992 master-0 kubenswrapper[26425]: I0217 15:36:18.505899 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 15:36:18.577103 master-0 kubenswrapper[26425]: I0217 15:36:18.576978 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 15:36:18.662491 master-0 kubenswrapper[26425]: I0217 15:36:18.662386 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-pv4xc" Feb 17 15:36:18.722187 master-0 kubenswrapper[26425]: I0217 
15:36:18.722025 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 17 15:36:18.783285 master-0 kubenswrapper[26425]: I0217 15:36:18.783225 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 17 15:36:18.845405 master-0 kubenswrapper[26425]: I0217 15:36:18.845337 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 15:36:18.920525 master-0 kubenswrapper[26425]: I0217 15:36:18.920444 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 15:36:18.943806 master-0 kubenswrapper[26425]: I0217 15:36:18.943682 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 17 15:36:18.970189 master-0 kubenswrapper[26425]: I0217 15:36:18.970108 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 15:36:18.978582 master-0 kubenswrapper[26425]: I0217 15:36:18.978403 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 17 15:36:19.047730 master-0 kubenswrapper[26425]: I0217 15:36:19.047662 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 15:36:19.118229 master-0 kubenswrapper[26425]: I0217 15:36:19.118151 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 17 15:36:19.223551 master-0 kubenswrapper[26425]: I0217 15:36:19.223492 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 15:36:19.252743 master-0 
kubenswrapper[26425]: I0217 15:36:19.252368 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 17 15:36:19.269578 master-0 kubenswrapper[26425]: I0217 15:36:19.269528 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 15:36:19.293891 master-0 kubenswrapper[26425]: I0217 15:36:19.293836 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 15:36:19.316266 master-0 kubenswrapper[26425]: I0217 15:36:19.316191 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:36:19.323751 master-0 kubenswrapper[26425]: I0217 15:36:19.323688 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 15:36:19.422842 master-0 kubenswrapper[26425]: I0217 15:36:19.422778 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-tphvr" Feb 17 15:36:19.441051 master-0 kubenswrapper[26425]: I0217 15:36:19.440843 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 17 15:36:19.446631 master-0 kubenswrapper[26425]: I0217 15:36:19.446577 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 15:36:19.538612 master-0 kubenswrapper[26425]: I0217 15:36:19.538420 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 15:36:19.553193 master-0 kubenswrapper[26425]: I0217 15:36:19.553157 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 15:36:19.578037 master-0 
kubenswrapper[26425]: I0217 15:36:19.577970 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 15:36:19.616544 master-0 kubenswrapper[26425]: I0217 15:36:19.616436 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-dtqvr" Feb 17 15:36:19.644670 master-0 kubenswrapper[26425]: I0217 15:36:19.642132 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 15:36:19.687142 master-0 kubenswrapper[26425]: I0217 15:36:19.687036 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 15:36:19.701028 master-0 kubenswrapper[26425]: I0217 15:36:19.700767 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-8gftr" Feb 17 15:36:19.725804 master-0 kubenswrapper[26425]: I0217 15:36:19.725737 26425 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 17 15:36:19.726059 master-0 kubenswrapper[26425]: I0217 15:36:19.726020 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="a9a2b3a37af32e5d570b82bfd956f250" containerName="startup-monitor" containerID="cri-o://57f48d420864783db4edfc9ba02b2310d3831fce9444e0d9d3ef25b5546d0f41" gracePeriod=5 Feb 17 15:36:19.735113 master-0 kubenswrapper[26425]: I0217 15:36:19.735065 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 17 15:36:19.736629 master-0 kubenswrapper[26425]: I0217 15:36:19.736452 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 
17 15:36:19.760122 master-0 kubenswrapper[26425]: I0217 15:36:19.760057 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 15:36:19.786830 master-0 kubenswrapper[26425]: I0217 15:36:19.786748 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-bt8x4" Feb 17 15:36:19.820698 master-0 kubenswrapper[26425]: I0217 15:36:19.820596 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 17 15:36:19.875892 master-0 kubenswrapper[26425]: I0217 15:36:19.875845 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 15:36:19.879709 master-0 kubenswrapper[26425]: I0217 15:36:19.879685 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 15:36:19.904237 master-0 kubenswrapper[26425]: I0217 15:36:19.904184 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 15:36:19.972909 master-0 kubenswrapper[26425]: I0217 15:36:19.972843 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 17 15:36:20.004102 master-0 kubenswrapper[26425]: I0217 15:36:20.004035 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 17 15:36:20.034957 master-0 kubenswrapper[26425]: I0217 15:36:20.034838 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 17 15:36:20.060607 master-0 kubenswrapper[26425]: I0217 15:36:20.060541 26425 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 17 15:36:20.123222 master-0 kubenswrapper[26425]: I0217 15:36:20.123144 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 17 15:36:20.175978 master-0 kubenswrapper[26425]: I0217 15:36:20.175902 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 15:36:20.176404 master-0 kubenswrapper[26425]: I0217 15:36:20.176353 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 17 15:36:20.193488 master-0 kubenswrapper[26425]: I0217 15:36:20.193419 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 15:36:20.204787 master-0 kubenswrapper[26425]: I0217 15:36:20.204712 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 15:36:20.218115 master-0 kubenswrapper[26425]: I0217 15:36:20.218026 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 15:36:20.242112 master-0 kubenswrapper[26425]: I0217 15:36:20.242019 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-4gx6p" Feb 17 15:36:20.313440 master-0 kubenswrapper[26425]: I0217 15:36:20.313355 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-55495f9f9c-p58l5" Feb 17 15:36:20.319909 master-0 kubenswrapper[26425]: I0217 15:36:20.319815 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-55495f9f9c-p58l5" Feb 17 15:36:20.365905 master-0 kubenswrapper[26425]: I0217 15:36:20.365813 26425 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-2tsl8" Feb 17 15:36:20.407570 master-0 kubenswrapper[26425]: I0217 15:36:20.405843 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 15:36:20.475174 master-0 kubenswrapper[26425]: I0217 15:36:20.474756 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 15:36:20.489497 master-0 kubenswrapper[26425]: I0217 15:36:20.488551 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-flbia8i8i4eih" Feb 17 15:36:20.595602 master-0 kubenswrapper[26425]: I0217 15:36:20.595535 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-6c645" Feb 17 15:36:20.599858 master-0 kubenswrapper[26425]: I0217 15:36:20.599816 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 15:36:20.850688 master-0 kubenswrapper[26425]: I0217 15:36:20.850524 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 15:36:20.920624 master-0 kubenswrapper[26425]: I0217 15:36:20.920544 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 15:36:20.935241 master-0 kubenswrapper[26425]: I0217 15:36:20.932309 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 15:36:21.119183 master-0 kubenswrapper[26425]: I0217 15:36:21.119103 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 17 15:36:21.325998 master-0 kubenswrapper[26425]: I0217 15:36:21.325928 26425 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 17 15:36:21.384583 master-0 kubenswrapper[26425]: I0217 15:36:21.384472 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 15:36:21.396337 master-0 kubenswrapper[26425]: I0217 15:36:21.396291 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 17 15:36:21.407518 master-0 kubenswrapper[26425]: I0217 15:36:21.407481 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 15:36:21.414037 master-0 kubenswrapper[26425]: I0217 15:36:21.414005 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 15:36:21.449312 master-0 kubenswrapper[26425]: I0217 15:36:21.449250 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 15:36:21.494999 master-0 kubenswrapper[26425]: I0217 15:36:21.494904 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 15:36:21.562374 master-0 kubenswrapper[26425]: I0217 15:36:21.562289 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 15:36:21.568061 master-0 kubenswrapper[26425]: I0217 15:36:21.567988 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 15:36:21.685491 master-0 kubenswrapper[26425]: I0217 15:36:21.685299 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 15:36:21.764537 master-0 kubenswrapper[26425]: I0217 15:36:21.764405 26425 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 15:36:21.782093 master-0 kubenswrapper[26425]: I0217 15:36:21.782038 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 15:36:21.809310 master-0 kubenswrapper[26425]: I0217 15:36:21.809229 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 15:36:21.835100 master-0 kubenswrapper[26425]: I0217 15:36:21.835023 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 17 15:36:21.847574 master-0 kubenswrapper[26425]: I0217 15:36:21.847512 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 15:36:21.859295 master-0 kubenswrapper[26425]: I0217 15:36:21.859254 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 15:36:21.881206 master-0 kubenswrapper[26425]: I0217 15:36:21.880917 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 15:36:21.968670 master-0 kubenswrapper[26425]: I0217 15:36:21.968163 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 15:36:22.042358 master-0 kubenswrapper[26425]: I0217 15:36:22.042291 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 15:36:22.071953 master-0 kubenswrapper[26425]: I0217 15:36:22.071887 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 17 15:36:22.101597 master-0 kubenswrapper[26425]: I0217 15:36:22.101540 26425 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dkdg8" Feb 17 15:36:22.132746 master-0 kubenswrapper[26425]: I0217 15:36:22.132652 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-44zht" Feb 17 15:36:22.148148 master-0 kubenswrapper[26425]: I0217 15:36:22.148069 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 15:36:22.182284 master-0 kubenswrapper[26425]: I0217 15:36:22.182212 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 15:36:22.426049 master-0 kubenswrapper[26425]: I0217 15:36:22.425979 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-t9g75" Feb 17 15:36:22.453016 master-0 kubenswrapper[26425]: I0217 15:36:22.451327 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 17 15:36:22.609428 master-0 kubenswrapper[26425]: I0217 15:36:22.609352 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 15:36:22.694606 master-0 kubenswrapper[26425]: I0217 15:36:22.694417 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 15:36:22.777245 master-0 kubenswrapper[26425]: I0217 15:36:22.777166 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-4h7qp" Feb 17 15:36:22.932737 master-0 kubenswrapper[26425]: I0217 15:36:22.932608 26425 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 15:36:22.938061 master-0 
kubenswrapper[26425]: I0217 15:36:22.937978 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-kt686" Feb 17 15:36:23.002228 master-0 kubenswrapper[26425]: I0217 15:36:23.002033 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-dzmf4" Feb 17 15:36:23.057750 master-0 kubenswrapper[26425]: I0217 15:36:23.057680 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 15:36:23.293434 master-0 kubenswrapper[26425]: I0217 15:36:23.293272 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 17 15:36:23.403562 master-0 kubenswrapper[26425]: I0217 15:36:23.403507 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 17 15:36:23.454767 master-0 kubenswrapper[26425]: I0217 15:36:23.454712 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 17 15:36:23.658723 master-0 kubenswrapper[26425]: I0217 15:36:23.658672 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 15:36:23.937035 master-0 kubenswrapper[26425]: I0217 15:36:23.936876 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-jf6tv" Feb 17 15:36:24.266938 master-0 kubenswrapper[26425]: I0217 15:36:24.266790 26425 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 15:36:24.428279 master-0 kubenswrapper[26425]: I0217 15:36:24.428154 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6f45cc898f-z9tb2" Feb 17 15:36:24.432888 master-0 kubenswrapper[26425]: I0217 
15:36:24.432843 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6f45cc898f-z9tb2" Feb 17 15:36:24.541595 master-0 kubenswrapper[26425]: I0217 15:36:24.541059 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 17 15:36:24.544596 master-0 kubenswrapper[26425]: I0217 15:36:24.543717 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-55495f9f9c-p58l5"] Feb 17 15:36:25.182297 master-0 kubenswrapper[26425]: I0217 15:36:25.182241 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a9a2b3a37af32e5d570b82bfd956f250/startup-monitor/0.log" Feb 17 15:36:25.182687 master-0 kubenswrapper[26425]: I0217 15:36:25.182331 26425 generic.go:334] "Generic (PLEG): container finished" podID="a9a2b3a37af32e5d570b82bfd956f250" containerID="57f48d420864783db4edfc9ba02b2310d3831fce9444e0d9d3ef25b5546d0f41" exitCode=137 Feb 17 15:36:25.327023 master-0 kubenswrapper[26425]: I0217 15:36:25.326959 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a9a2b3a37af32e5d570b82bfd956f250/startup-monitor/0.log" Feb 17 15:36:25.327269 master-0 kubenswrapper[26425]: I0217 15:36:25.327057 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:36:25.413722 master-0 kubenswrapper[26425]: I0217 15:36:25.413637 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 17 15:36:25.414034 master-0 kubenswrapper[26425]: I0217 15:36:25.413761 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 17 15:36:25.414034 master-0 kubenswrapper[26425]: I0217 15:36:25.413825 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 17 15:36:25.414034 master-0 kubenswrapper[26425]: I0217 15:36:25.413882 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 17 15:36:25.414034 master-0 kubenswrapper[26425]: I0217 15:36:25.413923 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 17 15:36:25.414034 master-0 kubenswrapper[26425]: I0217 15:36:25.413969 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests" (OuterVolumeSpecName: "manifests") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:36:25.414357 master-0 kubenswrapper[26425]: I0217 15:36:25.414046 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log" (OuterVolumeSpecName: "var-log") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:36:25.414357 master-0 kubenswrapper[26425]: I0217 15:36:25.414089 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:36:25.414357 master-0 kubenswrapper[26425]: I0217 15:36:25.414150 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock" (OuterVolumeSpecName: "var-lock") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:36:25.414629 master-0 kubenswrapper[26425]: I0217 15:36:25.414512 26425 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") on node \"master-0\" DevicePath \"\"" Feb 17 15:36:25.414629 master-0 kubenswrapper[26425]: I0217 15:36:25.414538 26425 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:36:25.414629 master-0 kubenswrapper[26425]: I0217 15:36:25.414557 26425 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:36:25.414629 master-0 kubenswrapper[26425]: I0217 15:36:25.414575 26425 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") on node \"master-0\" DevicePath \"\"" Feb 17 15:36:25.421950 master-0 kubenswrapper[26425]: I0217 15:36:25.421887 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:36:25.518104 master-0 kubenswrapper[26425]: I0217 15:36:25.517640 26425 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:36:26.194227 master-0 kubenswrapper[26425]: I0217 15:36:26.194078 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a9a2b3a37af32e5d570b82bfd956f250/startup-monitor/0.log" Feb 17 15:36:26.194604 master-0 kubenswrapper[26425]: I0217 15:36:26.194259 26425 scope.go:117] "RemoveContainer" containerID="57f48d420864783db4edfc9ba02b2310d3831fce9444e0d9d3ef25b5546d0f41" Feb 17 15:36:26.194604 master-0 kubenswrapper[26425]: I0217 15:36:26.194352 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 17 15:36:26.409930 master-0 kubenswrapper[26425]: I0217 15:36:26.409827 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a2b3a37af32e5d570b82bfd956f250" path="/var/lib/kubelet/pods/a9a2b3a37af32e5d570b82bfd956f250/volumes" Feb 17 15:36:49.597257 master-0 kubenswrapper[26425]: I0217 15:36:49.597140 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" containerID="cri-o://62bc0a47ef7fb54261a0ebfba7d1d86c84145d8edec6583defa98ae636c4a32e" gracePeriod=15 Feb 17 15:36:50.306389 master-0 kubenswrapper[26425]: I0217 15:36:50.306325 26425 patch_prober.go:28] interesting pod/console-55495f9f9c-p58l5 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body= 
Feb 17 15:36:50.306874 master-0 kubenswrapper[26425]: I0217 15:36:50.306823 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-55495f9f9c-p58l5" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" Feb 17 15:36:52.436414 master-0 kubenswrapper[26425]: I0217 15:36:52.436335 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55495f9f9c-p58l5_25188d19-3aa1-4346-8547-d571600db2f6/console/0.log" Feb 17 15:36:52.437084 master-0 kubenswrapper[26425]: I0217 15:36:52.436424 26425 generic.go:334] "Generic (PLEG): container finished" podID="25188d19-3aa1-4346-8547-d571600db2f6" containerID="62bc0a47ef7fb54261a0ebfba7d1d86c84145d8edec6583defa98ae636c4a32e" exitCode=2 Feb 17 15:36:52.437084 master-0 kubenswrapper[26425]: I0217 15:36:52.436489 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55495f9f9c-p58l5" event={"ID":"25188d19-3aa1-4346-8547-d571600db2f6","Type":"ContainerDied","Data":"62bc0a47ef7fb54261a0ebfba7d1d86c84145d8edec6583defa98ae636c4a32e"} Feb 17 15:36:52.792420 master-0 kubenswrapper[26425]: I0217 15:36:52.792336 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55495f9f9c-p58l5_25188d19-3aa1-4346-8547-d571600db2f6/console/0.log" Feb 17 15:36:52.792742 master-0 kubenswrapper[26425]: I0217 15:36:52.792713 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-55495f9f9c-p58l5" Feb 17 15:36:52.941036 master-0 kubenswrapper[26425]: I0217 15:36:52.940911 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t8cs\" (UniqueName: \"kubernetes.io/projected/25188d19-3aa1-4346-8547-d571600db2f6-kube-api-access-2t8cs\") pod \"25188d19-3aa1-4346-8547-d571600db2f6\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " Feb 17 15:36:52.941345 master-0 kubenswrapper[26425]: I0217 15:36:52.941240 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-service-ca\") pod \"25188d19-3aa1-4346-8547-d571600db2f6\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " Feb 17 15:36:52.941492 master-0 kubenswrapper[26425]: I0217 15:36:52.941423 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-oauth-serving-cert\") pod \"25188d19-3aa1-4346-8547-d571600db2f6\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " Feb 17 15:36:52.941578 master-0 kubenswrapper[26425]: I0217 15:36:52.941520 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-console-config\") pod \"25188d19-3aa1-4346-8547-d571600db2f6\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " Feb 17 15:36:52.941648 master-0 kubenswrapper[26425]: I0217 15:36:52.941587 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25188d19-3aa1-4346-8547-d571600db2f6-console-oauth-config\") pod \"25188d19-3aa1-4346-8547-d571600db2f6\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " Feb 17 15:36:52.941747 master-0 kubenswrapper[26425]: I0217 
15:36:52.941669 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25188d19-3aa1-4346-8547-d571600db2f6-console-serving-cert\") pod \"25188d19-3aa1-4346-8547-d571600db2f6\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " Feb 17 15:36:52.941849 master-0 kubenswrapper[26425]: I0217 15:36:52.941752 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-trusted-ca-bundle\") pod \"25188d19-3aa1-4346-8547-d571600db2f6\" (UID: \"25188d19-3aa1-4346-8547-d571600db2f6\") " Feb 17 15:36:52.942804 master-0 kubenswrapper[26425]: I0217 15:36:52.942241 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-console-config" (OuterVolumeSpecName: "console-config") pod "25188d19-3aa1-4346-8547-d571600db2f6" (UID: "25188d19-3aa1-4346-8547-d571600db2f6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:36:52.942804 master-0 kubenswrapper[26425]: I0217 15:36:52.942333 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "25188d19-3aa1-4346-8547-d571600db2f6" (UID: "25188d19-3aa1-4346-8547-d571600db2f6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:36:52.942804 master-0 kubenswrapper[26425]: I0217 15:36:52.942722 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-service-ca" (OuterVolumeSpecName: "service-ca") pod "25188d19-3aa1-4346-8547-d571600db2f6" (UID: "25188d19-3aa1-4346-8547-d571600db2f6"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:36:52.943119 master-0 kubenswrapper[26425]: I0217 15:36:52.942894 26425 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 17 15:36:52.943119 master-0 kubenswrapper[26425]: I0217 15:36:52.942940 26425 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 17 15:36:52.943256 master-0 kubenswrapper[26425]: I0217 15:36:52.943096 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "25188d19-3aa1-4346-8547-d571600db2f6" (UID: "25188d19-3aa1-4346-8547-d571600db2f6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:36:52.944027 master-0 kubenswrapper[26425]: I0217 15:36:52.943973 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25188d19-3aa1-4346-8547-d571600db2f6-kube-api-access-2t8cs" (OuterVolumeSpecName: "kube-api-access-2t8cs") pod "25188d19-3aa1-4346-8547-d571600db2f6" (UID: "25188d19-3aa1-4346-8547-d571600db2f6"). InnerVolumeSpecName "kube-api-access-2t8cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:36:52.946848 master-0 kubenswrapper[26425]: I0217 15:36:52.946791 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25188d19-3aa1-4346-8547-d571600db2f6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "25188d19-3aa1-4346-8547-d571600db2f6" (UID: "25188d19-3aa1-4346-8547-d571600db2f6"). 
InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:36:52.949590 master-0 kubenswrapper[26425]: I0217 15:36:52.949523 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25188d19-3aa1-4346-8547-d571600db2f6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "25188d19-3aa1-4346-8547-d571600db2f6" (UID: "25188d19-3aa1-4346-8547-d571600db2f6"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:36:53.045220 master-0 kubenswrapper[26425]: I0217 15:36:53.044968 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t8cs\" (UniqueName: \"kubernetes.io/projected/25188d19-3aa1-4346-8547-d571600db2f6-kube-api-access-2t8cs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:36:53.045220 master-0 kubenswrapper[26425]: I0217 15:36:53.045084 26425 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-console-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:36:53.045220 master-0 kubenswrapper[26425]: I0217 15:36:53.045117 26425 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25188d19-3aa1-4346-8547-d571600db2f6-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:36:53.045220 master-0 kubenswrapper[26425]: I0217 15:36:53.045146 26425 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25188d19-3aa1-4346-8547-d571600db2f6-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 17 15:36:53.045220 master-0 kubenswrapper[26425]: I0217 15:36:53.045171 26425 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25188d19-3aa1-4346-8547-d571600db2f6-trusted-ca-bundle\") 
on node \"master-0\" DevicePath \"\"" Feb 17 15:36:53.446687 master-0 kubenswrapper[26425]: I0217 15:36:53.446612 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55495f9f9c-p58l5_25188d19-3aa1-4346-8547-d571600db2f6/console/0.log" Feb 17 15:36:53.447302 master-0 kubenswrapper[26425]: I0217 15:36:53.446695 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55495f9f9c-p58l5" event={"ID":"25188d19-3aa1-4346-8547-d571600db2f6","Type":"ContainerDied","Data":"c919f83e99626c37e5d712791608a69f58ea6e2cafe4520a3a46c722951734b6"} Feb 17 15:36:53.447302 master-0 kubenswrapper[26425]: I0217 15:36:53.446744 26425 scope.go:117] "RemoveContainer" containerID="62bc0a47ef7fb54261a0ebfba7d1d86c84145d8edec6583defa98ae636c4a32e" Feb 17 15:36:53.447302 master-0 kubenswrapper[26425]: I0217 15:36:53.446809 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55495f9f9c-p58l5" Feb 17 15:36:53.487148 master-0 kubenswrapper[26425]: I0217 15:36:53.487112 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-55495f9f9c-p58l5"] Feb 17 15:36:53.492294 master-0 kubenswrapper[26425]: I0217 15:36:53.492240 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-55495f9f9c-p58l5"] Feb 17 15:36:54.407922 master-0 kubenswrapper[26425]: I0217 15:36:54.407846 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25188d19-3aa1-4346-8547-d571600db2f6" path="/var/lib/kubelet/pods/25188d19-3aa1-4346-8547-d571600db2f6/volumes" Feb 17 15:37:40.877979 master-0 kubenswrapper[26425]: I0217 15:37:40.877871 26425 scope.go:117] "RemoveContainer" containerID="2e1ff511db2c69486a763112ab46f8b9eb94ac1ab354236201ab57c41c24770d" Feb 17 15:37:40.904631 master-0 kubenswrapper[26425]: I0217 15:37:40.904560 26425 scope.go:117] "RemoveContainer" 
containerID="83a7605533fa5b7aa413240443eee3c9aad88818eb25ab4aba4528a9db5327b6" Feb 17 15:37:40.935420 master-0 kubenswrapper[26425]: I0217 15:37:40.935364 26425 scope.go:117] "RemoveContainer" containerID="a250c04983f3b0106f36a27030f78302d8c17ec6de5b6e5cded32664184f0f6e" Feb 17 15:37:40.968848 master-0 kubenswrapper[26425]: I0217 15:37:40.967895 26425 scope.go:117] "RemoveContainer" containerID="a55d7f0507bd3d765056a8a318a8966408ed2fc8a1c30292db147835ef568009" Feb 17 15:37:46.155712 master-0 kubenswrapper[26425]: I0217 15:37:46.155614 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Feb 17 15:37:46.156846 master-0 kubenswrapper[26425]: E0217 15:37:46.156067 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a2b3a37af32e5d570b82bfd956f250" containerName="startup-monitor" Feb 17 15:37:46.156846 master-0 kubenswrapper[26425]: I0217 15:37:46.156095 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a2b3a37af32e5d570b82bfd956f250" containerName="startup-monitor" Feb 17 15:37:46.156846 master-0 kubenswrapper[26425]: E0217 15:37:46.156128 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" Feb 17 15:37:46.156846 master-0 kubenswrapper[26425]: I0217 15:37:46.156139 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" Feb 17 15:37:46.156846 master-0 kubenswrapper[26425]: E0217 15:37:46.156182 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34b86e7-e7af-492c-86d6-95fc9155d958" containerName="installer" Feb 17 15:37:46.156846 master-0 kubenswrapper[26425]: I0217 15:37:46.156195 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34b86e7-e7af-492c-86d6-95fc9155d958" containerName="installer" Feb 17 15:37:46.156846 master-0 kubenswrapper[26425]: I0217 15:37:46.156414 26425 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a34b86e7-e7af-492c-86d6-95fc9155d958" containerName="installer" Feb 17 15:37:46.156846 master-0 kubenswrapper[26425]: I0217 15:37:46.156449 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a2b3a37af32e5d570b82bfd956f250" containerName="startup-monitor" Feb 17 15:37:46.156846 master-0 kubenswrapper[26425]: I0217 15:37:46.156505 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="25188d19-3aa1-4346-8547-d571600db2f6" containerName="console" Feb 17 15:37:46.157652 master-0 kubenswrapper[26425]: I0217 15:37:46.157198 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Feb 17 15:37:46.160183 master-0 kubenswrapper[26425]: I0217 15:37:46.160104 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-qt5n5" Feb 17 15:37:46.161622 master-0 kubenswrapper[26425]: I0217 15:37:46.161586 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 17 15:37:46.171373 master-0 kubenswrapper[26425]: I0217 15:37:46.171245 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Feb 17 15:37:46.215601 master-0 kubenswrapper[26425]: I0217 15:37:46.215524 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1206b1ca-8aa0-4fda-947a-31f6d9064c0d-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"1206b1ca-8aa0-4fda-947a-31f6d9064c0d\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Feb 17 15:37:46.215836 master-0 kubenswrapper[26425]: I0217 15:37:46.215790 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/1206b1ca-8aa0-4fda-947a-31f6d9064c0d-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"1206b1ca-8aa0-4fda-947a-31f6d9064c0d\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Feb 17 15:37:46.317520 master-0 kubenswrapper[26425]: I0217 15:37:46.317413 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1206b1ca-8aa0-4fda-947a-31f6d9064c0d-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"1206b1ca-8aa0-4fda-947a-31f6d9064c0d\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Feb 17 15:37:46.317762 master-0 kubenswrapper[26425]: I0217 15:37:46.317596 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1206b1ca-8aa0-4fda-947a-31f6d9064c0d-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"1206b1ca-8aa0-4fda-947a-31f6d9064c0d\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Feb 17 15:37:46.317762 master-0 kubenswrapper[26425]: I0217 15:37:46.317723 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1206b1ca-8aa0-4fda-947a-31f6d9064c0d-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"1206b1ca-8aa0-4fda-947a-31f6d9064c0d\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Feb 17 15:37:46.340696 master-0 kubenswrapper[26425]: I0217 15:37:46.340634 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1206b1ca-8aa0-4fda-947a-31f6d9064c0d-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"1206b1ca-8aa0-4fda-947a-31f6d9064c0d\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Feb 17 15:37:46.499028 master-0 kubenswrapper[26425]: I0217 15:37:46.498866 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Feb 17 15:37:47.009292 master-0 kubenswrapper[26425]: I0217 15:37:47.009107 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Feb 17 15:37:47.012135 master-0 kubenswrapper[26425]: W0217 15:37:47.012052 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1206b1ca_8aa0_4fda_947a_31f6d9064c0d.slice/crio-5949f869bbedf1fefe065d108c99da7dd2341ee35f04d39af21e5002fdc25300 WatchSource:0}: Error finding container 5949f869bbedf1fefe065d108c99da7dd2341ee35f04d39af21e5002fdc25300: Status 404 returned error can't find the container with id 5949f869bbedf1fefe065d108c99da7dd2341ee35f04d39af21e5002fdc25300 Feb 17 15:37:48.024604 master-0 kubenswrapper[26425]: I0217 15:37:48.024511 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"1206b1ca-8aa0-4fda-947a-31f6d9064c0d","Type":"ContainerStarted","Data":"eba01ce31c1834fc7d470af2bab7dc941368425d1e50dd351b18806f6ecb1771"} Feb 17 15:37:48.024604 master-0 kubenswrapper[26425]: I0217 15:37:48.024608 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"1206b1ca-8aa0-4fda-947a-31f6d9064c0d","Type":"ContainerStarted","Data":"5949f869bbedf1fefe065d108c99da7dd2341ee35f04d39af21e5002fdc25300"} Feb 17 15:37:48.054518 master-0 kubenswrapper[26425]: I0217 15:37:48.054368 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-master-0" podStartSLOduration=2.054339576 podStartE2EDuration="2.054339576s" podCreationTimestamp="2026-02-17 15:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:37:48.047533781 +0000 UTC m=+1329.939257689" 
watchObservedRunningTime="2026-02-17 15:37:48.054339576 +0000 UTC m=+1329.946063424"
Feb 17 15:37:49.049245 master-0 kubenswrapper[26425]: I0217 15:37:49.049097 26425 generic.go:334] "Generic (PLEG): container finished" podID="1206b1ca-8aa0-4fda-947a-31f6d9064c0d" containerID="eba01ce31c1834fc7d470af2bab7dc941368425d1e50dd351b18806f6ecb1771" exitCode=0
Feb 17 15:37:49.049245 master-0 kubenswrapper[26425]: I0217 15:37:49.049169 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"1206b1ca-8aa0-4fda-947a-31f6d9064c0d","Type":"ContainerDied","Data":"eba01ce31c1834fc7d470af2bab7dc941368425d1e50dd351b18806f6ecb1771"}
Feb 17 15:37:50.539574 master-0 kubenswrapper[26425]: I0217 15:37:50.539443 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0"
Feb 17 15:37:50.609668 master-0 kubenswrapper[26425]: I0217 15:37:50.609570 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1206b1ca-8aa0-4fda-947a-31f6d9064c0d-kube-api-access\") pod \"1206b1ca-8aa0-4fda-947a-31f6d9064c0d\" (UID: \"1206b1ca-8aa0-4fda-947a-31f6d9064c0d\") "
Feb 17 15:37:50.610042 master-0 kubenswrapper[26425]: I0217 15:37:50.609742 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1206b1ca-8aa0-4fda-947a-31f6d9064c0d-kubelet-dir\") pod \"1206b1ca-8aa0-4fda-947a-31f6d9064c0d\" (UID: \"1206b1ca-8aa0-4fda-947a-31f6d9064c0d\") "
Feb 17 15:37:50.610333 master-0 kubenswrapper[26425]: I0217 15:37:50.610279 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1206b1ca-8aa0-4fda-947a-31f6d9064c0d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1206b1ca-8aa0-4fda-947a-31f6d9064c0d" (UID: "1206b1ca-8aa0-4fda-947a-31f6d9064c0d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:37:50.613258 master-0 kubenswrapper[26425]: I0217 15:37:50.613142 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1206b1ca-8aa0-4fda-947a-31f6d9064c0d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1206b1ca-8aa0-4fda-947a-31f6d9064c0d" (UID: "1206b1ca-8aa0-4fda-947a-31f6d9064c0d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:37:50.712598 master-0 kubenswrapper[26425]: I0217 15:37:50.712506 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1206b1ca-8aa0-4fda-947a-31f6d9064c0d-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 17 15:37:50.712598 master-0 kubenswrapper[26425]: I0217 15:37:50.712572 26425 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1206b1ca-8aa0-4fda-947a-31f6d9064c0d-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:37:51.086943 master-0 kubenswrapper[26425]: I0217 15:37:51.086869 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"1206b1ca-8aa0-4fda-947a-31f6d9064c0d","Type":"ContainerDied","Data":"5949f869bbedf1fefe065d108c99da7dd2341ee35f04d39af21e5002fdc25300"}
Feb 17 15:37:51.086943 master-0 kubenswrapper[26425]: I0217 15:37:51.086914 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5949f869bbedf1fefe065d108c99da7dd2341ee35f04d39af21e5002fdc25300"
Feb 17 15:37:51.086943 master-0 kubenswrapper[26425]: I0217 15:37:51.086926 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0"
Feb 17 15:37:58.055545 master-0 kubenswrapper[26425]: I0217 15:37:58.055424 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"]
Feb 17 15:37:58.056714 master-0 kubenswrapper[26425]: E0217 15:37:58.056141 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1206b1ca-8aa0-4fda-947a-31f6d9064c0d" containerName="pruner"
Feb 17 15:37:58.056714 master-0 kubenswrapper[26425]: I0217 15:37:58.056176 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="1206b1ca-8aa0-4fda-947a-31f6d9064c0d" containerName="pruner"
Feb 17 15:37:58.056714 master-0 kubenswrapper[26425]: I0217 15:37:58.056448 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="1206b1ca-8aa0-4fda-947a-31f6d9064c0d" containerName="pruner"
Feb 17 15:37:58.057378 master-0 kubenswrapper[26425]: I0217 15:37:58.057327 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0"
Feb 17 15:37:58.062962 master-0 kubenswrapper[26425]: I0217 15:37:58.062896 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-crrn4"
Feb 17 15:37:58.063313 master-0 kubenswrapper[26425]: I0217 15:37:58.063150 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 17 15:37:58.079685 master-0 kubenswrapper[26425]: I0217 15:37:58.079623 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"]
Feb 17 15:37:58.152228 master-0 kubenswrapper[26425]: I0217 15:37:58.152123 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/494aed14-6462-4972-94e0-87a665108366-var-lock\") pod \"installer-6-master-0\" (UID: \"494aed14-6462-4972-94e0-87a665108366\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Feb 17 15:37:58.152552 master-0 kubenswrapper[26425]: I0217 15:37:58.152252 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/494aed14-6462-4972-94e0-87a665108366-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"494aed14-6462-4972-94e0-87a665108366\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Feb 17 15:37:58.152552 master-0 kubenswrapper[26425]: I0217 15:37:58.152304 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/494aed14-6462-4972-94e0-87a665108366-kube-api-access\") pod \"installer-6-master-0\" (UID: \"494aed14-6462-4972-94e0-87a665108366\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Feb 17 15:37:58.253940 master-0 kubenswrapper[26425]: I0217 15:37:58.253841 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/494aed14-6462-4972-94e0-87a665108366-var-lock\") pod \"installer-6-master-0\" (UID: \"494aed14-6462-4972-94e0-87a665108366\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Feb 17 15:37:58.254518 master-0 kubenswrapper[26425]: I0217 15:37:58.254023 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/494aed14-6462-4972-94e0-87a665108366-var-lock\") pod \"installer-6-master-0\" (UID: \"494aed14-6462-4972-94e0-87a665108366\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Feb 17 15:37:58.254518 master-0 kubenswrapper[26425]: I0217 15:37:58.254167 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/494aed14-6462-4972-94e0-87a665108366-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"494aed14-6462-4972-94e0-87a665108366\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Feb 17 15:37:58.254745 master-0 kubenswrapper[26425]: I0217 15:37:58.254531 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/494aed14-6462-4972-94e0-87a665108366-kube-api-access\") pod \"installer-6-master-0\" (UID: \"494aed14-6462-4972-94e0-87a665108366\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Feb 17 15:37:58.254818 master-0 kubenswrapper[26425]: I0217 15:37:58.254534 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/494aed14-6462-4972-94e0-87a665108366-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"494aed14-6462-4972-94e0-87a665108366\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Feb 17 15:37:58.284882 master-0 kubenswrapper[26425]: I0217 15:37:58.284790 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/494aed14-6462-4972-94e0-87a665108366-kube-api-access\") pod \"installer-6-master-0\" (UID: \"494aed14-6462-4972-94e0-87a665108366\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Feb 17 15:37:58.381842 master-0 kubenswrapper[26425]: I0217 15:37:58.381762 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0"
Feb 17 15:37:58.870191 master-0 kubenswrapper[26425]: W0217 15:37:58.870111 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod494aed14_6462_4972_94e0_87a665108366.slice/crio-ff5665e67457f29ecd27df96870ff2ee0948ebe88e07f33ad16e7a39f77a8b63 WatchSource:0}: Error finding container ff5665e67457f29ecd27df96870ff2ee0948ebe88e07f33ad16e7a39f77a8b63: Status 404 returned error can't find the container with id ff5665e67457f29ecd27df96870ff2ee0948ebe88e07f33ad16e7a39f77a8b63
Feb 17 15:37:58.871192 master-0 kubenswrapper[26425]: I0217 15:37:58.870946 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"]
Feb 17 15:37:59.162199 master-0 kubenswrapper[26425]: I0217 15:37:59.162135 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"494aed14-6462-4972-94e0-87a665108366","Type":"ContainerStarted","Data":"ff5665e67457f29ecd27df96870ff2ee0948ebe88e07f33ad16e7a39f77a8b63"}
Feb 17 15:38:00.174847 master-0 kubenswrapper[26425]: I0217 15:38:00.174771 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"494aed14-6462-4972-94e0-87a665108366","Type":"ContainerStarted","Data":"c8e792ad28706c564c1017dd3d549ed9bef232927d9df0a95f3100bc3b8809a3"}
Feb 17 15:38:00.206086 master-0 kubenswrapper[26425]: I0217 15:38:00.205974 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-6-master-0" podStartSLOduration=2.205946472 podStartE2EDuration="2.205946472s" podCreationTimestamp="2026-02-17 15:37:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:38:00.197026515 +0000 UTC m=+1342.088750403" watchObservedRunningTime="2026-02-17 15:38:00.205946472 +0000 UTC m=+1342.097670320"
Feb 17 15:38:12.924088 master-0 kubenswrapper[26425]: I0217 15:38:12.923972 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:38:12.924908 master-0 kubenswrapper[26425]: E0217 15:38:12.924212 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:38:12.924908 master-0 kubenswrapper[26425]: E0217 15:38:12.924259 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:38:12.924908 master-0 kubenswrapper[26425]: E0217 15:38:12.924331 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:40:14.924309178 +0000 UTC m=+1476.816033006 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:38:18.271563 master-0 kubenswrapper[26425]: I0217 15:38:18.271422 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"]
Feb 17 15:38:18.273592 master-0 kubenswrapper[26425]: I0217 15:38:18.273545 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:18.277421 master-0 kubenswrapper[26425]: I0217 15:38:18.276877 26425 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config"
Feb 17 15:38:18.277421 master-0 kubenswrapper[26425]: I0217 15:38:18.277113 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt"
Feb 17 15:38:18.277421 master-0 kubenswrapper[26425]: I0217 15:38:18.277287 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config"
Feb 17 15:38:18.278526 master-0 kubenswrapper[26425]: I0217 15:38:18.278433 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt"
Feb 17 15:38:18.280450 master-0 kubenswrapper[26425]: I0217 15:38:18.280383 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"]
Feb 17 15:38:18.387405 master-0 kubenswrapper[26425]: I0217 15:38:18.387299 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/e9f9821d-1712-454c-abbd-e2d26852d4d7-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-jd8tg\" (UID: \"e9f9821d-1712-454c-abbd-e2d26852d4d7\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:18.387791 master-0 kubenswrapper[26425]: I0217 15:38:18.387426 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7r4p\" (UniqueName: \"kubernetes.io/projected/e9f9821d-1712-454c-abbd-e2d26852d4d7-kube-api-access-s7r4p\") pod \"sushy-emulator-58f4c9b998-jd8tg\" (UID: \"e9f9821d-1712-454c-abbd-e2d26852d4d7\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:18.387791 master-0 kubenswrapper[26425]: I0217 15:38:18.387582 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/e9f9821d-1712-454c-abbd-e2d26852d4d7-os-client-config\") pod \"sushy-emulator-58f4c9b998-jd8tg\" (UID: \"e9f9821d-1712-454c-abbd-e2d26852d4d7\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:18.488562 master-0 kubenswrapper[26425]: I0217 15:38:18.488410 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/e9f9821d-1712-454c-abbd-e2d26852d4d7-os-client-config\") pod \"sushy-emulator-58f4c9b998-jd8tg\" (UID: \"e9f9821d-1712-454c-abbd-e2d26852d4d7\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:18.489000 master-0 kubenswrapper[26425]: I0217 15:38:18.488953 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7r4p\" (UniqueName: \"kubernetes.io/projected/e9f9821d-1712-454c-abbd-e2d26852d4d7-kube-api-access-s7r4p\") pod \"sushy-emulator-58f4c9b998-jd8tg\" (UID: \"e9f9821d-1712-454c-abbd-e2d26852d4d7\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:18.489119 master-0 kubenswrapper[26425]: I0217 15:38:18.489008 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/e9f9821d-1712-454c-abbd-e2d26852d4d7-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-jd8tg\" (UID: \"e9f9821d-1712-454c-abbd-e2d26852d4d7\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:18.491953 master-0 kubenswrapper[26425]: I0217 15:38:18.491899 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/e9f9821d-1712-454c-abbd-e2d26852d4d7-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-jd8tg\" (UID: \"e9f9821d-1712-454c-abbd-e2d26852d4d7\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:18.494335 master-0 kubenswrapper[26425]: I0217 15:38:18.494247 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/e9f9821d-1712-454c-abbd-e2d26852d4d7-os-client-config\") pod \"sushy-emulator-58f4c9b998-jd8tg\" (UID: \"e9f9821d-1712-454c-abbd-e2d26852d4d7\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:18.514738 master-0 kubenswrapper[26425]: I0217 15:38:18.514664 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7r4p\" (UniqueName: \"kubernetes.io/projected/e9f9821d-1712-454c-abbd-e2d26852d4d7-kube-api-access-s7r4p\") pod \"sushy-emulator-58f4c9b998-jd8tg\" (UID: \"e9f9821d-1712-454c-abbd-e2d26852d4d7\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:18.637798 master-0 kubenswrapper[26425]: I0217 15:38:18.637722 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:19.152727 master-0 kubenswrapper[26425]: I0217 15:38:19.152660 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"]
Feb 17 15:38:19.159597 master-0 kubenswrapper[26425]: W0217 15:38:19.158523 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9f9821d_1712_454c_abbd_e2d26852d4d7.slice/crio-d0af90dad1326f45880eb7ed324726a7a82bffd3a55af0a137b1fd77ed8eb03e WatchSource:0}: Error finding container d0af90dad1326f45880eb7ed324726a7a82bffd3a55af0a137b1fd77ed8eb03e: Status 404 returned error can't find the container with id d0af90dad1326f45880eb7ed324726a7a82bffd3a55af0a137b1fd77ed8eb03e
Feb 17 15:38:19.162700 master-0 kubenswrapper[26425]: I0217 15:38:19.162636 26425 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 15:38:19.382290 master-0 kubenswrapper[26425]: I0217 15:38:19.382202 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg" event={"ID":"e9f9821d-1712-454c-abbd-e2d26852d4d7","Type":"ContainerStarted","Data":"d0af90dad1326f45880eb7ed324726a7a82bffd3a55af0a137b1fd77ed8eb03e"}
Feb 17 15:38:27.453765 master-0 kubenswrapper[26425]: I0217 15:38:27.453696 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg" event={"ID":"e9f9821d-1712-454c-abbd-e2d26852d4d7","Type":"ContainerStarted","Data":"90bd3f65f3963012b9dadeaeb488228a262052d0d5d794ac0dc445ed94569731"}
Feb 17 15:38:27.482823 master-0 kubenswrapper[26425]: I0217 15:38:27.482445 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg" podStartSLOduration=1.595388418 podStartE2EDuration="9.482419145s" podCreationTimestamp="2026-02-17 15:38:18 +0000 UTC" firstStartedPulling="2026-02-17 15:38:19.162595711 +0000 UTC m=+1361.054319539" lastFinishedPulling="2026-02-17 15:38:27.049626408 +0000 UTC m=+1368.941350266" observedRunningTime="2026-02-17 15:38:27.475576189 +0000 UTC m=+1369.367300017" watchObservedRunningTime="2026-02-17 15:38:27.482419145 +0000 UTC m=+1369.374142993"
Feb 17 15:38:28.638848 master-0 kubenswrapper[26425]: I0217 15:38:28.638746 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:28.638848 master-0 kubenswrapper[26425]: I0217 15:38:28.638846 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:28.654968 master-0 kubenswrapper[26425]: I0217 15:38:28.654876 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:29.494016 master-0 kubenswrapper[26425]: I0217 15:38:29.493920 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:38:32.588796 master-0 kubenswrapper[26425]: I0217 15:38:32.588728 26425 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 17 15:38:32.589745 master-0 kubenswrapper[26425]: I0217 15:38:32.589218 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="cluster-policy-controller" containerID="cri-o://7bd7a427fdfea568f9e25f8ac1dfa94717d2fe4a7b16f61327856994d3fecf37" gracePeriod=30
Feb 17 15:38:32.590731 master-0 kubenswrapper[26425]: I0217 15:38:32.589393 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://e9aecde5e6438f850dbad5ae273e3c99bc8982f855499ceec4aa52f9bb199b51" gracePeriod=30
Feb 17 15:38:32.590917 master-0 kubenswrapper[26425]: I0217 15:38:32.589381 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager" containerID="cri-o://a1bf1a7e1900bf2718fe7ec35df9cdfd995d49924e5c050fc18a197ec60d89c3" gracePeriod=30
Feb 17 15:38:32.591043 master-0 kubenswrapper[26425]: I0217 15:38:32.589360 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://d7c12fb1b92d28ef7ba81926d7b090d49d50669135d83d19da43eab3563fbe49" gracePeriod=30
Feb 17 15:38:32.591094 master-0 kubenswrapper[26425]: I0217 15:38:32.590871 26425 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 17 15:38:32.591666 master-0 kubenswrapper[26425]: E0217 15:38:32.591625 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager"
Feb 17 15:38:32.591735 master-0 kubenswrapper[26425]: I0217 15:38:32.591673 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager"
Feb 17 15:38:32.591735 master-0 kubenswrapper[26425]: E0217 15:38:32.591715 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="cluster-policy-controller"
Feb 17 15:38:32.591832 master-0 kubenswrapper[26425]: I0217 15:38:32.591735 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="cluster-policy-controller"
Feb 17 15:38:32.591832 master-0 kubenswrapper[26425]: E0217 15:38:32.591806 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager-cert-syncer"
Feb 17 15:38:32.591832 master-0 kubenswrapper[26425]: I0217 15:38:32.591826 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager-cert-syncer"
Feb 17 15:38:32.592075 master-0 kubenswrapper[26425]: E0217 15:38:32.591856 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager-recovery-controller"
Feb 17 15:38:32.592075 master-0 kubenswrapper[26425]: I0217 15:38:32.591875 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager-recovery-controller"
Feb 17 15:38:32.592272 master-0 kubenswrapper[26425]: I0217 15:38:32.592168 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager-cert-syncer"
Feb 17 15:38:32.592272 master-0 kubenswrapper[26425]: I0217 15:38:32.592202 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager"
Feb 17 15:38:32.592272 master-0 kubenswrapper[26425]: I0217 15:38:32.592259 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager-recovery-controller"
Feb 17 15:38:32.592394 master-0 kubenswrapper[26425]: I0217 15:38:32.592294 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="cluster-policy-controller"
Feb 17 15:38:32.592681 master-0 kubenswrapper[26425]: E0217 15:38:32.592647 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager"
Feb 17 15:38:32.592748 master-0 kubenswrapper[26425]: I0217 15:38:32.592687 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager"
Feb 17 15:38:32.592983 master-0 kubenswrapper[26425]: I0217 15:38:32.592959 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerName="kube-controller-manager"
Feb 17 15:38:32.645399 master-0 kubenswrapper[26425]: I0217 15:38:32.645312 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a0ae6169bc93c3de05f445421dfcd004-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a0ae6169bc93c3de05f445421dfcd004\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:38:32.645619 master-0 kubenswrapper[26425]: I0217 15:38:32.645450 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a0ae6169bc93c3de05f445421dfcd004-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a0ae6169bc93c3de05f445421dfcd004\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:38:32.746771 master-0 kubenswrapper[26425]: I0217 15:38:32.746696 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a0ae6169bc93c3de05f445421dfcd004-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a0ae6169bc93c3de05f445421dfcd004\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:38:32.747031 master-0 kubenswrapper[26425]: I0217 15:38:32.746801 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a0ae6169bc93c3de05f445421dfcd004-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a0ae6169bc93c3de05f445421dfcd004\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:38:32.747031 master-0 kubenswrapper[26425]: I0217 15:38:32.746830 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a0ae6169bc93c3de05f445421dfcd004-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a0ae6169bc93c3de05f445421dfcd004\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:38:32.747316 master-0 kubenswrapper[26425]: I0217 15:38:32.747161 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a0ae6169bc93c3de05f445421dfcd004-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a0ae6169bc93c3de05f445421dfcd004\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:38:32.872151 master-0 kubenswrapper[26425]: I0217 15:38:32.872046 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_eaff449c5bcc0e8cb13ed26ccbcdd311/kube-controller-manager-cert-syncer/0.log"
Feb 17 15:38:32.873576 master-0 kubenswrapper[26425]: I0217 15:38:32.873525 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_eaff449c5bcc0e8cb13ed26ccbcdd311/kube-controller-manager/0.log"
Feb 17 15:38:32.873708 master-0 kubenswrapper[26425]: I0217 15:38:32.873662 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:38:32.877888 master-0 kubenswrapper[26425]: I0217 15:38:32.877812 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="eaff449c5bcc0e8cb13ed26ccbcdd311" podUID="a0ae6169bc93c3de05f445421dfcd004"
Feb 17 15:38:32.949990 master-0 kubenswrapper[26425]: I0217 15:38:32.949942 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eaff449c5bcc0e8cb13ed26ccbcdd311-cert-dir\") pod \"eaff449c5bcc0e8cb13ed26ccbcdd311\" (UID: \"eaff449c5bcc0e8cb13ed26ccbcdd311\") "
Feb 17 15:38:32.950400 master-0 kubenswrapper[26425]: I0217 15:38:32.950093 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eaff449c5bcc0e8cb13ed26ccbcdd311-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "eaff449c5bcc0e8cb13ed26ccbcdd311" (UID: "eaff449c5bcc0e8cb13ed26ccbcdd311"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:38:32.950531 master-0 kubenswrapper[26425]: I0217 15:38:32.950359 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eaff449c5bcc0e8cb13ed26ccbcdd311-resource-dir\") pod \"eaff449c5bcc0e8cb13ed26ccbcdd311\" (UID: \"eaff449c5bcc0e8cb13ed26ccbcdd311\") "
Feb 17 15:38:32.950682 master-0 kubenswrapper[26425]: I0217 15:38:32.950651 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eaff449c5bcc0e8cb13ed26ccbcdd311-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "eaff449c5bcc0e8cb13ed26ccbcdd311" (UID: "eaff449c5bcc0e8cb13ed26ccbcdd311"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:38:32.951341 master-0 kubenswrapper[26425]: I0217 15:38:32.951286 26425 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eaff449c5bcc0e8cb13ed26ccbcdd311-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:38:32.951451 master-0 kubenswrapper[26425]: I0217 15:38:32.951351 26425 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eaff449c5bcc0e8cb13ed26ccbcdd311-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:38:33.536654 master-0 kubenswrapper[26425]: I0217 15:38:33.536553 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_eaff449c5bcc0e8cb13ed26ccbcdd311/kube-controller-manager-cert-syncer/0.log"
Feb 17 15:38:33.538689 master-0 kubenswrapper[26425]: I0217 15:38:33.538637 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_eaff449c5bcc0e8cb13ed26ccbcdd311/kube-controller-manager/0.log"
Feb 17 15:38:33.538814 master-0 kubenswrapper[26425]: I0217 15:38:33.538763 26425 generic.go:334] "Generic (PLEG): container finished" podID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerID="a1bf1a7e1900bf2718fe7ec35df9cdfd995d49924e5c050fc18a197ec60d89c3" exitCode=0
Feb 17 15:38:33.538814 master-0 kubenswrapper[26425]: I0217 15:38:33.538798 26425 generic.go:334] "Generic (PLEG): container finished" podID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerID="d7c12fb1b92d28ef7ba81926d7b090d49d50669135d83d19da43eab3563fbe49" exitCode=0
Feb 17 15:38:33.538814 master-0 kubenswrapper[26425]: I0217 15:38:33.538814 26425 generic.go:334] "Generic (PLEG): container finished" podID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerID="e9aecde5e6438f850dbad5ae273e3c99bc8982f855499ceec4aa52f9bb199b51" exitCode=2
Feb 17 15:38:33.538995 master-0 kubenswrapper[26425]: I0217 15:38:33.538831 26425 generic.go:334] "Generic (PLEG): container finished" podID="eaff449c5bcc0e8cb13ed26ccbcdd311" containerID="7bd7a427fdfea568f9e25f8ac1dfa94717d2fe4a7b16f61327856994d3fecf37" exitCode=0
Feb 17 15:38:33.538995 master-0 kubenswrapper[26425]: I0217 15:38:33.538911 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 17 15:38:33.538995 master-0 kubenswrapper[26425]: I0217 15:38:33.538943 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b777f11905a08a05c666a26ddcdbc52049719c010842fcb74dcbf097b130693"
Feb 17 15:38:33.538995 master-0 kubenswrapper[26425]: I0217 15:38:33.538990 26425 scope.go:117] "RemoveContainer" containerID="57141bfc1a0a1d8e52afad3e9b378c7a4dd9c37db878ece93dd489f7a847dcce"
Feb 17 15:38:33.542685 master-0 kubenswrapper[26425]: I0217 15:38:33.542327 26425 generic.go:334] "Generic (PLEG): container finished" podID="494aed14-6462-4972-94e0-87a665108366" containerID="c8e792ad28706c564c1017dd3d549ed9bef232927d9df0a95f3100bc3b8809a3" exitCode=0
Feb 17 15:38:33.542685 master-0 kubenswrapper[26425]: I0217 15:38:33.542388 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"494aed14-6462-4972-94e0-87a665108366","Type":"ContainerDied","Data":"c8e792ad28706c564c1017dd3d549ed9bef232927d9df0a95f3100bc3b8809a3"}
Feb 17 15:38:33.543628 master-0 kubenswrapper[26425]: I0217 15:38:33.543571 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="eaff449c5bcc0e8cb13ed26ccbcdd311" podUID="a0ae6169bc93c3de05f445421dfcd004"
Feb 17 15:38:33.611816 master-0 kubenswrapper[26425]: I0217 15:38:33.611745 26425 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="eaff449c5bcc0e8cb13ed26ccbcdd311" podUID="a0ae6169bc93c3de05f445421dfcd004"
Feb 17 15:38:34.410823 master-0 kubenswrapper[26425]: I0217 15:38:34.410715 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaff449c5bcc0e8cb13ed26ccbcdd311" path="/var/lib/kubelet/pods/eaff449c5bcc0e8cb13ed26ccbcdd311/volumes"
Feb 17 15:38:34.558051 master-0 kubenswrapper[26425]: I0217 15:38:34.557932 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_eaff449c5bcc0e8cb13ed26ccbcdd311/kube-controller-manager-cert-syncer/0.log"
Feb 17 15:38:35.046761 master-0 kubenswrapper[26425]: I0217 15:38:35.046684 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0"
Feb 17 15:38:35.090667 master-0 kubenswrapper[26425]: I0217 15:38:35.090620 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/494aed14-6462-4972-94e0-87a665108366-kubelet-dir\") pod \"494aed14-6462-4972-94e0-87a665108366\" (UID: \"494aed14-6462-4972-94e0-87a665108366\") "
Feb 17 15:38:35.091006 master-0 kubenswrapper[26425]: I0217 15:38:35.090769 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/494aed14-6462-4972-94e0-87a665108366-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "494aed14-6462-4972-94e0-87a665108366" (UID: "494aed14-6462-4972-94e0-87a665108366"). InnerVolumeSpecName "kubelet-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:38:35.091089 master-0 kubenswrapper[26425]: I0217 15:38:35.091073 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/494aed14-6462-4972-94e0-87a665108366-var-lock\") pod \"494aed14-6462-4972-94e0-87a665108366\" (UID: \"494aed14-6462-4972-94e0-87a665108366\") " Feb 17 15:38:35.091279 master-0 kubenswrapper[26425]: I0217 15:38:35.091097 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/494aed14-6462-4972-94e0-87a665108366-var-lock" (OuterVolumeSpecName: "var-lock") pod "494aed14-6462-4972-94e0-87a665108366" (UID: "494aed14-6462-4972-94e0-87a665108366"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:38:35.091369 master-0 kubenswrapper[26425]: I0217 15:38:35.091264 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/494aed14-6462-4972-94e0-87a665108366-kube-api-access\") pod \"494aed14-6462-4972-94e0-87a665108366\" (UID: \"494aed14-6462-4972-94e0-87a665108366\") " Feb 17 15:38:35.091910 master-0 kubenswrapper[26425]: I0217 15:38:35.091890 26425 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/494aed14-6462-4972-94e0-87a665108366-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 17 15:38:35.092019 master-0 kubenswrapper[26425]: I0217 15:38:35.092004 26425 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/494aed14-6462-4972-94e0-87a665108366-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 17 15:38:35.094343 master-0 kubenswrapper[26425]: I0217 15:38:35.094296 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/494aed14-6462-4972-94e0-87a665108366-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "494aed14-6462-4972-94e0-87a665108366" (UID: "494aed14-6462-4972-94e0-87a665108366"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:38:35.193831 master-0 kubenswrapper[26425]: I0217 15:38:35.193747 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/494aed14-6462-4972-94e0-87a665108366-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 17 15:38:35.570544 master-0 kubenswrapper[26425]: I0217 15:38:35.570427 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"494aed14-6462-4972-94e0-87a665108366","Type":"ContainerDied","Data":"ff5665e67457f29ecd27df96870ff2ee0948ebe88e07f33ad16e7a39f77a8b63"} Feb 17 15:38:35.570544 master-0 kubenswrapper[26425]: I0217 15:38:35.570544 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff5665e67457f29ecd27df96870ff2ee0948ebe88e07f33ad16e7a39f77a8b63" Feb 17 15:38:35.570954 master-0 kubenswrapper[26425]: I0217 15:38:35.570543 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Feb 17 15:38:46.395374 master-0 kubenswrapper[26425]: I0217 15:38:46.395266 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:38:46.420505 master-0 kubenswrapper[26425]: I0217 15:38:46.420414 26425 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ee61db88-ecfc-4521-89a5-170cc137340b" Feb 17 15:38:46.420505 master-0 kubenswrapper[26425]: I0217 15:38:46.420505 26425 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ee61db88-ecfc-4521-89a5-170cc137340b" Feb 17 15:38:46.440107 master-0 kubenswrapper[26425]: I0217 15:38:46.439227 26425 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:38:46.447133 master-0 kubenswrapper[26425]: I0217 15:38:46.447017 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 17 15:38:46.459493 master-0 kubenswrapper[26425]: I0217 15:38:46.459251 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:38:46.466909 master-0 kubenswrapper[26425]: I0217 15:38:46.462973 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 17 15:38:46.470528 master-0 kubenswrapper[26425]: I0217 15:38:46.469924 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 17 15:38:46.502792 master-0 kubenswrapper[26425]: W0217 15:38:46.502731 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0ae6169bc93c3de05f445421dfcd004.slice/crio-7b6379f6ce5e57c4a2a44d6c0e6b7a82c6b08b083195527039c179810f97cd45 WatchSource:0}: Error finding container 7b6379f6ce5e57c4a2a44d6c0e6b7a82c6b08b083195527039c179810f97cd45: Status 404 returned error can't find the container with id 7b6379f6ce5e57c4a2a44d6c0e6b7a82c6b08b083195527039c179810f97cd45 Feb 17 15:38:46.681319 master-0 kubenswrapper[26425]: I0217 15:38:46.681152 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a0ae6169bc93c3de05f445421dfcd004","Type":"ContainerStarted","Data":"7b6379f6ce5e57c4a2a44d6c0e6b7a82c6b08b083195527039c179810f97cd45"} Feb 17 15:38:47.711170 master-0 kubenswrapper[26425]: I0217 15:38:47.711051 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a0ae6169bc93c3de05f445421dfcd004","Type":"ContainerStarted","Data":"a916eacc28a93ff00c63ec6edd15e3998097b407705b8b07464bbd2d1e0a3da9"} Feb 17 15:38:47.711170 master-0 kubenswrapper[26425]: I0217 15:38:47.711160 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"a0ae6169bc93c3de05f445421dfcd004","Type":"ContainerStarted","Data":"2faf31c3a52af84ace6fc6512dad50adf27023fc252641832207b48449788002"} Feb 17 15:38:47.712027 master-0 kubenswrapper[26425]: I0217 15:38:47.711184 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a0ae6169bc93c3de05f445421dfcd004","Type":"ContainerStarted","Data":"fce94ca4ceaeedf37760468522a666ed589c54458980a0d318df5799c7bce1e8"} Feb 17 15:38:48.726094 master-0 kubenswrapper[26425]: I0217 15:38:48.725981 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a0ae6169bc93c3de05f445421dfcd004","Type":"ContainerStarted","Data":"5aae68134fddfe53e0398bfffce91b9fd98a47769181725cf35fb5358607b8d4"} Feb 17 15:38:48.768054 master-0 kubenswrapper[26425]: I0217 15:38:48.767926 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.767898448 podStartE2EDuration="2.767898448s" podCreationTimestamp="2026-02-17 15:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:38:48.754514754 +0000 UTC m=+1390.646238652" watchObservedRunningTime="2026-02-17 15:38:48.767898448 +0000 UTC m=+1390.659622306" Feb 17 15:38:56.461436 master-0 kubenswrapper[26425]: I0217 15:38:56.461350 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:38:56.462591 master-0 kubenswrapper[26425]: I0217 15:38:56.461448 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:38:56.462591 master-0 kubenswrapper[26425]: I0217 15:38:56.461513 26425 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:38:56.462591 master-0 kubenswrapper[26425]: I0217 15:38:56.462010 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:38:56.467572 master-0 kubenswrapper[26425]: I0217 15:38:56.467503 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:38:56.470980 master-0 kubenswrapper[26425]: I0217 15:38:56.470918 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:38:56.829410 master-0 kubenswrapper[26425]: I0217 15:38:56.829245 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:38:57.838723 master-0 kubenswrapper[26425]: I0217 15:38:57.838615 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 17 15:39:05.753958 master-0 kubenswrapper[26425]: I0217 15:39:05.753858 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c"] Feb 17 15:39:05.754692 master-0 kubenswrapper[26425]: E0217 15:39:05.754362 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="494aed14-6462-4972-94e0-87a665108366" containerName="installer" Feb 17 15:39:05.754692 master-0 kubenswrapper[26425]: I0217 15:39:05.754385 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="494aed14-6462-4972-94e0-87a665108366" containerName="installer" Feb 17 15:39:05.754890 master-0 kubenswrapper[26425]: I0217 15:39:05.754854 26425 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="494aed14-6462-4972-94e0-87a665108366" containerName="installer" Feb 17 15:39:05.755994 master-0 kubenswrapper[26425]: I0217 15:39:05.755960 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c" Feb 17 15:39:05.819969 master-0 kubenswrapper[26425]: I0217 15:39:05.819899 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c"] Feb 17 15:39:05.879858 master-0 kubenswrapper[26425]: I0217 15:39:05.879764 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5lqx\" (UniqueName: \"kubernetes.io/projected/63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03-kube-api-access-g5lqx\") pod \"nova-console-poller-76bf7fdbf7-kfl2c\" (UID: \"63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03\") " pod="sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c" Feb 17 15:39:05.879858 master-0 kubenswrapper[26425]: I0217 15:39:05.879843 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03-os-client-config\") pod \"nova-console-poller-76bf7fdbf7-kfl2c\" (UID: \"63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03\") " pod="sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c" Feb 17 15:39:05.981631 master-0 kubenswrapper[26425]: I0217 15:39:05.981540 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5lqx\" (UniqueName: \"kubernetes.io/projected/63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03-kube-api-access-g5lqx\") pod \"nova-console-poller-76bf7fdbf7-kfl2c\" (UID: \"63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03\") " pod="sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c" Feb 17 15:39:05.981857 master-0 kubenswrapper[26425]: I0217 15:39:05.981608 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" 
(UniqueName: \"kubernetes.io/secret/63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03-os-client-config\") pod \"nova-console-poller-76bf7fdbf7-kfl2c\" (UID: \"63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03\") " pod="sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c" Feb 17 15:39:05.985623 master-0 kubenswrapper[26425]: I0217 15:39:05.985581 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03-os-client-config\") pod \"nova-console-poller-76bf7fdbf7-kfl2c\" (UID: \"63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03\") " pod="sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c" Feb 17 15:39:06.006790 master-0 kubenswrapper[26425]: I0217 15:39:06.006711 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5lqx\" (UniqueName: \"kubernetes.io/projected/63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03-kube-api-access-g5lqx\") pod \"nova-console-poller-76bf7fdbf7-kfl2c\" (UID: \"63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03\") " pod="sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c" Feb 17 15:39:06.072431 master-0 kubenswrapper[26425]: I0217 15:39:06.072359 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c" Feb 17 15:39:06.569313 master-0 kubenswrapper[26425]: W0217 15:39:06.569225 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a0ee0f_2b1a_4b4c_acf1_0d16a4bbaf03.slice/crio-e112989f18d4aa60592be4cf124819ba72ec10d35c0f17f9e6d38d1a0cbef41c WatchSource:0}: Error finding container e112989f18d4aa60592be4cf124819ba72ec10d35c0f17f9e6d38d1a0cbef41c: Status 404 returned error can't find the container with id e112989f18d4aa60592be4cf124819ba72ec10d35c0f17f9e6d38d1a0cbef41c Feb 17 15:39:06.570950 master-0 kubenswrapper[26425]: I0217 15:39:06.570886 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c"] Feb 17 15:39:06.925375 master-0 kubenswrapper[26425]: I0217 15:39:06.925286 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c" event={"ID":"63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03","Type":"ContainerStarted","Data":"e112989f18d4aa60592be4cf124819ba72ec10d35c0f17f9e6d38d1a0cbef41c"} Feb 17 15:39:12.978705 master-0 kubenswrapper[26425]: I0217 15:39:12.978622 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c" event={"ID":"63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03","Type":"ContainerStarted","Data":"46a5b5b57612ef4dd5bf68706358fd8600f155a388bde19a1a398d8ca73855be"} Feb 17 15:39:12.978705 master-0 kubenswrapper[26425]: I0217 15:39:12.978701 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c" event={"ID":"63a0ee0f-2b1a-4b4c-acf1-0d16a4bbaf03","Type":"ContainerStarted","Data":"11ae247f36e81aa332b09e7a3fc6207e08d66c4d9f6917923529a9eef186a624"} Feb 17 15:39:13.010212 master-0 kubenswrapper[26425]: I0217 15:39:13.010108 26425 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c" podStartSLOduration=2.081734285 podStartE2EDuration="8.010083374s" podCreationTimestamp="2026-02-17 15:39:05 +0000 UTC" firstStartedPulling="2026-02-17 15:39:06.573587503 +0000 UTC m=+1408.465311331" lastFinishedPulling="2026-02-17 15:39:12.501936592 +0000 UTC m=+1414.393660420" observedRunningTime="2026-02-17 15:39:13.004443168 +0000 UTC m=+1414.896167076" watchObservedRunningTime="2026-02-17 15:39:13.010083374 +0000 UTC m=+1414.901807232" Feb 17 15:39:41.400313 master-0 kubenswrapper[26425]: I0217 15:39:41.400236 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v"] Feb 17 15:39:41.403659 master-0 kubenswrapper[26425]: I0217 15:39:41.403565 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" Feb 17 15:39:41.424969 master-0 kubenswrapper[26425]: I0217 15:39:41.424746 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v"] Feb 17 15:39:41.533296 master-0 kubenswrapper[26425]: I0217 15:39:41.533209 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flr9r\" (UniqueName: \"kubernetes.io/projected/e2c62d9d-db54-4242-a327-c62595efa4ae-kube-api-access-flr9r\") pod \"nova-console-recorder-7ccbcf9885-b7b8v\" (UID: \"e2c62d9d-db54-4242-a327-c62595efa4ae\") " pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" Feb 17 15:39:41.533558 master-0 kubenswrapper[26425]: I0217 15:39:41.533328 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/e2c62d9d-db54-4242-a327-c62595efa4ae-os-client-config\") pod \"nova-console-recorder-7ccbcf9885-b7b8v\" (UID: \"e2c62d9d-db54-4242-a327-c62595efa4ae\") " 
pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" Feb 17 15:39:41.533616 master-0 kubenswrapper[26425]: I0217 15:39:41.533578 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/e2c62d9d-db54-4242-a327-c62595efa4ae-nova-console-recordings-pv\") pod \"nova-console-recorder-7ccbcf9885-b7b8v\" (UID: \"e2c62d9d-db54-4242-a327-c62595efa4ae\") " pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" Feb 17 15:39:41.635399 master-0 kubenswrapper[26425]: I0217 15:39:41.635314 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/e2c62d9d-db54-4242-a327-c62595efa4ae-nova-console-recordings-pv\") pod \"nova-console-recorder-7ccbcf9885-b7b8v\" (UID: \"e2c62d9d-db54-4242-a327-c62595efa4ae\") " pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" Feb 17 15:39:41.635678 master-0 kubenswrapper[26425]: I0217 15:39:41.635450 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flr9r\" (UniqueName: \"kubernetes.io/projected/e2c62d9d-db54-4242-a327-c62595efa4ae-kube-api-access-flr9r\") pod \"nova-console-recorder-7ccbcf9885-b7b8v\" (UID: \"e2c62d9d-db54-4242-a327-c62595efa4ae\") " pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" Feb 17 15:39:41.635678 master-0 kubenswrapper[26425]: I0217 15:39:41.635565 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/e2c62d9d-db54-4242-a327-c62595efa4ae-os-client-config\") pod \"nova-console-recorder-7ccbcf9885-b7b8v\" (UID: \"e2c62d9d-db54-4242-a327-c62595efa4ae\") " pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" Feb 17 15:39:41.641282 master-0 kubenswrapper[26425]: I0217 15:39:41.641210 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"os-client-config\" (UniqueName: \"kubernetes.io/secret/e2c62d9d-db54-4242-a327-c62595efa4ae-os-client-config\") pod \"nova-console-recorder-7ccbcf9885-b7b8v\" (UID: \"e2c62d9d-db54-4242-a327-c62595efa4ae\") " pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" Feb 17 15:39:41.663925 master-0 kubenswrapper[26425]: I0217 15:39:41.663656 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flr9r\" (UniqueName: \"kubernetes.io/projected/e2c62d9d-db54-4242-a327-c62595efa4ae-kube-api-access-flr9r\") pod \"nova-console-recorder-7ccbcf9885-b7b8v\" (UID: \"e2c62d9d-db54-4242-a327-c62595efa4ae\") " pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" Feb 17 15:39:42.300698 master-0 kubenswrapper[26425]: I0217 15:39:42.300627 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/e2c62d9d-db54-4242-a327-c62595efa4ae-nova-console-recordings-pv\") pod \"nova-console-recorder-7ccbcf9885-b7b8v\" (UID: \"e2c62d9d-db54-4242-a327-c62595efa4ae\") " pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" Feb 17 15:39:42.329552 master-0 kubenswrapper[26425]: I0217 15:39:42.329450 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" Feb 17 15:39:42.766987 master-0 kubenswrapper[26425]: I0217 15:39:42.766903 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v"] Feb 17 15:39:43.279331 master-0 kubenswrapper[26425]: I0217 15:39:43.279187 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" event={"ID":"e2c62d9d-db54-4242-a327-c62595efa4ae","Type":"ContainerStarted","Data":"28384ee670c168998698bd2199e6f8a70056bfc30f46e26fb49043eef14dc8de"} Feb 17 15:39:56.435539 master-0 kubenswrapper[26425]: I0217 15:39:56.435410 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" event={"ID":"e2c62d9d-db54-4242-a327-c62595efa4ae","Type":"ContainerStarted","Data":"a0b56dc640876bcd9c09e59881b6d5384621596597e285dca39eae7488751a5c"} Feb 17 15:39:57.448004 master-0 kubenswrapper[26425]: I0217 15:39:57.447922 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" event={"ID":"e2c62d9d-db54-4242-a327-c62595efa4ae","Type":"ContainerStarted","Data":"7790f83c2cc7da815229aa9182d156ca6e161cf60b01b2ff723acdb617da38f0"} Feb 17 15:40:03.085831 master-0 kubenswrapper[26425]: I0217 15:40:03.085711 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v" podStartSLOduration=8.232452914 podStartE2EDuration="22.085686778s" podCreationTimestamp="2026-02-17 15:39:41 +0000 UTC" firstStartedPulling="2026-02-17 15:39:42.76661802 +0000 UTC m=+1444.658341848" lastFinishedPulling="2026-02-17 15:39:56.619851854 +0000 UTC m=+1458.511575712" observedRunningTime="2026-02-17 15:40:03.077317125 +0000 UTC m=+1464.969040983" watchObservedRunningTime="2026-02-17 15:40:03.085686778 +0000 UTC m=+1464.977410636" Feb 17 15:40:14.985028 master-0 
kubenswrapper[26425]: I0217 15:40:14.984910 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:40:14.998549 master-0 kubenswrapper[26425]: E0217 15:40:14.985248 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:40:14.998549 master-0 kubenswrapper[26425]: E0217 15:40:14.985308 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:40:14.998549 master-0 kubenswrapper[26425]: E0217 15:40:14.985399 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:42:16.985368499 +0000 UTC m=+1598.877092347 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:40:28.449920 master-0 kubenswrapper[26425]: I0217 15:40:28.449859 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx"] Feb 17 15:40:28.452116 master-0 kubenswrapper[26425]: I0217 15:40:28.452077 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" Feb 17 15:40:28.473952 master-0 kubenswrapper[26425]: I0217 15:40:28.473913 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx"] Feb 17 15:40:28.528247 master-0 kubenswrapper[26425]: I0217 15:40:28.528171 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8043042d-31e1-4f30-9bf1-41314d203bb9-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx\" (UID: \"8043042d-31e1-4f30-9bf1-41314d203bb9\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" Feb 17 15:40:28.528507 master-0 kubenswrapper[26425]: I0217 15:40:28.528356 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8043042d-31e1-4f30-9bf1-41314d203bb9-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx\" (UID: \"8043042d-31e1-4f30-9bf1-41314d203bb9\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" Feb 17 15:40:28.528507 master-0 kubenswrapper[26425]: I0217 15:40:28.528399 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p6sr\" (UniqueName: \"kubernetes.io/projected/8043042d-31e1-4f30-9bf1-41314d203bb9-kube-api-access-9p6sr\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx\" (UID: \"8043042d-31e1-4f30-9bf1-41314d203bb9\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" Feb 17 15:40:28.629965 master-0 kubenswrapper[26425]: I0217 15:40:28.629844 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8043042d-31e1-4f30-9bf1-41314d203bb9-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx\" (UID: \"8043042d-31e1-4f30-9bf1-41314d203bb9\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" Feb 17 15:40:28.630325 master-0 kubenswrapper[26425]: I0217 15:40:28.629998 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8043042d-31e1-4f30-9bf1-41314d203bb9-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx\" (UID: \"8043042d-31e1-4f30-9bf1-41314d203bb9\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" Feb 17 15:40:28.630325 master-0 kubenswrapper[26425]: I0217 15:40:28.630028 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p6sr\" (UniqueName: \"kubernetes.io/projected/8043042d-31e1-4f30-9bf1-41314d203bb9-kube-api-access-9p6sr\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx\" (UID: \"8043042d-31e1-4f30-9bf1-41314d203bb9\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" Feb 17 15:40:28.631061 master-0 kubenswrapper[26425]: I0217 15:40:28.630988 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8043042d-31e1-4f30-9bf1-41314d203bb9-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx\" (UID: \"8043042d-31e1-4f30-9bf1-41314d203bb9\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" Feb 17 15:40:28.631314 master-0 kubenswrapper[26425]: I0217 15:40:28.631195 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8043042d-31e1-4f30-9bf1-41314d203bb9-bundle\") pod 
\"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx\" (UID: \"8043042d-31e1-4f30-9bf1-41314d203bb9\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" Feb 17 15:40:28.663317 master-0 kubenswrapper[26425]: I0217 15:40:28.663263 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p6sr\" (UniqueName: \"kubernetes.io/projected/8043042d-31e1-4f30-9bf1-41314d203bb9-kube-api-access-9p6sr\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx\" (UID: \"8043042d-31e1-4f30-9bf1-41314d203bb9\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" Feb 17 15:40:28.776639 master-0 kubenswrapper[26425]: I0217 15:40:28.776481 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" Feb 17 15:40:29.251697 master-0 kubenswrapper[26425]: I0217 15:40:29.251600 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx"] Feb 17 15:40:29.252053 master-0 kubenswrapper[26425]: W0217 15:40:29.251708 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8043042d_31e1_4f30_9bf1_41314d203bb9.slice/crio-bc16dae71fb27994dcdc5fcb0cc13f5a5958137cc955d3721c655851e9b3201f WatchSource:0}: Error finding container bc16dae71fb27994dcdc5fcb0cc13f5a5958137cc955d3721c655851e9b3201f: Status 404 returned error can't find the container with id bc16dae71fb27994dcdc5fcb0cc13f5a5958137cc955d3721c655851e9b3201f Feb 17 15:40:29.798433 master-0 kubenswrapper[26425]: I0217 15:40:29.798298 26425 generic.go:334] "Generic (PLEG): container finished" podID="8043042d-31e1-4f30-9bf1-41314d203bb9" containerID="f08f1f37f5d4bc663f0c4af463c8b97ac7e12636cc49fbf470e6c9ea4cc4cffb" exitCode=0 Feb 
17 15:40:29.798433 master-0 kubenswrapper[26425]: I0217 15:40:29.798357 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" event={"ID":"8043042d-31e1-4f30-9bf1-41314d203bb9","Type":"ContainerDied","Data":"f08f1f37f5d4bc663f0c4af463c8b97ac7e12636cc49fbf470e6c9ea4cc4cffb"} Feb 17 15:40:29.798433 master-0 kubenswrapper[26425]: I0217 15:40:29.798391 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" event={"ID":"8043042d-31e1-4f30-9bf1-41314d203bb9","Type":"ContainerStarted","Data":"bc16dae71fb27994dcdc5fcb0cc13f5a5958137cc955d3721c655851e9b3201f"} Feb 17 15:40:31.823906 master-0 kubenswrapper[26425]: I0217 15:40:31.823783 26425 generic.go:334] "Generic (PLEG): container finished" podID="8043042d-31e1-4f30-9bf1-41314d203bb9" containerID="efc682eb5a762dca09a01ed789df9444e08549d898292fe7248078f51f3ff676" exitCode=0 Feb 17 15:40:31.824969 master-0 kubenswrapper[26425]: I0217 15:40:31.823899 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" event={"ID":"8043042d-31e1-4f30-9bf1-41314d203bb9","Type":"ContainerDied","Data":"efc682eb5a762dca09a01ed789df9444e08549d898292fe7248078f51f3ff676"} Feb 17 15:40:32.837240 master-0 kubenswrapper[26425]: I0217 15:40:32.837129 26425 generic.go:334] "Generic (PLEG): container finished" podID="8043042d-31e1-4f30-9bf1-41314d203bb9" containerID="a01a201e8da725b3d4413947ca09bd4fb57a048835246ff3715698ca0caa0524" exitCode=0 Feb 17 15:40:32.837240 master-0 kubenswrapper[26425]: I0217 15:40:32.837215 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" 
event={"ID":"8043042d-31e1-4f30-9bf1-41314d203bb9","Type":"ContainerDied","Data":"a01a201e8da725b3d4413947ca09bd4fb57a048835246ff3715698ca0caa0524"} Feb 17 15:40:34.176332 master-0 kubenswrapper[26425]: I0217 15:40:34.176287 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" Feb 17 15:40:34.235298 master-0 kubenswrapper[26425]: I0217 15:40:34.235060 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8043042d-31e1-4f30-9bf1-41314d203bb9-bundle\") pod \"8043042d-31e1-4f30-9bf1-41314d203bb9\" (UID: \"8043042d-31e1-4f30-9bf1-41314d203bb9\") " Feb 17 15:40:34.235603 master-0 kubenswrapper[26425]: I0217 15:40:34.235549 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p6sr\" (UniqueName: \"kubernetes.io/projected/8043042d-31e1-4f30-9bf1-41314d203bb9-kube-api-access-9p6sr\") pod \"8043042d-31e1-4f30-9bf1-41314d203bb9\" (UID: \"8043042d-31e1-4f30-9bf1-41314d203bb9\") " Feb 17 15:40:34.235753 master-0 kubenswrapper[26425]: I0217 15:40:34.235708 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8043042d-31e1-4f30-9bf1-41314d203bb9-util\") pod \"8043042d-31e1-4f30-9bf1-41314d203bb9\" (UID: \"8043042d-31e1-4f30-9bf1-41314d203bb9\") " Feb 17 15:40:34.237199 master-0 kubenswrapper[26425]: I0217 15:40:34.237137 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8043042d-31e1-4f30-9bf1-41314d203bb9-bundle" (OuterVolumeSpecName: "bundle") pod "8043042d-31e1-4f30-9bf1-41314d203bb9" (UID: "8043042d-31e1-4f30-9bf1-41314d203bb9"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:40:34.238467 master-0 kubenswrapper[26425]: I0217 15:40:34.238404 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8043042d-31e1-4f30-9bf1-41314d203bb9-kube-api-access-9p6sr" (OuterVolumeSpecName: "kube-api-access-9p6sr") pod "8043042d-31e1-4f30-9bf1-41314d203bb9" (UID: "8043042d-31e1-4f30-9bf1-41314d203bb9"). InnerVolumeSpecName "kube-api-access-9p6sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:40:34.263089 master-0 kubenswrapper[26425]: I0217 15:40:34.263029 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8043042d-31e1-4f30-9bf1-41314d203bb9-util" (OuterVolumeSpecName: "util") pod "8043042d-31e1-4f30-9bf1-41314d203bb9" (UID: "8043042d-31e1-4f30-9bf1-41314d203bb9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:40:34.338108 master-0 kubenswrapper[26425]: I0217 15:40:34.338012 26425 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8043042d-31e1-4f30-9bf1-41314d203bb9-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:40:34.338108 master-0 kubenswrapper[26425]: I0217 15:40:34.338104 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p6sr\" (UniqueName: \"kubernetes.io/projected/8043042d-31e1-4f30-9bf1-41314d203bb9-kube-api-access-9p6sr\") on node \"master-0\" DevicePath \"\"" Feb 17 15:40:34.338353 master-0 kubenswrapper[26425]: I0217 15:40:34.338128 26425 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8043042d-31e1-4f30-9bf1-41314d203bb9-util\") on node \"master-0\" DevicePath \"\"" Feb 17 15:40:34.865551 master-0 kubenswrapper[26425]: I0217 15:40:34.865509 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" event={"ID":"8043042d-31e1-4f30-9bf1-41314d203bb9","Type":"ContainerDied","Data":"bc16dae71fb27994dcdc5fcb0cc13f5a5958137cc955d3721c655851e9b3201f"} Feb 17 15:40:34.865551 master-0 kubenswrapper[26425]: I0217 15:40:34.865553 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc16dae71fb27994dcdc5fcb0cc13f5a5958137cc955d3721c655851e9b3201f" Feb 17 15:40:34.865842 master-0 kubenswrapper[26425]: I0217 15:40:34.865823 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx" Feb 17 15:40:41.829304 master-0 kubenswrapper[26425]: I0217 15:40:41.829239 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-59b4cb8ccf-q5dk5"] Feb 17 15:40:41.830025 master-0 kubenswrapper[26425]: E0217 15:40:41.829578 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8043042d-31e1-4f30-9bf1-41314d203bb9" containerName="extract" Feb 17 15:40:41.830025 master-0 kubenswrapper[26425]: I0217 15:40:41.829593 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="8043042d-31e1-4f30-9bf1-41314d203bb9" containerName="extract" Feb 17 15:40:41.830025 master-0 kubenswrapper[26425]: E0217 15:40:41.829616 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8043042d-31e1-4f30-9bf1-41314d203bb9" containerName="pull" Feb 17 15:40:41.830025 master-0 kubenswrapper[26425]: I0217 15:40:41.829621 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="8043042d-31e1-4f30-9bf1-41314d203bb9" containerName="pull" Feb 17 15:40:41.830025 master-0 kubenswrapper[26425]: E0217 15:40:41.829639 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8043042d-31e1-4f30-9bf1-41314d203bb9" containerName="util" Feb 17 15:40:41.830025 master-0 kubenswrapper[26425]: I0217 
15:40:41.829646 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="8043042d-31e1-4f30-9bf1-41314d203bb9" containerName="util" Feb 17 15:40:41.830025 master-0 kubenswrapper[26425]: I0217 15:40:41.829782 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="8043042d-31e1-4f30-9bf1-41314d203bb9" containerName="extract" Feb 17 15:40:41.830254 master-0 kubenswrapper[26425]: I0217 15:40:41.830219 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.832168 master-0 kubenswrapper[26425]: I0217 15:40:41.832134 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Feb 17 15:40:41.832541 master-0 kubenswrapper[26425]: I0217 15:40:41.832524 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Feb 17 15:40:41.832792 master-0 kubenswrapper[26425]: I0217 15:40:41.832779 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Feb 17 15:40:41.833205 master-0 kubenswrapper[26425]: I0217 15:40:41.833186 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Feb 17 15:40:41.833444 master-0 kubenswrapper[26425]: I0217 15:40:41.833430 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Feb 17 15:40:41.870756 master-0 kubenswrapper[26425]: I0217 15:40:41.870707 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-webhook-cert\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.871155 master-0 kubenswrapper[26425]: 
I0217 15:40:41.870810 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psrhm\" (UniqueName: \"kubernetes.io/projected/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-kube-api-access-psrhm\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.871155 master-0 kubenswrapper[26425]: I0217 15:40:41.870836 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-socket-dir\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.871155 master-0 kubenswrapper[26425]: I0217 15:40:41.870854 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-apiservice-cert\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.871155 master-0 kubenswrapper[26425]: I0217 15:40:41.870872 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-metrics-cert\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.893717 master-0 kubenswrapper[26425]: I0217 15:40:41.893662 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-59b4cb8ccf-q5dk5"] Feb 17 15:40:41.972196 master-0 kubenswrapper[26425]: I0217 15:40:41.972133 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-metrics-cert\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.972442 master-0 kubenswrapper[26425]: I0217 15:40:41.972268 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-webhook-cert\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.973518 master-0 kubenswrapper[26425]: I0217 15:40:41.972919 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psrhm\" (UniqueName: \"kubernetes.io/projected/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-kube-api-access-psrhm\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.973518 master-0 kubenswrapper[26425]: I0217 15:40:41.972950 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-socket-dir\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.973518 master-0 kubenswrapper[26425]: I0217 15:40:41.972971 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-apiservice-cert\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.973748 master-0 
kubenswrapper[26425]: I0217 15:40:41.973714 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-socket-dir\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.976367 master-0 kubenswrapper[26425]: I0217 15:40:41.976025 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-webhook-cert\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.977634 master-0 kubenswrapper[26425]: I0217 15:40:41.977251 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-apiservice-cert\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.981150 master-0 kubenswrapper[26425]: I0217 15:40:41.981098 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-metrics-cert\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:41.989499 master-0 kubenswrapper[26425]: I0217 15:40:41.989437 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psrhm\" (UniqueName: \"kubernetes.io/projected/7d24c8e3-d4be-4a26-aebe-b9dd39b79aca-kube-api-access-psrhm\") pod \"lvms-operator-59b4cb8ccf-q5dk5\" (UID: \"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca\") " pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 
17 15:40:42.146983 master-0 kubenswrapper[26425]: I0217 15:40:42.146920 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:42.562243 master-0 kubenswrapper[26425]: I0217 15:40:42.561731 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-59b4cb8ccf-q5dk5"] Feb 17 15:40:42.565699 master-0 kubenswrapper[26425]: W0217 15:40:42.565641 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d24c8e3_d4be_4a26_aebe_b9dd39b79aca.slice/crio-346cf724909330488ef0c28b7ed98aeb39b3e3e48f01095386bcc8c4f6f2df76 WatchSource:0}: Error finding container 346cf724909330488ef0c28b7ed98aeb39b3e3e48f01095386bcc8c4f6f2df76: Status 404 returned error can't find the container with id 346cf724909330488ef0c28b7ed98aeb39b3e3e48f01095386bcc8c4f6f2df76 Feb 17 15:40:42.953907 master-0 kubenswrapper[26425]: I0217 15:40:42.953839 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" event={"ID":"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca","Type":"ContainerStarted","Data":"346cf724909330488ef0c28b7ed98aeb39b3e3e48f01095386bcc8c4f6f2df76"} Feb 17 15:40:48.008917 master-0 kubenswrapper[26425]: I0217 15:40:48.008837 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" event={"ID":"7d24c8e3-d4be-4a26-aebe-b9dd39b79aca","Type":"ContainerStarted","Data":"6699a81502142b79a12ce2ee0671887f44791c274a4923381059354e09750a15"} Feb 17 15:40:48.009688 master-0 kubenswrapper[26425]: I0217 15:40:48.009554 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:48.014661 master-0 kubenswrapper[26425]: I0217 15:40:48.014610 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" Feb 17 15:40:48.069764 master-0 kubenswrapper[26425]: I0217 15:40:48.069661 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-59b4cb8ccf-q5dk5" podStartSLOduration=2.190443953 podStartE2EDuration="7.069634792s" podCreationTimestamp="2026-02-17 15:40:41 +0000 UTC" firstStartedPulling="2026-02-17 15:40:42.569060947 +0000 UTC m=+1504.460784775" lastFinishedPulling="2026-02-17 15:40:47.448251786 +0000 UTC m=+1509.339975614" observedRunningTime="2026-02-17 15:40:48.03447007 +0000 UTC m=+1509.926193908" watchObservedRunningTime="2026-02-17 15:40:48.069634792 +0000 UTC m=+1509.961358610" Feb 17 15:40:51.969306 master-0 kubenswrapper[26425]: I0217 15:40:51.969225 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9"] Feb 17 15:40:51.971305 master-0 kubenswrapper[26425]: I0217 15:40:51.971267 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" Feb 17 15:40:51.982743 master-0 kubenswrapper[26425]: I0217 15:40:51.982678 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9"] Feb 17 15:40:52.078023 master-0 kubenswrapper[26425]: I0217 15:40:52.077955 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9\" (UID: \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" Feb 17 15:40:52.078260 master-0 kubenswrapper[26425]: I0217 15:40:52.078060 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9\" (UID: \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" Feb 17 15:40:52.078260 master-0 kubenswrapper[26425]: I0217 15:40:52.078147 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkgjq\" (UniqueName: \"kubernetes.io/projected/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-kube-api-access-fkgjq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9\" (UID: \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" Feb 17 15:40:52.179240 master-0 kubenswrapper[26425]: I0217 15:40:52.179177 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9\" (UID: \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" Feb 17 15:40:52.179503 master-0 kubenswrapper[26425]: I0217 15:40:52.179331 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9\" (UID: \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" Feb 17 15:40:52.179503 master-0 kubenswrapper[26425]: I0217 15:40:52.179387 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkgjq\" (UniqueName: \"kubernetes.io/projected/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-kube-api-access-fkgjq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9\" (UID: \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" Feb 17 15:40:52.180097 master-0 kubenswrapper[26425]: I0217 15:40:52.180056 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9\" (UID: \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" Feb 17 15:40:52.181038 master-0 kubenswrapper[26425]: I0217 15:40:52.180965 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-bundle\") pod 
\"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9\" (UID: \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" Feb 17 15:40:52.204703 master-0 kubenswrapper[26425]: I0217 15:40:52.204613 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkgjq\" (UniqueName: \"kubernetes.io/projected/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-kube-api-access-fkgjq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9\" (UID: \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" Feb 17 15:40:52.290797 master-0 kubenswrapper[26425]: I0217 15:40:52.290644 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" Feb 17 15:40:52.877018 master-0 kubenswrapper[26425]: I0217 15:40:52.876941 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9"] Feb 17 15:40:52.882150 master-0 kubenswrapper[26425]: W0217 15:40:52.880827 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43c7b7da_b5db_4665_8dc0_e7f1782ea90e.slice/crio-9970ff888e83276656a0e0975e363e20535979ad473d18caa436ed693d72b12f WatchSource:0}: Error finding container 9970ff888e83276656a0e0975e363e20535979ad473d18caa436ed693d72b12f: Status 404 returned error can't find the container with id 9970ff888e83276656a0e0975e363e20535979ad473d18caa436ed693d72b12f Feb 17 15:40:53.053309 master-0 kubenswrapper[26425]: I0217 15:40:53.053248 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" 
event={"ID":"43c7b7da-b5db-4665-8dc0-e7f1782ea90e","Type":"ContainerStarted","Data":"9970ff888e83276656a0e0975e363e20535979ad473d18caa436ed693d72b12f"} Feb 17 15:40:53.575567 master-0 kubenswrapper[26425]: I0217 15:40:53.575364 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds"] Feb 17 15:40:53.578171 master-0 kubenswrapper[26425]: I0217 15:40:53.578107 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" Feb 17 15:40:53.597275 master-0 kubenswrapper[26425]: I0217 15:40:53.597171 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds"] Feb 17 15:40:53.608734 master-0 kubenswrapper[26425]: I0217 15:40:53.608635 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84b7d8ef-eea6-42d1-bf43-330db9114949-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds\" (UID: \"84b7d8ef-eea6-42d1-bf43-330db9114949\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" Feb 17 15:40:53.609175 master-0 kubenswrapper[26425]: I0217 15:40:53.609086 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf8xc\" (UniqueName: \"kubernetes.io/projected/84b7d8ef-eea6-42d1-bf43-330db9114949-kube-api-access-nf8xc\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds\" (UID: \"84b7d8ef-eea6-42d1-bf43-330db9114949\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" Feb 17 15:40:53.609282 master-0 kubenswrapper[26425]: I0217 15:40:53.609227 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84b7d8ef-eea6-42d1-bf43-330db9114949-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds\" (UID: \"84b7d8ef-eea6-42d1-bf43-330db9114949\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" Feb 17 15:40:53.711413 master-0 kubenswrapper[26425]: I0217 15:40:53.711320 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf8xc\" (UniqueName: \"kubernetes.io/projected/84b7d8ef-eea6-42d1-bf43-330db9114949-kube-api-access-nf8xc\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds\" (UID: \"84b7d8ef-eea6-42d1-bf43-330db9114949\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" Feb 17 15:40:53.711678 master-0 kubenswrapper[26425]: I0217 15:40:53.711426 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84b7d8ef-eea6-42d1-bf43-330db9114949-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds\" (UID: \"84b7d8ef-eea6-42d1-bf43-330db9114949\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" Feb 17 15:40:53.711678 master-0 kubenswrapper[26425]: I0217 15:40:53.711528 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84b7d8ef-eea6-42d1-bf43-330db9114949-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds\" (UID: \"84b7d8ef-eea6-42d1-bf43-330db9114949\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" Feb 17 15:40:53.713104 master-0 kubenswrapper[26425]: I0217 15:40:53.712333 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/84b7d8ef-eea6-42d1-bf43-330db9114949-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds\" (UID: \"84b7d8ef-eea6-42d1-bf43-330db9114949\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" Feb 17 15:40:53.713104 master-0 kubenswrapper[26425]: I0217 15:40:53.712665 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84b7d8ef-eea6-42d1-bf43-330db9114949-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds\" (UID: \"84b7d8ef-eea6-42d1-bf43-330db9114949\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" Feb 17 15:40:53.736112 master-0 kubenswrapper[26425]: I0217 15:40:53.736048 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf8xc\" (UniqueName: \"kubernetes.io/projected/84b7d8ef-eea6-42d1-bf43-330db9114949-kube-api-access-nf8xc\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds\" (UID: \"84b7d8ef-eea6-42d1-bf43-330db9114949\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" Feb 17 15:40:53.909049 master-0 kubenswrapper[26425]: I0217 15:40:53.908964 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" Feb 17 15:40:54.065127 master-0 kubenswrapper[26425]: I0217 15:40:54.065055 26425 generic.go:334] "Generic (PLEG): container finished" podID="43c7b7da-b5db-4665-8dc0-e7f1782ea90e" containerID="33caae37e633ac64d53a78cbcedd68942fa03512eb1fb6d7b03a30d1e3ed7f4a" exitCode=0 Feb 17 15:40:54.065127 master-0 kubenswrapper[26425]: I0217 15:40:54.065109 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" event={"ID":"43c7b7da-b5db-4665-8dc0-e7f1782ea90e","Type":"ContainerDied","Data":"33caae37e633ac64d53a78cbcedd68942fa03512eb1fb6d7b03a30d1e3ed7f4a"} Feb 17 15:40:54.369261 master-0 kubenswrapper[26425]: I0217 15:40:54.369191 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc"] Feb 17 15:40:54.370862 master-0 kubenswrapper[26425]: I0217 15:40:54.370821 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" Feb 17 15:40:54.380554 master-0 kubenswrapper[26425]: I0217 15:40:54.378120 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc"] Feb 17 15:40:54.426454 master-0 kubenswrapper[26425]: I0217 15:40:54.426383 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds"] Feb 17 15:40:54.431671 master-0 kubenswrapper[26425]: W0217 15:40:54.431604 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84b7d8ef_eea6_42d1_bf43_330db9114949.slice/crio-d18003e0a501e43cd2f2c7c7a8a05a7ed93be2d932d043a87e6360be9ff7f8ea WatchSource:0}: Error finding container d18003e0a501e43cd2f2c7c7a8a05a7ed93be2d932d043a87e6360be9ff7f8ea: Status 404 returned error can't find the container with id d18003e0a501e43cd2f2c7c7a8a05a7ed93be2d932d043a87e6360be9ff7f8ea Feb 17 15:40:54.434175 master-0 kubenswrapper[26425]: I0217 15:40:54.434084 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b542-e6d2-487b-a39d-a31711a2621e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc\" (UID: \"f9a3b542-e6d2-487b-a39d-a31711a2621e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" Feb 17 15:40:54.434332 master-0 kubenswrapper[26425]: I0217 15:40:54.434268 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b542-e6d2-487b-a39d-a31711a2621e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc\" (UID: \"f9a3b542-e6d2-487b-a39d-a31711a2621e\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" Feb 17 15:40:54.434740 master-0 kubenswrapper[26425]: I0217 15:40:54.434700 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfh64\" (UniqueName: \"kubernetes.io/projected/f9a3b542-e6d2-487b-a39d-a31711a2621e-kube-api-access-pfh64\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc\" (UID: \"f9a3b542-e6d2-487b-a39d-a31711a2621e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" Feb 17 15:40:54.536629 master-0 kubenswrapper[26425]: I0217 15:40:54.536580 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b542-e6d2-487b-a39d-a31711a2621e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc\" (UID: \"f9a3b542-e6d2-487b-a39d-a31711a2621e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" Feb 17 15:40:54.536800 master-0 kubenswrapper[26425]: I0217 15:40:54.536667 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b542-e6d2-487b-a39d-a31711a2621e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc\" (UID: \"f9a3b542-e6d2-487b-a39d-a31711a2621e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" Feb 17 15:40:54.536800 master-0 kubenswrapper[26425]: I0217 15:40:54.536717 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfh64\" (UniqueName: \"kubernetes.io/projected/f9a3b542-e6d2-487b-a39d-a31711a2621e-kube-api-access-pfh64\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc\" (UID: \"f9a3b542-e6d2-487b-a39d-a31711a2621e\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" Feb 17 15:40:54.537069 master-0 kubenswrapper[26425]: I0217 15:40:54.537030 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b542-e6d2-487b-a39d-a31711a2621e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc\" (UID: \"f9a3b542-e6d2-487b-a39d-a31711a2621e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" Feb 17 15:40:54.537440 master-0 kubenswrapper[26425]: I0217 15:40:54.537378 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b542-e6d2-487b-a39d-a31711a2621e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc\" (UID: \"f9a3b542-e6d2-487b-a39d-a31711a2621e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" Feb 17 15:40:54.553241 master-0 kubenswrapper[26425]: I0217 15:40:54.553184 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfh64\" (UniqueName: \"kubernetes.io/projected/f9a3b542-e6d2-487b-a39d-a31711a2621e-kube-api-access-pfh64\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc\" (UID: \"f9a3b542-e6d2-487b-a39d-a31711a2621e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" Feb 17 15:40:54.696872 master-0 kubenswrapper[26425]: I0217 15:40:54.696807 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" Feb 17 15:40:55.075439 master-0 kubenswrapper[26425]: I0217 15:40:55.075217 26425 generic.go:334] "Generic (PLEG): container finished" podID="84b7d8ef-eea6-42d1-bf43-330db9114949" containerID="3fd052e51d4317fa9015f8f7c6855c82eea1348ab257900283151392dd29a4c9" exitCode=0 Feb 17 15:40:55.075439 master-0 kubenswrapper[26425]: I0217 15:40:55.075298 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" event={"ID":"84b7d8ef-eea6-42d1-bf43-330db9114949","Type":"ContainerDied","Data":"3fd052e51d4317fa9015f8f7c6855c82eea1348ab257900283151392dd29a4c9"} Feb 17 15:40:55.076182 master-0 kubenswrapper[26425]: I0217 15:40:55.075441 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" event={"ID":"84b7d8ef-eea6-42d1-bf43-330db9114949","Type":"ContainerStarted","Data":"d18003e0a501e43cd2f2c7c7a8a05a7ed93be2d932d043a87e6360be9ff7f8ea"} Feb 17 15:40:55.172677 master-0 kubenswrapper[26425]: I0217 15:40:55.172617 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc"] Feb 17 15:40:55.182035 master-0 kubenswrapper[26425]: W0217 15:40:55.181836 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a3b542_e6d2_487b_a39d_a31711a2621e.slice/crio-183b345959f281ea0ccd4aeae2bdb1f7d12f5cc59079fa3a8886927a82268dac WatchSource:0}: Error finding container 183b345959f281ea0ccd4aeae2bdb1f7d12f5cc59079fa3a8886927a82268dac: Status 404 returned error can't find the container with id 183b345959f281ea0ccd4aeae2bdb1f7d12f5cc59079fa3a8886927a82268dac Feb 17 15:40:56.087201 master-0 kubenswrapper[26425]: I0217 15:40:56.087124 26425 
generic.go:334] "Generic (PLEG): container finished" podID="f9a3b542-e6d2-487b-a39d-a31711a2621e" containerID="b56413f212c876bdc349c9a1bab287e6bf7bd5aaceee38066b57d3862acba83a" exitCode=0 Feb 17 15:40:56.088225 master-0 kubenswrapper[26425]: I0217 15:40:56.087209 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" event={"ID":"f9a3b542-e6d2-487b-a39d-a31711a2621e","Type":"ContainerDied","Data":"b56413f212c876bdc349c9a1bab287e6bf7bd5aaceee38066b57d3862acba83a"} Feb 17 15:40:56.088225 master-0 kubenswrapper[26425]: I0217 15:40:56.087287 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" event={"ID":"f9a3b542-e6d2-487b-a39d-a31711a2621e","Type":"ContainerStarted","Data":"183b345959f281ea0ccd4aeae2bdb1f7d12f5cc59079fa3a8886927a82268dac"} Feb 17 15:40:58.111123 master-0 kubenswrapper[26425]: I0217 15:40:58.111043 26425 generic.go:334] "Generic (PLEG): container finished" podID="84b7d8ef-eea6-42d1-bf43-330db9114949" containerID="35c672e8afd33b8d9e7b43e9493b0324f3da2883c10a9c72382a5530df044d02" exitCode=0 Feb 17 15:40:58.113547 master-0 kubenswrapper[26425]: I0217 15:40:58.111116 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" event={"ID":"84b7d8ef-eea6-42d1-bf43-330db9114949","Type":"ContainerDied","Data":"35c672e8afd33b8d9e7b43e9493b0324f3da2883c10a9c72382a5530df044d02"} Feb 17 15:40:58.116337 master-0 kubenswrapper[26425]: I0217 15:40:58.116292 26425 generic.go:334] "Generic (PLEG): container finished" podID="f9a3b542-e6d2-487b-a39d-a31711a2621e" containerID="40ff1951ae6bc5609e0068955bdf82b187f7c41ac5bebba8c09e1640facbdcbf" exitCode=0 Feb 17 15:40:58.116600 master-0 kubenswrapper[26425]: I0217 15:40:58.116370 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" event={"ID":"f9a3b542-e6d2-487b-a39d-a31711a2621e","Type":"ContainerDied","Data":"40ff1951ae6bc5609e0068955bdf82b187f7c41ac5bebba8c09e1640facbdcbf"} Feb 17 15:40:58.122265 master-0 kubenswrapper[26425]: I0217 15:40:58.122187 26425 generic.go:334] "Generic (PLEG): container finished" podID="43c7b7da-b5db-4665-8dc0-e7f1782ea90e" containerID="8f163027c03acaf44b4049e8764dd652cd4a97320446e8e79881aed50e0e0b4a" exitCode=0 Feb 17 15:40:58.122880 master-0 kubenswrapper[26425]: I0217 15:40:58.122285 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" event={"ID":"43c7b7da-b5db-4665-8dc0-e7f1782ea90e","Type":"ContainerDied","Data":"8f163027c03acaf44b4049e8764dd652cd4a97320446e8e79881aed50e0e0b4a"} Feb 17 15:40:59.137432 master-0 kubenswrapper[26425]: I0217 15:40:59.137344 26425 generic.go:334] "Generic (PLEG): container finished" podID="f9a3b542-e6d2-487b-a39d-a31711a2621e" containerID="ca90d9c89e036fdfda00f974ed44f6a96741e72d7e7d24b4ecef29fd793e58a6" exitCode=0 Feb 17 15:40:59.138671 master-0 kubenswrapper[26425]: I0217 15:40:59.137538 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" event={"ID":"f9a3b542-e6d2-487b-a39d-a31711a2621e","Type":"ContainerDied","Data":"ca90d9c89e036fdfda00f974ed44f6a96741e72d7e7d24b4ecef29fd793e58a6"} Feb 17 15:40:59.142713 master-0 kubenswrapper[26425]: I0217 15:40:59.142652 26425 generic.go:334] "Generic (PLEG): container finished" podID="43c7b7da-b5db-4665-8dc0-e7f1782ea90e" containerID="877360157be32a23d94aaaf5591d0edc7c42cb2e8c9fbe3f527da4780cb4b86d" exitCode=0 Feb 17 15:40:59.142871 master-0 kubenswrapper[26425]: I0217 15:40:59.142752 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" event={"ID":"43c7b7da-b5db-4665-8dc0-e7f1782ea90e","Type":"ContainerDied","Data":"877360157be32a23d94aaaf5591d0edc7c42cb2e8c9fbe3f527da4780cb4b86d"} Feb 17 15:40:59.147202 master-0 kubenswrapper[26425]: I0217 15:40:59.147140 26425 generic.go:334] "Generic (PLEG): container finished" podID="84b7d8ef-eea6-42d1-bf43-330db9114949" containerID="1072e591de1a3075e3fdc381beb0ecf3b84a890f5bd89e90e320ca544098e705" exitCode=0 Feb 17 15:40:59.147309 master-0 kubenswrapper[26425]: I0217 15:40:59.147201 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" event={"ID":"84b7d8ef-eea6-42d1-bf43-330db9114949","Type":"ContainerDied","Data":"1072e591de1a3075e3fdc381beb0ecf3b84a890f5bd89e90e320ca544098e705"} Feb 17 15:41:00.656088 master-0 kubenswrapper[26425]: I0217 15:41:00.656034 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" Feb 17 15:41:00.669900 master-0 kubenswrapper[26425]: I0217 15:41:00.669846 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-bundle\") pod \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\" (UID: \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\") " Feb 17 15:41:00.669990 master-0 kubenswrapper[26425]: I0217 15:41:00.669967 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkgjq\" (UniqueName: \"kubernetes.io/projected/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-kube-api-access-fkgjq\") pod \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\" (UID: \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\") " Feb 17 15:41:00.670058 master-0 kubenswrapper[26425]: I0217 15:41:00.670020 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-util\") pod \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\" (UID: \"43c7b7da-b5db-4665-8dc0-e7f1782ea90e\") " Feb 17 15:41:00.672155 master-0 kubenswrapper[26425]: I0217 15:41:00.672081 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-bundle" (OuterVolumeSpecName: "bundle") pod "43c7b7da-b5db-4665-8dc0-e7f1782ea90e" (UID: "43c7b7da-b5db-4665-8dc0-e7f1782ea90e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:41:00.675822 master-0 kubenswrapper[26425]: I0217 15:41:00.675701 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-kube-api-access-fkgjq" (OuterVolumeSpecName: "kube-api-access-fkgjq") pod "43c7b7da-b5db-4665-8dc0-e7f1782ea90e" (UID: "43c7b7da-b5db-4665-8dc0-e7f1782ea90e"). InnerVolumeSpecName "kube-api-access-fkgjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:41:00.691550 master-0 kubenswrapper[26425]: I0217 15:41:00.691492 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-util" (OuterVolumeSpecName: "util") pod "43c7b7da-b5db-4665-8dc0-e7f1782ea90e" (UID: "43c7b7da-b5db-4665-8dc0-e7f1782ea90e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:41:00.735814 master-0 kubenswrapper[26425]: I0217 15:41:00.735767 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" Feb 17 15:41:00.742006 master-0 kubenswrapper[26425]: I0217 15:41:00.741951 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" Feb 17 15:41:00.772327 master-0 kubenswrapper[26425]: I0217 15:41:00.772185 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfh64\" (UniqueName: \"kubernetes.io/projected/f9a3b542-e6d2-487b-a39d-a31711a2621e-kube-api-access-pfh64\") pod \"f9a3b542-e6d2-487b-a39d-a31711a2621e\" (UID: \"f9a3b542-e6d2-487b-a39d-a31711a2621e\") " Feb 17 15:41:00.772547 master-0 kubenswrapper[26425]: I0217 15:41:00.772446 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84b7d8ef-eea6-42d1-bf43-330db9114949-bundle\") pod \"84b7d8ef-eea6-42d1-bf43-330db9114949\" (UID: \"84b7d8ef-eea6-42d1-bf43-330db9114949\") " Feb 17 15:41:00.772610 master-0 kubenswrapper[26425]: I0217 15:41:00.772584 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84b7d8ef-eea6-42d1-bf43-330db9114949-util\") pod \"84b7d8ef-eea6-42d1-bf43-330db9114949\" (UID: \"84b7d8ef-eea6-42d1-bf43-330db9114949\") " Feb 17 15:41:00.772714 master-0 kubenswrapper[26425]: I0217 15:41:00.772673 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b542-e6d2-487b-a39d-a31711a2621e-bundle\") pod \"f9a3b542-e6d2-487b-a39d-a31711a2621e\" (UID: \"f9a3b542-e6d2-487b-a39d-a31711a2621e\") " Feb 17 15:41:00.772816 master-0 kubenswrapper[26425]: I0217 15:41:00.772780 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf8xc\" (UniqueName: \"kubernetes.io/projected/84b7d8ef-eea6-42d1-bf43-330db9114949-kube-api-access-nf8xc\") pod \"84b7d8ef-eea6-42d1-bf43-330db9114949\" (UID: \"84b7d8ef-eea6-42d1-bf43-330db9114949\") " Feb 17 15:41:00.772918 master-0 kubenswrapper[26425]: I0217 
15:41:00.772880 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b542-e6d2-487b-a39d-a31711a2621e-util\") pod \"f9a3b542-e6d2-487b-a39d-a31711a2621e\" (UID: \"f9a3b542-e6d2-487b-a39d-a31711a2621e\") " Feb 17 15:41:00.773734 master-0 kubenswrapper[26425]: I0217 15:41:00.773662 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a3b542-e6d2-487b-a39d-a31711a2621e-bundle" (OuterVolumeSpecName: "bundle") pod "f9a3b542-e6d2-487b-a39d-a31711a2621e" (UID: "f9a3b542-e6d2-487b-a39d-a31711a2621e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:41:00.774519 master-0 kubenswrapper[26425]: I0217 15:41:00.774437 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84b7d8ef-eea6-42d1-bf43-330db9114949-bundle" (OuterVolumeSpecName: "bundle") pod "84b7d8ef-eea6-42d1-bf43-330db9114949" (UID: "84b7d8ef-eea6-42d1-bf43-330db9114949"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:41:00.776265 master-0 kubenswrapper[26425]: I0217 15:41:00.775937 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkgjq\" (UniqueName: \"kubernetes.io/projected/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-kube-api-access-fkgjq\") on node \"master-0\" DevicePath \"\"" Feb 17 15:41:00.776265 master-0 kubenswrapper[26425]: I0217 15:41:00.776001 26425 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84b7d8ef-eea6-42d1-bf43-330db9114949-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:41:00.776265 master-0 kubenswrapper[26425]: I0217 15:41:00.776027 26425 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-util\") on node \"master-0\" DevicePath \"\"" Feb 17 15:41:00.776265 master-0 kubenswrapper[26425]: I0217 15:41:00.776048 26425 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b542-e6d2-487b-a39d-a31711a2621e-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:41:00.776265 master-0 kubenswrapper[26425]: I0217 15:41:00.776068 26425 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43c7b7da-b5db-4665-8dc0-e7f1782ea90e-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:41:00.778881 master-0 kubenswrapper[26425]: I0217 15:41:00.778818 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84b7d8ef-eea6-42d1-bf43-330db9114949-kube-api-access-nf8xc" (OuterVolumeSpecName: "kube-api-access-nf8xc") pod "84b7d8ef-eea6-42d1-bf43-330db9114949" (UID: "84b7d8ef-eea6-42d1-bf43-330db9114949"). InnerVolumeSpecName "kube-api-access-nf8xc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:41:00.781743 master-0 kubenswrapper[26425]: I0217 15:41:00.781702 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a3b542-e6d2-487b-a39d-a31711a2621e-kube-api-access-pfh64" (OuterVolumeSpecName: "kube-api-access-pfh64") pod "f9a3b542-e6d2-487b-a39d-a31711a2621e" (UID: "f9a3b542-e6d2-487b-a39d-a31711a2621e"). InnerVolumeSpecName "kube-api-access-pfh64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:41:00.798322 master-0 kubenswrapper[26425]: I0217 15:41:00.798228 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84b7d8ef-eea6-42d1-bf43-330db9114949-util" (OuterVolumeSpecName: "util") pod "84b7d8ef-eea6-42d1-bf43-330db9114949" (UID: "84b7d8ef-eea6-42d1-bf43-330db9114949"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:41:00.831053 master-0 kubenswrapper[26425]: I0217 15:41:00.830985 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a3b542-e6d2-487b-a39d-a31711a2621e-util" (OuterVolumeSpecName: "util") pod "f9a3b542-e6d2-487b-a39d-a31711a2621e" (UID: "f9a3b542-e6d2-487b-a39d-a31711a2621e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:41:00.877790 master-0 kubenswrapper[26425]: I0217 15:41:00.877739 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfh64\" (UniqueName: \"kubernetes.io/projected/f9a3b542-e6d2-487b-a39d-a31711a2621e-kube-api-access-pfh64\") on node \"master-0\" DevicePath \"\"" Feb 17 15:41:00.878071 master-0 kubenswrapper[26425]: I0217 15:41:00.878056 26425 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84b7d8ef-eea6-42d1-bf43-330db9114949-util\") on node \"master-0\" DevicePath \"\"" Feb 17 15:41:00.878187 master-0 kubenswrapper[26425]: I0217 15:41:00.878176 26425 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b542-e6d2-487b-a39d-a31711a2621e-util\") on node \"master-0\" DevicePath \"\"" Feb 17 15:41:00.878277 master-0 kubenswrapper[26425]: I0217 15:41:00.878262 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf8xc\" (UniqueName: \"kubernetes.io/projected/84b7d8ef-eea6-42d1-bf43-330db9114949-kube-api-access-nf8xc\") on node \"master-0\" DevicePath \"\"" Feb 17 15:41:01.178440 master-0 kubenswrapper[26425]: I0217 15:41:01.178349 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" event={"ID":"43c7b7da-b5db-4665-8dc0-e7f1782ea90e","Type":"ContainerDied","Data":"9970ff888e83276656a0e0975e363e20535979ad473d18caa436ed693d72b12f"} Feb 17 15:41:01.179075 master-0 kubenswrapper[26425]: I0217 15:41:01.179028 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9970ff888e83276656a0e0975e363e20535979ad473d18caa436ed693d72b12f" Feb 17 15:41:01.179075 master-0 kubenswrapper[26425]: I0217 15:41:01.178363 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9" Feb 17 15:41:01.183292 master-0 kubenswrapper[26425]: I0217 15:41:01.183240 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" Feb 17 15:41:01.183497 master-0 kubenswrapper[26425]: I0217 15:41:01.183287 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds" event={"ID":"84b7d8ef-eea6-42d1-bf43-330db9114949","Type":"ContainerDied","Data":"d18003e0a501e43cd2f2c7c7a8a05a7ed93be2d932d043a87e6360be9ff7f8ea"} Feb 17 15:41:01.183630 master-0 kubenswrapper[26425]: I0217 15:41:01.183557 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d18003e0a501e43cd2f2c7c7a8a05a7ed93be2d932d043a87e6360be9ff7f8ea" Feb 17 15:41:01.186625 master-0 kubenswrapper[26425]: I0217 15:41:01.186570 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" event={"ID":"f9a3b542-e6d2-487b-a39d-a31711a2621e","Type":"ContainerDied","Data":"183b345959f281ea0ccd4aeae2bdb1f7d12f5cc59079fa3a8886927a82268dac"} Feb 17 15:41:01.186741 master-0 kubenswrapper[26425]: I0217 15:41:01.186630 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="183b345959f281ea0ccd4aeae2bdb1f7d12f5cc59079fa3a8886927a82268dac" Feb 17 15:41:01.186741 master-0 kubenswrapper[26425]: I0217 15:41:01.186662 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc" Feb 17 15:41:02.028008 master-0 kubenswrapper[26425]: I0217 15:41:02.027949 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"] Feb 17 15:41:02.028583 master-0 kubenswrapper[26425]: E0217 15:41:02.028389 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b7d8ef-eea6-42d1-bf43-330db9114949" containerName="util" Feb 17 15:41:02.028583 master-0 kubenswrapper[26425]: I0217 15:41:02.028411 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b7d8ef-eea6-42d1-bf43-330db9114949" containerName="util" Feb 17 15:41:02.028583 master-0 kubenswrapper[26425]: E0217 15:41:02.028449 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b7d8ef-eea6-42d1-bf43-330db9114949" containerName="extract" Feb 17 15:41:02.028583 master-0 kubenswrapper[26425]: I0217 15:41:02.028494 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b7d8ef-eea6-42d1-bf43-330db9114949" containerName="extract" Feb 17 15:41:02.028583 master-0 kubenswrapper[26425]: E0217 15:41:02.028517 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c7b7da-b5db-4665-8dc0-e7f1782ea90e" containerName="pull" Feb 17 15:41:02.028583 master-0 kubenswrapper[26425]: I0217 15:41:02.028530 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c7b7da-b5db-4665-8dc0-e7f1782ea90e" containerName="pull" Feb 17 15:41:02.028583 master-0 kubenswrapper[26425]: E0217 15:41:02.028558 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a3b542-e6d2-487b-a39d-a31711a2621e" containerName="extract" Feb 17 15:41:02.028583 master-0 kubenswrapper[26425]: I0217 15:41:02.028570 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a3b542-e6d2-487b-a39d-a31711a2621e" containerName="extract" Feb 17 15:41:02.028914 master-0 
kubenswrapper[26425]: E0217 15:41:02.028596 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a3b542-e6d2-487b-a39d-a31711a2621e" containerName="pull" Feb 17 15:41:02.028914 master-0 kubenswrapper[26425]: I0217 15:41:02.028609 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a3b542-e6d2-487b-a39d-a31711a2621e" containerName="pull" Feb 17 15:41:02.028914 master-0 kubenswrapper[26425]: E0217 15:41:02.028633 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b7d8ef-eea6-42d1-bf43-330db9114949" containerName="pull" Feb 17 15:41:02.028914 master-0 kubenswrapper[26425]: I0217 15:41:02.028646 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b7d8ef-eea6-42d1-bf43-330db9114949" containerName="pull" Feb 17 15:41:02.028914 master-0 kubenswrapper[26425]: E0217 15:41:02.028687 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c7b7da-b5db-4665-8dc0-e7f1782ea90e" containerName="extract" Feb 17 15:41:02.028914 master-0 kubenswrapper[26425]: I0217 15:41:02.028702 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c7b7da-b5db-4665-8dc0-e7f1782ea90e" containerName="extract" Feb 17 15:41:02.028914 master-0 kubenswrapper[26425]: E0217 15:41:02.028724 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c7b7da-b5db-4665-8dc0-e7f1782ea90e" containerName="util" Feb 17 15:41:02.028914 master-0 kubenswrapper[26425]: I0217 15:41:02.028736 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c7b7da-b5db-4665-8dc0-e7f1782ea90e" containerName="util" Feb 17 15:41:02.028914 master-0 kubenswrapper[26425]: E0217 15:41:02.028760 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a3b542-e6d2-487b-a39d-a31711a2621e" containerName="util" Feb 17 15:41:02.028914 master-0 kubenswrapper[26425]: I0217 15:41:02.028772 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a3b542-e6d2-487b-a39d-a31711a2621e" containerName="util" Feb 17 
15:41:02.029292 master-0 kubenswrapper[26425]: I0217 15:41:02.028991 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a3b542-e6d2-487b-a39d-a31711a2621e" containerName="extract"
Feb 17 15:41:02.029292 master-0 kubenswrapper[26425]: I0217 15:41:02.029082 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b7d8ef-eea6-42d1-bf43-330db9114949" containerName="extract"
Feb 17 15:41:02.029292 master-0 kubenswrapper[26425]: I0217 15:41:02.029103 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c7b7da-b5db-4665-8dc0-e7f1782ea90e" containerName="extract"
Feb 17 15:41:02.030879 master-0 kubenswrapper[26425]: I0217 15:41:02.030847 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"
Feb 17 15:41:02.051635 master-0 kubenswrapper[26425]: I0217 15:41:02.051590 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"]
Feb 17 15:41:02.101027 master-0 kubenswrapper[26425]: I0217 15:41:02.100958 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b66e9f21-8fe9-45ec-a922-605866dc86fb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg\" (UID: \"b66e9f21-8fe9-45ec-a922-605866dc86fb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"
Feb 17 15:41:02.101314 master-0 kubenswrapper[26425]: I0217 15:41:02.101049 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97gcw\" (UniqueName: \"kubernetes.io/projected/b66e9f21-8fe9-45ec-a922-605866dc86fb-kube-api-access-97gcw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg\" (UID: \"b66e9f21-8fe9-45ec-a922-605866dc86fb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"
Feb 17 15:41:02.101489 master-0 kubenswrapper[26425]: I0217 15:41:02.101380 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b66e9f21-8fe9-45ec-a922-605866dc86fb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg\" (UID: \"b66e9f21-8fe9-45ec-a922-605866dc86fb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"
Feb 17 15:41:02.203732 master-0 kubenswrapper[26425]: I0217 15:41:02.203619 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b66e9f21-8fe9-45ec-a922-605866dc86fb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg\" (UID: \"b66e9f21-8fe9-45ec-a922-605866dc86fb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"
Feb 17 15:41:02.204027 master-0 kubenswrapper[26425]: I0217 15:41:02.203870 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b66e9f21-8fe9-45ec-a922-605866dc86fb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg\" (UID: \"b66e9f21-8fe9-45ec-a922-605866dc86fb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"
Feb 17 15:41:02.204396 master-0 kubenswrapper[26425]: I0217 15:41:02.204125 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97gcw\" (UniqueName: \"kubernetes.io/projected/b66e9f21-8fe9-45ec-a922-605866dc86fb-kube-api-access-97gcw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg\" (UID: \"b66e9f21-8fe9-45ec-a922-605866dc86fb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"
Feb 17 15:41:02.204832 master-0 kubenswrapper[26425]: I0217 15:41:02.204780 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b66e9f21-8fe9-45ec-a922-605866dc86fb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg\" (UID: \"b66e9f21-8fe9-45ec-a922-605866dc86fb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"
Feb 17 15:41:02.205638 master-0 kubenswrapper[26425]: I0217 15:41:02.205450 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b66e9f21-8fe9-45ec-a922-605866dc86fb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg\" (UID: \"b66e9f21-8fe9-45ec-a922-605866dc86fb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"
Feb 17 15:41:02.224621 master-0 kubenswrapper[26425]: I0217 15:41:02.224550 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97gcw\" (UniqueName: \"kubernetes.io/projected/b66e9f21-8fe9-45ec-a922-605866dc86fb-kube-api-access-97gcw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg\" (UID: \"b66e9f21-8fe9-45ec-a922-605866dc86fb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"
Feb 17 15:41:02.352247 master-0 kubenswrapper[26425]: I0217 15:41:02.352112 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"
Feb 17 15:41:02.838851 master-0 kubenswrapper[26425]: I0217 15:41:02.838776 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"]
Feb 17 15:41:02.844394 master-0 kubenswrapper[26425]: W0217 15:41:02.844259 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb66e9f21_8fe9_45ec_a922_605866dc86fb.slice/crio-873335f145d2ca915d06e881239582c37f7a1f38da88a0ce5074f887df640ccb WatchSource:0}: Error finding container 873335f145d2ca915d06e881239582c37f7a1f38da88a0ce5074f887df640ccb: Status 404 returned error can't find the container with id 873335f145d2ca915d06e881239582c37f7a1f38da88a0ce5074f887df640ccb
Feb 17 15:41:03.208610 master-0 kubenswrapper[26425]: I0217 15:41:03.208512 26425 generic.go:334] "Generic (PLEG): container finished" podID="b66e9f21-8fe9-45ec-a922-605866dc86fb" containerID="9af478ae69c500205220f8fa4c637d10714f258197494090b6fb346dfed79cb4" exitCode=0
Feb 17 15:41:03.208610 master-0 kubenswrapper[26425]: I0217 15:41:03.208586 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg" event={"ID":"b66e9f21-8fe9-45ec-a922-605866dc86fb","Type":"ContainerDied","Data":"9af478ae69c500205220f8fa4c637d10714f258197494090b6fb346dfed79cb4"}
Feb 17 15:41:03.209944 master-0 kubenswrapper[26425]: I0217 15:41:03.208629 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg" event={"ID":"b66e9f21-8fe9-45ec-a922-605866dc86fb","Type":"ContainerStarted","Data":"873335f145d2ca915d06e881239582c37f7a1f38da88a0ce5074f887df640ccb"}
Feb 17 15:41:05.247605 master-0 kubenswrapper[26425]: I0217 15:41:05.241270 26425 generic.go:334] "Generic (PLEG): container finished" podID="b66e9f21-8fe9-45ec-a922-605866dc86fb" containerID="dbb4a1f8e64ead974970f47c64e93fca4eaaf728c6e707a50fdf0185903db52b" exitCode=0
Feb 17 15:41:05.247605 master-0 kubenswrapper[26425]: I0217 15:41:05.241325 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg" event={"ID":"b66e9f21-8fe9-45ec-a922-605866dc86fb","Type":"ContainerDied","Data":"dbb4a1f8e64ead974970f47c64e93fca4eaaf728c6e707a50fdf0185903db52b"}
Feb 17 15:41:06.255825 master-0 kubenswrapper[26425]: I0217 15:41:06.255738 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg" event={"ID":"b66e9f21-8fe9-45ec-a922-605866dc86fb","Type":"ContainerStarted","Data":"1ff6408182ece0f82a7c6f1cdab7e8a82c3d0951c69f93ea303c9e597443aadf"}
Feb 17 15:41:06.430990 master-0 kubenswrapper[26425]: I0217 15:41:06.430759 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg" podStartSLOduration=4.082896592 podStartE2EDuration="5.430714439s" podCreationTimestamp="2026-02-17 15:41:01 +0000 UTC" firstStartedPulling="2026-02-17 15:41:03.211433298 +0000 UTC m=+1525.103157116" lastFinishedPulling="2026-02-17 15:41:04.559251145 +0000 UTC m=+1526.450974963" observedRunningTime="2026-02-17 15:41:06.416159877 +0000 UTC m=+1528.307883775" watchObservedRunningTime="2026-02-17 15:41:06.430714439 +0000 UTC m=+1528.322438337"
Feb 17 15:41:06.847795 master-0 kubenswrapper[26425]: I0217 15:41:06.847617 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-vkmx4"]
Feb 17 15:41:06.849143 master-0 kubenswrapper[26425]: I0217 15:41:06.849096 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-vkmx4"
Feb 17 15:41:06.851622 master-0 kubenswrapper[26425]: I0217 15:41:06.851579 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Feb 17 15:41:06.852232 master-0 kubenswrapper[26425]: I0217 15:41:06.852182 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Feb 17 15:41:06.862244 master-0 kubenswrapper[26425]: I0217 15:41:06.862184 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-vkmx4"]
Feb 17 15:41:06.919144 master-0 kubenswrapper[26425]: I0217 15:41:06.919070 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9xj2\" (UniqueName: \"kubernetes.io/projected/6b68f442-54f6-4173-8574-6639cf15fdce-kube-api-access-s9xj2\") pod \"cert-manager-operator-controller-manager-66c8bdd694-vkmx4\" (UID: \"6b68f442-54f6-4173-8574-6639cf15fdce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-vkmx4"
Feb 17 15:41:06.919379 master-0 kubenswrapper[26425]: I0217 15:41:06.919179 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6b68f442-54f6-4173-8574-6639cf15fdce-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-vkmx4\" (UID: \"6b68f442-54f6-4173-8574-6639cf15fdce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-vkmx4"
Feb 17 15:41:07.020736 master-0 kubenswrapper[26425]: I0217 15:41:07.020675 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9xj2\" (UniqueName: \"kubernetes.io/projected/6b68f442-54f6-4173-8574-6639cf15fdce-kube-api-access-s9xj2\") pod \"cert-manager-operator-controller-manager-66c8bdd694-vkmx4\" (UID: \"6b68f442-54f6-4173-8574-6639cf15fdce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-vkmx4"
Feb 17 15:41:07.020736 master-0 kubenswrapper[26425]: I0217 15:41:07.020731 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6b68f442-54f6-4173-8574-6639cf15fdce-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-vkmx4\" (UID: \"6b68f442-54f6-4173-8574-6639cf15fdce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-vkmx4"
Feb 17 15:41:07.021245 master-0 kubenswrapper[26425]: I0217 15:41:07.021222 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6b68f442-54f6-4173-8574-6639cf15fdce-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-vkmx4\" (UID: \"6b68f442-54f6-4173-8574-6639cf15fdce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-vkmx4"
Feb 17 15:41:07.044161 master-0 kubenswrapper[26425]: I0217 15:41:07.044116 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9xj2\" (UniqueName: \"kubernetes.io/projected/6b68f442-54f6-4173-8574-6639cf15fdce-kube-api-access-s9xj2\") pod \"cert-manager-operator-controller-manager-66c8bdd694-vkmx4\" (UID: \"6b68f442-54f6-4173-8574-6639cf15fdce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-vkmx4"
Feb 17 15:41:07.176430 master-0 kubenswrapper[26425]: I0217 15:41:07.176363 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-vkmx4"
Feb 17 15:41:07.354924 master-0 kubenswrapper[26425]: I0217 15:41:07.347839 26425 generic.go:334] "Generic (PLEG): container finished" podID="b66e9f21-8fe9-45ec-a922-605866dc86fb" containerID="1ff6408182ece0f82a7c6f1cdab7e8a82c3d0951c69f93ea303c9e597443aadf" exitCode=0
Feb 17 15:41:07.354924 master-0 kubenswrapper[26425]: I0217 15:41:07.347922 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg" event={"ID":"b66e9f21-8fe9-45ec-a922-605866dc86fb","Type":"ContainerDied","Data":"1ff6408182ece0f82a7c6f1cdab7e8a82c3d0951c69f93ea303c9e597443aadf"}
Feb 17 15:41:07.678563 master-0 kubenswrapper[26425]: W0217 15:41:07.677842 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b68f442_54f6_4173_8574_6639cf15fdce.slice/crio-7d190e8c9e881a811f67b7785d1d53886f0840b6040eb0e3c73d27a561c75dcc WatchSource:0}: Error finding container 7d190e8c9e881a811f67b7785d1d53886f0840b6040eb0e3c73d27a561c75dcc: Status 404 returned error can't find the container with id 7d190e8c9e881a811f67b7785d1d53886f0840b6040eb0e3c73d27a561c75dcc
Feb 17 15:41:07.679852 master-0 kubenswrapper[26425]: I0217 15:41:07.679811 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-vkmx4"]
Feb 17 15:41:08.354771 master-0 kubenswrapper[26425]: I0217 15:41:08.354690 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-vkmx4" event={"ID":"6b68f442-54f6-4173-8574-6639cf15fdce","Type":"ContainerStarted","Data":"7d190e8c9e881a811f67b7785d1d53886f0840b6040eb0e3c73d27a561c75dcc"}
Feb 17 15:41:08.813213 master-0 kubenswrapper[26425]: I0217 15:41:08.813146 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"
Feb 17 15:41:08.888034 master-0 kubenswrapper[26425]: I0217 15:41:08.887958 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97gcw\" (UniqueName: \"kubernetes.io/projected/b66e9f21-8fe9-45ec-a922-605866dc86fb-kube-api-access-97gcw\") pod \"b66e9f21-8fe9-45ec-a922-605866dc86fb\" (UID: \"b66e9f21-8fe9-45ec-a922-605866dc86fb\") "
Feb 17 15:41:08.888346 master-0 kubenswrapper[26425]: I0217 15:41:08.888065 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b66e9f21-8fe9-45ec-a922-605866dc86fb-bundle\") pod \"b66e9f21-8fe9-45ec-a922-605866dc86fb\" (UID: \"b66e9f21-8fe9-45ec-a922-605866dc86fb\") "
Feb 17 15:41:08.888346 master-0 kubenswrapper[26425]: I0217 15:41:08.888177 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b66e9f21-8fe9-45ec-a922-605866dc86fb-util\") pod \"b66e9f21-8fe9-45ec-a922-605866dc86fb\" (UID: \"b66e9f21-8fe9-45ec-a922-605866dc86fb\") "
Feb 17 15:41:08.904816 master-0 kubenswrapper[26425]: I0217 15:41:08.900235 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66e9f21-8fe9-45ec-a922-605866dc86fb-util" (OuterVolumeSpecName: "util") pod "b66e9f21-8fe9-45ec-a922-605866dc86fb" (UID: "b66e9f21-8fe9-45ec-a922-605866dc86fb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:41:08.904816 master-0 kubenswrapper[26425]: I0217 15:41:08.900807 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66e9f21-8fe9-45ec-a922-605866dc86fb-bundle" (OuterVolumeSpecName: "bundle") pod "b66e9f21-8fe9-45ec-a922-605866dc86fb" (UID: "b66e9f21-8fe9-45ec-a922-605866dc86fb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:41:08.927900 master-0 kubenswrapper[26425]: I0217 15:41:08.910908 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b66e9f21-8fe9-45ec-a922-605866dc86fb-kube-api-access-97gcw" (OuterVolumeSpecName: "kube-api-access-97gcw") pod "b66e9f21-8fe9-45ec-a922-605866dc86fb" (UID: "b66e9f21-8fe9-45ec-a922-605866dc86fb"). InnerVolumeSpecName "kube-api-access-97gcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:41:08.993245 master-0 kubenswrapper[26425]: I0217 15:41:08.992248 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97gcw\" (UniqueName: \"kubernetes.io/projected/b66e9f21-8fe9-45ec-a922-605866dc86fb-kube-api-access-97gcw\") on node \"master-0\" DevicePath \"\""
Feb 17 15:41:08.993245 master-0 kubenswrapper[26425]: I0217 15:41:08.992293 26425 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b66e9f21-8fe9-45ec-a922-605866dc86fb-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:41:08.993245 master-0 kubenswrapper[26425]: I0217 15:41:08.992304 26425 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b66e9f21-8fe9-45ec-a922-605866dc86fb-util\") on node \"master-0\" DevicePath \"\""
Feb 17 15:41:09.367117 master-0 kubenswrapper[26425]: I0217 15:41:09.367050 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg" event={"ID":"b66e9f21-8fe9-45ec-a922-605866dc86fb","Type":"ContainerDied","Data":"873335f145d2ca915d06e881239582c37f7a1f38da88a0ce5074f887df640ccb"}
Feb 17 15:41:09.367350 master-0 kubenswrapper[26425]: I0217 15:41:09.367128 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="873335f145d2ca915d06e881239582c37f7a1f38da88a0ce5074f887df640ccb"
Feb 17 15:41:09.367350 master-0 kubenswrapper[26425]: I0217 15:41:09.367177 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg"
Feb 17 15:41:11.412853 master-0 kubenswrapper[26425]: I0217 15:41:11.412701 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-vkmx4" event={"ID":"6b68f442-54f6-4173-8574-6639cf15fdce","Type":"ContainerStarted","Data":"d4c344118fe064cc04cd1289cf43995fc5ff7530febc9c3531d4b225faa3971d"}
Feb 17 15:41:11.441819 master-0 kubenswrapper[26425]: I0217 15:41:11.441729 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-vkmx4" podStartSLOduration=1.982300969 podStartE2EDuration="5.441695005s" podCreationTimestamp="2026-02-17 15:41:06 +0000 UTC" firstStartedPulling="2026-02-17 15:41:07.681511455 +0000 UTC m=+1529.573235273" lastFinishedPulling="2026-02-17 15:41:11.140905481 +0000 UTC m=+1533.032629309" observedRunningTime="2026-02-17 15:41:11.440722951 +0000 UTC m=+1533.332446839" watchObservedRunningTime="2026-02-17 15:41:11.441695005 +0000 UTC m=+1533.333418863"
Feb 17 15:41:15.730859 master-0 kubenswrapper[26425]: I0217 15:41:15.730795 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-d6jf7"]
Feb 17 15:41:15.731505 master-0 kubenswrapper[26425]: E0217 15:41:15.731073 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66e9f21-8fe9-45ec-a922-605866dc86fb" containerName="pull"
Feb 17 15:41:15.731505 master-0 kubenswrapper[26425]: I0217 15:41:15.731085 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66e9f21-8fe9-45ec-a922-605866dc86fb" containerName="pull"
Feb 17 15:41:15.731505 master-0 kubenswrapper[26425]: E0217 15:41:15.731110 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66e9f21-8fe9-45ec-a922-605866dc86fb" containerName="util"
Feb 17 15:41:15.731505 master-0 kubenswrapper[26425]: I0217 15:41:15.731116 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66e9f21-8fe9-45ec-a922-605866dc86fb" containerName="util"
Feb 17 15:41:15.731505 master-0 kubenswrapper[26425]: E0217 15:41:15.731123 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66e9f21-8fe9-45ec-a922-605866dc86fb" containerName="extract"
Feb 17 15:41:15.731505 master-0 kubenswrapper[26425]: I0217 15:41:15.731130 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66e9f21-8fe9-45ec-a922-605866dc86fb" containerName="extract"
Feb 17 15:41:15.731505 master-0 kubenswrapper[26425]: I0217 15:41:15.731299 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="b66e9f21-8fe9-45ec-a922-605866dc86fb" containerName="extract"
Feb 17 15:41:15.731815 master-0 kubenswrapper[26425]: I0217 15:41:15.731789 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-d6jf7"
Feb 17 15:41:15.735045 master-0 kubenswrapper[26425]: I0217 15:41:15.735004 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 17 15:41:15.735336 master-0 kubenswrapper[26425]: I0217 15:41:15.735313 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 17 15:41:15.753061 master-0 kubenswrapper[26425]: I0217 15:41:15.752996 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-d6jf7"]
Feb 17 15:41:15.833887 master-0 kubenswrapper[26425]: I0217 15:41:15.833822 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpltd\" (UniqueName: \"kubernetes.io/projected/57c8553f-5baa-4bbd-9084-f1cbe139e528-kube-api-access-hpltd\") pod \"cert-manager-webhook-6888856db4-d6jf7\" (UID: \"57c8553f-5baa-4bbd-9084-f1cbe139e528\") " pod="cert-manager/cert-manager-webhook-6888856db4-d6jf7"
Feb 17 15:41:15.834109 master-0 kubenswrapper[26425]: I0217 15:41:15.834012 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57c8553f-5baa-4bbd-9084-f1cbe139e528-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-d6jf7\" (UID: \"57c8553f-5baa-4bbd-9084-f1cbe139e528\") " pod="cert-manager/cert-manager-webhook-6888856db4-d6jf7"
Feb 17 15:41:15.935805 master-0 kubenswrapper[26425]: I0217 15:41:15.935737 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57c8553f-5baa-4bbd-9084-f1cbe139e528-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-d6jf7\" (UID: \"57c8553f-5baa-4bbd-9084-f1cbe139e528\") " pod="cert-manager/cert-manager-webhook-6888856db4-d6jf7"
Feb 17 15:41:15.935805 master-0 kubenswrapper[26425]: I0217 15:41:15.935805 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpltd\" (UniqueName: \"kubernetes.io/projected/57c8553f-5baa-4bbd-9084-f1cbe139e528-kube-api-access-hpltd\") pod \"cert-manager-webhook-6888856db4-d6jf7\" (UID: \"57c8553f-5baa-4bbd-9084-f1cbe139e528\") " pod="cert-manager/cert-manager-webhook-6888856db4-d6jf7"
Feb 17 15:41:15.953349 master-0 kubenswrapper[26425]: I0217 15:41:15.953293 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpltd\" (UniqueName: \"kubernetes.io/projected/57c8553f-5baa-4bbd-9084-f1cbe139e528-kube-api-access-hpltd\") pod \"cert-manager-webhook-6888856db4-d6jf7\" (UID: \"57c8553f-5baa-4bbd-9084-f1cbe139e528\") " pod="cert-manager/cert-manager-webhook-6888856db4-d6jf7"
Feb 17 15:41:15.953643 master-0 kubenswrapper[26425]: I0217 15:41:15.953623 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57c8553f-5baa-4bbd-9084-f1cbe139e528-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-d6jf7\" (UID: \"57c8553f-5baa-4bbd-9084-f1cbe139e528\") " pod="cert-manager/cert-manager-webhook-6888856db4-d6jf7"
Feb 17 15:41:16.046994 master-0 kubenswrapper[26425]: I0217 15:41:16.046835 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-d6jf7"
Feb 17 15:41:16.490881 master-0 kubenswrapper[26425]: I0217 15:41:16.490358 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-d6jf7"]
Feb 17 15:41:16.498380 master-0 kubenswrapper[26425]: W0217 15:41:16.498330 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57c8553f_5baa_4bbd_9084_f1cbe139e528.slice/crio-bc2417cdeb058ad1ebe9df0973ece97e7982dec3c3edc8333fb72d286d762ef8 WatchSource:0}: Error finding container bc2417cdeb058ad1ebe9df0973ece97e7982dec3c3edc8333fb72d286d762ef8: Status 404 returned error can't find the container with id bc2417cdeb058ad1ebe9df0973ece97e7982dec3c3edc8333fb72d286d762ef8
Feb 17 15:41:16.710647 master-0 kubenswrapper[26425]: I0217 15:41:16.710569 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-62r82"]
Feb 17 15:41:16.711550 master-0 kubenswrapper[26425]: I0217 15:41:16.711437 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-62r82"
Feb 17 15:41:16.740511 master-0 kubenswrapper[26425]: I0217 15:41:16.732541 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-62r82"]
Feb 17 15:41:16.850886 master-0 kubenswrapper[26425]: I0217 15:41:16.850751 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b798c84-5e08-40d8-9e97-5dc403c2b31b-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-62r82\" (UID: \"9b798c84-5e08-40d8-9e97-5dc403c2b31b\") " pod="cert-manager/cert-manager-cainjector-5545bd876-62r82"
Feb 17 15:41:16.850886 master-0 kubenswrapper[26425]: I0217 15:41:16.850854 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z6sc\" (UniqueName: \"kubernetes.io/projected/9b798c84-5e08-40d8-9e97-5dc403c2b31b-kube-api-access-9z6sc\") pod \"cert-manager-cainjector-5545bd876-62r82\" (UID: \"9b798c84-5e08-40d8-9e97-5dc403c2b31b\") " pod="cert-manager/cert-manager-cainjector-5545bd876-62r82"
Feb 17 15:41:16.953252 master-0 kubenswrapper[26425]: I0217 15:41:16.952772 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z6sc\" (UniqueName: \"kubernetes.io/projected/9b798c84-5e08-40d8-9e97-5dc403c2b31b-kube-api-access-9z6sc\") pod \"cert-manager-cainjector-5545bd876-62r82\" (UID: \"9b798c84-5e08-40d8-9e97-5dc403c2b31b\") " pod="cert-manager/cert-manager-cainjector-5545bd876-62r82"
Feb 17 15:41:16.953652 master-0 kubenswrapper[26425]: I0217 15:41:16.953564 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b798c84-5e08-40d8-9e97-5dc403c2b31b-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-62r82\" (UID: \"9b798c84-5e08-40d8-9e97-5dc403c2b31b\") " pod="cert-manager/cert-manager-cainjector-5545bd876-62r82"
Feb 17 15:41:16.971951 master-0 kubenswrapper[26425]: I0217 15:41:16.971878 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b798c84-5e08-40d8-9e97-5dc403c2b31b-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-62r82\" (UID: \"9b798c84-5e08-40d8-9e97-5dc403c2b31b\") " pod="cert-manager/cert-manager-cainjector-5545bd876-62r82"
Feb 17 15:41:16.974811 master-0 kubenswrapper[26425]: I0217 15:41:16.974753 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z6sc\" (UniqueName: \"kubernetes.io/projected/9b798c84-5e08-40d8-9e97-5dc403c2b31b-kube-api-access-9z6sc\") pod \"cert-manager-cainjector-5545bd876-62r82\" (UID: \"9b798c84-5e08-40d8-9e97-5dc403c2b31b\") " pod="cert-manager/cert-manager-cainjector-5545bd876-62r82"
Feb 17 15:41:17.036723 master-0 kubenswrapper[26425]: I0217 15:41:17.036664 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-62r82"
Feb 17 15:41:17.476382 master-0 kubenswrapper[26425]: I0217 15:41:17.476258 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-d6jf7" event={"ID":"57c8553f-5baa-4bbd-9084-f1cbe139e528","Type":"ContainerStarted","Data":"bc2417cdeb058ad1ebe9df0973ece97e7982dec3c3edc8333fb72d286d762ef8"}
Feb 17 15:41:17.480728 master-0 kubenswrapper[26425]: I0217 15:41:17.480682 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-62r82"]
Feb 17 15:41:17.492847 master-0 kubenswrapper[26425]: W0217 15:41:17.492793 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b798c84_5e08_40d8_9e97_5dc403c2b31b.slice/crio-dabbd84409d3759b5cba10b21dfb50e553d29225cceb0f3a82236285c9cb5725 WatchSource:0}: Error finding container dabbd84409d3759b5cba10b21dfb50e553d29225cceb0f3a82236285c9cb5725: Status 404 returned error can't find the container with id dabbd84409d3759b5cba10b21dfb50e553d29225cceb0f3a82236285c9cb5725
Feb 17 15:41:18.182867 master-0 kubenswrapper[26425]: I0217 15:41:18.182800 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-vbkqw"]
Feb 17 15:41:18.183897 master-0 kubenswrapper[26425]: I0217 15:41:18.183869 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-vbkqw"
Feb 17 15:41:18.185905 master-0 kubenswrapper[26425]: I0217 15:41:18.185870 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 17 15:41:18.186901 master-0 kubenswrapper[26425]: I0217 15:41:18.186860 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 17 15:41:18.213124 master-0 kubenswrapper[26425]: I0217 15:41:18.213070 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-vbkqw"]
Feb 17 15:41:18.274885 master-0 kubenswrapper[26425]: I0217 15:41:18.274810 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kng7f\" (UniqueName: \"kubernetes.io/projected/160d346f-1b4e-42fc-b8ea-17d3e9af02f2-kube-api-access-kng7f\") pod \"nmstate-operator-694c9596b7-vbkqw\" (UID: \"160d346f-1b4e-42fc-b8ea-17d3e9af02f2\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-vbkqw"
Feb 17 15:41:18.376764 master-0 kubenswrapper[26425]: I0217 15:41:18.376672 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kng7f\" (UniqueName: \"kubernetes.io/projected/160d346f-1b4e-42fc-b8ea-17d3e9af02f2-kube-api-access-kng7f\") pod \"nmstate-operator-694c9596b7-vbkqw\" (UID: \"160d346f-1b4e-42fc-b8ea-17d3e9af02f2\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-vbkqw"
Feb 17 15:41:18.403664 master-0 kubenswrapper[26425]: I0217 15:41:18.403599 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kng7f\" (UniqueName: \"kubernetes.io/projected/160d346f-1b4e-42fc-b8ea-17d3e9af02f2-kube-api-access-kng7f\") pod \"nmstate-operator-694c9596b7-vbkqw\" (UID: \"160d346f-1b4e-42fc-b8ea-17d3e9af02f2\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-vbkqw"
Feb 17 15:41:18.485478 master-0 kubenswrapper[26425]: I0217 15:41:18.485345 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-62r82" event={"ID":"9b798c84-5e08-40d8-9e97-5dc403c2b31b","Type":"ContainerStarted","Data":"dabbd84409d3759b5cba10b21dfb50e553d29225cceb0f3a82236285c9cb5725"}
Feb 17 15:41:18.501827 master-0 kubenswrapper[26425]: I0217 15:41:18.501778 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-vbkqw"
Feb 17 15:41:18.959011 master-0 kubenswrapper[26425]: I0217 15:41:18.958956 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-vbkqw"]
Feb 17 15:41:18.963124 master-0 kubenswrapper[26425]: W0217 15:41:18.963050 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod160d346f_1b4e_42fc_b8ea_17d3e9af02f2.slice/crio-572735352c360c0fc0a37dca5e7f6567b7c7bb54ac5c3b40182e0b3140f5da6c WatchSource:0}: Error finding container 572735352c360c0fc0a37dca5e7f6567b7c7bb54ac5c3b40182e0b3140f5da6c: Status 404 returned error can't find the container with id 572735352c360c0fc0a37dca5e7f6567b7c7bb54ac5c3b40182e0b3140f5da6c
Feb 17 15:41:19.494659 master-0 kubenswrapper[26425]: I0217 15:41:19.494594 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-vbkqw" event={"ID":"160d346f-1b4e-42fc-b8ea-17d3e9af02f2","Type":"ContainerStarted","Data":"572735352c360c0fc0a37dca5e7f6567b7c7bb54ac5c3b40182e0b3140f5da6c"}
Feb 17 15:41:23.600497 master-0 kubenswrapper[26425]: I0217 15:41:23.597804 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-d6jf7" event={"ID":"57c8553f-5baa-4bbd-9084-f1cbe139e528","Type":"ContainerStarted","Data":"c913565f5d3028ec75189ebd75ea2ad334756b49803e7410d89e6a21c46a554a"}
Feb 17 15:41:23.600497 master-0 kubenswrapper[26425]: I0217 15:41:23.599027 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-d6jf7"
Feb 17 15:41:23.611105 master-0 kubenswrapper[26425]: I0217 15:41:23.610476 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-vbkqw" event={"ID":"160d346f-1b4e-42fc-b8ea-17d3e9af02f2","Type":"ContainerStarted","Data":"52efe0f1483c97031c6140506f9e79cdc6e5160ec8a54e3b0d93c31bedcee0b4"}
Feb 17 15:41:23.617752 master-0 kubenswrapper[26425]: I0217 15:41:23.616598 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-62r82" event={"ID":"9b798c84-5e08-40d8-9e97-5dc403c2b31b","Type":"ContainerStarted","Data":"5ae82afcac06d0d8f58e91aa4928a1aa0408c23d1049ce45e13919764045b9d9"}
Feb 17 15:41:23.648535 master-0 kubenswrapper[26425]: I0217 15:41:23.642544 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-d6jf7" podStartSLOduration=1.95208132 podStartE2EDuration="8.642526634s" podCreationTimestamp="2026-02-17 15:41:15 +0000 UTC" firstStartedPulling="2026-02-17 15:41:16.500477394 +0000 UTC m=+1538.392201212" lastFinishedPulling="2026-02-17 15:41:23.190922708 +0000 UTC m=+1545.082646526" observedRunningTime="2026-02-17 15:41:23.636315554 +0000 UTC m=+1545.528039372" watchObservedRunningTime="2026-02-17 15:41:23.642526634 +0000 UTC m=+1545.534250452"
Feb 17 15:41:23.671475 master-0 kubenswrapper[26425]: I0217 15:41:23.669047 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-62r82" podStartSLOduration=1.93573346 podStartE2EDuration="7.669029222s" podCreationTimestamp="2026-02-17 15:41:16 +0000 UTC" firstStartedPulling="2026-02-17 15:41:17.49560553 +0000 UTC m=+1539.387329348" lastFinishedPulling="2026-02-17 15:41:23.228901292 +0000 UTC m=+1545.120625110" observedRunningTime="2026-02-17 15:41:23.668007578 +0000 UTC m=+1545.559731406" watchObservedRunningTime="2026-02-17 15:41:23.669029222 +0000 UTC m=+1545.560753040"
Feb 17 15:41:23.704538 master-0 kubenswrapper[26425]: I0217 15:41:23.704041 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-vbkqw" podStartSLOduration=1.4804134580000001 podStartE2EDuration="5.704022595s" podCreationTimestamp="2026-02-17 15:41:18 +0000 UTC" firstStartedPulling="2026-02-17 15:41:18.967903756 +0000 UTC m=+1540.859627604" lastFinishedPulling="2026-02-17 15:41:23.191512923 +0000 UTC m=+1545.083236741" observedRunningTime="2026-02-17 15:41:23.692740473 +0000 UTC m=+1545.584464291" watchObservedRunningTime="2026-02-17 15:41:23.704022595 +0000 UTC m=+1545.595746413"
Feb 17 15:41:23.758341 master-0 kubenswrapper[26425]: I0217 15:41:23.758281 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx"]
Feb 17 15:41:23.759420 master-0 kubenswrapper[26425]: I0217 15:41:23.759389 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx"
Feb 17 15:41:23.768137 master-0 kubenswrapper[26425]: I0217 15:41:23.767119 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx"]
Feb 17 15:41:23.803175 master-0 kubenswrapper[26425]: I0217 15:41:23.803118 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 17 15:41:23.803366 master-0 kubenswrapper[26425]: I0217 15:41:23.803183 26425 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 17 15:41:23.803366 master-0 kubenswrapper[26425]: I0217 15:41:23.803105 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 17 15:41:23.803366 master-0 kubenswrapper[26425]: I0217 15:41:23.803296 26425 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 17 15:41:23.905315 master-0 kubenswrapper[26425]: I0217 15:41:23.905252 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8060b845-0739-43c3-ab09-834965e030d5-webhook-cert\") pod \"metallb-operator-controller-manager-7f874cc45d-jsprx\" (UID: \"8060b845-0739-43c3-ab09-834965e030d5\") " pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx"
Feb 17 15:41:23.905599 master-0 kubenswrapper[26425]: I0217 15:41:23.905344 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-przzh\" (UniqueName: \"kubernetes.io/projected/8060b845-0739-43c3-ab09-834965e030d5-kube-api-access-przzh\") pod \"metallb-operator-controller-manager-7f874cc45d-jsprx\" (UID: \"8060b845-0739-43c3-ab09-834965e030d5\") "
pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx" Feb 17 15:41:23.905599 master-0 kubenswrapper[26425]: I0217 15:41:23.905436 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8060b845-0739-43c3-ab09-834965e030d5-apiservice-cert\") pod \"metallb-operator-controller-manager-7f874cc45d-jsprx\" (UID: \"8060b845-0739-43c3-ab09-834965e030d5\") " pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx" Feb 17 15:41:24.001079 master-0 kubenswrapper[26425]: I0217 15:41:24.000867 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv"] Feb 17 15:41:24.002361 master-0 kubenswrapper[26425]: I0217 15:41:24.002305 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" Feb 17 15:41:24.004507 master-0 kubenswrapper[26425]: I0217 15:41:24.004409 26425 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 17 15:41:24.004667 master-0 kubenswrapper[26425]: I0217 15:41:24.004579 26425 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 15:41:24.006519 master-0 kubenswrapper[26425]: I0217 15:41:24.006408 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8060b845-0739-43c3-ab09-834965e030d5-apiservice-cert\") pod \"metallb-operator-controller-manager-7f874cc45d-jsprx\" (UID: \"8060b845-0739-43c3-ab09-834965e030d5\") " pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx" Feb 17 15:41:24.006674 master-0 kubenswrapper[26425]: I0217 15:41:24.006638 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/8060b845-0739-43c3-ab09-834965e030d5-webhook-cert\") pod \"metallb-operator-controller-manager-7f874cc45d-jsprx\" (UID: \"8060b845-0739-43c3-ab09-834965e030d5\") " pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx" Feb 17 15:41:24.006831 master-0 kubenswrapper[26425]: I0217 15:41:24.006814 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-przzh\" (UniqueName: \"kubernetes.io/projected/8060b845-0739-43c3-ab09-834965e030d5-kube-api-access-przzh\") pod \"metallb-operator-controller-manager-7f874cc45d-jsprx\" (UID: \"8060b845-0739-43c3-ab09-834965e030d5\") " pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx" Feb 17 15:41:24.014322 master-0 kubenswrapper[26425]: I0217 15:41:24.014256 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8060b845-0739-43c3-ab09-834965e030d5-apiservice-cert\") pod \"metallb-operator-controller-manager-7f874cc45d-jsprx\" (UID: \"8060b845-0739-43c3-ab09-834965e030d5\") " pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx" Feb 17 15:41:24.017380 master-0 kubenswrapper[26425]: I0217 15:41:24.014944 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8060b845-0739-43c3-ab09-834965e030d5-webhook-cert\") pod \"metallb-operator-controller-manager-7f874cc45d-jsprx\" (UID: \"8060b845-0739-43c3-ab09-834965e030d5\") " pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx" Feb 17 15:41:24.020583 master-0 kubenswrapper[26425]: I0217 15:41:24.020521 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv"] Feb 17 15:41:24.045821 master-0 kubenswrapper[26425]: I0217 15:41:24.045721 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-przzh\" (UniqueName: \"kubernetes.io/projected/8060b845-0739-43c3-ab09-834965e030d5-kube-api-access-przzh\") pod \"metallb-operator-controller-manager-7f874cc45d-jsprx\" (UID: \"8060b845-0739-43c3-ab09-834965e030d5\") " pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx" Feb 17 15:41:24.110410 master-0 kubenswrapper[26425]: I0217 15:41:24.108074 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d4455fa-171b-417d-bf9f-1b0440b74e93-apiservice-cert\") pod \"metallb-operator-webhook-server-7664575c4d-8f7gv\" (UID: \"6d4455fa-171b-417d-bf9f-1b0440b74e93\") " pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" Feb 17 15:41:24.110410 master-0 kubenswrapper[26425]: I0217 15:41:24.108152 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d4455fa-171b-417d-bf9f-1b0440b74e93-webhook-cert\") pod \"metallb-operator-webhook-server-7664575c4d-8f7gv\" (UID: \"6d4455fa-171b-417d-bf9f-1b0440b74e93\") " pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" Feb 17 15:41:24.110410 master-0 kubenswrapper[26425]: I0217 15:41:24.108204 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4b2c\" (UniqueName: \"kubernetes.io/projected/6d4455fa-171b-417d-bf9f-1b0440b74e93-kube-api-access-d4b2c\") pod \"metallb-operator-webhook-server-7664575c4d-8f7gv\" (UID: \"6d4455fa-171b-417d-bf9f-1b0440b74e93\") " pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" Feb 17 15:41:24.134075 master-0 kubenswrapper[26425]: I0217 15:41:24.133992 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx" Feb 17 15:41:24.213566 master-0 kubenswrapper[26425]: I0217 15:41:24.213439 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d4455fa-171b-417d-bf9f-1b0440b74e93-apiservice-cert\") pod \"metallb-operator-webhook-server-7664575c4d-8f7gv\" (UID: \"6d4455fa-171b-417d-bf9f-1b0440b74e93\") " pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" Feb 17 15:41:24.213808 master-0 kubenswrapper[26425]: I0217 15:41:24.213599 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d4455fa-171b-417d-bf9f-1b0440b74e93-webhook-cert\") pod \"metallb-operator-webhook-server-7664575c4d-8f7gv\" (UID: \"6d4455fa-171b-417d-bf9f-1b0440b74e93\") " pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" Feb 17 15:41:24.213808 master-0 kubenswrapper[26425]: I0217 15:41:24.213714 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4b2c\" (UniqueName: \"kubernetes.io/projected/6d4455fa-171b-417d-bf9f-1b0440b74e93-kube-api-access-d4b2c\") pod \"metallb-operator-webhook-server-7664575c4d-8f7gv\" (UID: \"6d4455fa-171b-417d-bf9f-1b0440b74e93\") " pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" Feb 17 15:41:24.221334 master-0 kubenswrapper[26425]: I0217 15:41:24.221240 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d4455fa-171b-417d-bf9f-1b0440b74e93-webhook-cert\") pod \"metallb-operator-webhook-server-7664575c4d-8f7gv\" (UID: \"6d4455fa-171b-417d-bf9f-1b0440b74e93\") " pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" Feb 17 15:41:24.224618 master-0 kubenswrapper[26425]: I0217 15:41:24.224566 26425 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d4455fa-171b-417d-bf9f-1b0440b74e93-apiservice-cert\") pod \"metallb-operator-webhook-server-7664575c4d-8f7gv\" (UID: \"6d4455fa-171b-417d-bf9f-1b0440b74e93\") " pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" Feb 17 15:41:24.233667 master-0 kubenswrapper[26425]: I0217 15:41:24.233413 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4b2c\" (UniqueName: \"kubernetes.io/projected/6d4455fa-171b-417d-bf9f-1b0440b74e93-kube-api-access-d4b2c\") pod \"metallb-operator-webhook-server-7664575c4d-8f7gv\" (UID: \"6d4455fa-171b-417d-bf9f-1b0440b74e93\") " pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" Feb 17 15:41:24.368449 master-0 kubenswrapper[26425]: I0217 15:41:24.368374 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" Feb 17 15:41:24.690806 master-0 kubenswrapper[26425]: I0217 15:41:24.690745 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx"] Feb 17 15:41:24.712396 master-0 kubenswrapper[26425]: W0217 15:41:24.711733 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8060b845_0739_43c3_ab09_834965e030d5.slice/crio-34e754090fe639b59edb616efe84f74008c24a224a24c5d3d40f26146e064b80 WatchSource:0}: Error finding container 34e754090fe639b59edb616efe84f74008c24a224a24c5d3d40f26146e064b80: Status 404 returned error can't find the container with id 34e754090fe639b59edb616efe84f74008c24a224a24c5d3d40f26146e064b80 Feb 17 15:41:24.992154 master-0 kubenswrapper[26425]: I0217 15:41:24.992103 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv"] Feb 17 15:41:25.654911 
master-0 kubenswrapper[26425]: I0217 15:41:25.654838 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" event={"ID":"6d4455fa-171b-417d-bf9f-1b0440b74e93","Type":"ContainerStarted","Data":"6feb5953067326bcc88e013497075e62f3c6a47d170f07af4d47eb63c40e2e08"} Feb 17 15:41:25.656340 master-0 kubenswrapper[26425]: I0217 15:41:25.656310 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx" event={"ID":"8060b845-0739-43c3-ab09-834965e030d5","Type":"ContainerStarted","Data":"34e754090fe639b59edb616efe84f74008c24a224a24c5d3d40f26146e064b80"} Feb 17 15:41:28.277947 master-0 kubenswrapper[26425]: I0217 15:41:28.277897 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-xrzb8"] Feb 17 15:41:28.278985 master-0 kubenswrapper[26425]: I0217 15:41:28.278962 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-xrzb8" Feb 17 15:41:28.332151 master-0 kubenswrapper[26425]: I0217 15:41:28.297520 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-xrzb8"] Feb 17 15:41:28.409476 master-0 kubenswrapper[26425]: I0217 15:41:28.401256 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c55b8988-9c63-49f6-a2c7-a1c1cdc26748-bound-sa-token\") pod \"cert-manager-545d4d4674-xrzb8\" (UID: \"c55b8988-9c63-49f6-a2c7-a1c1cdc26748\") " pod="cert-manager/cert-manager-545d4d4674-xrzb8" Feb 17 15:41:28.409476 master-0 kubenswrapper[26425]: I0217 15:41:28.401339 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49czk\" (UniqueName: \"kubernetes.io/projected/c55b8988-9c63-49f6-a2c7-a1c1cdc26748-kube-api-access-49czk\") pod 
\"cert-manager-545d4d4674-xrzb8\" (UID: \"c55b8988-9c63-49f6-a2c7-a1c1cdc26748\") " pod="cert-manager/cert-manager-545d4d4674-xrzb8" Feb 17 15:41:28.503633 master-0 kubenswrapper[26425]: I0217 15:41:28.503582 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c55b8988-9c63-49f6-a2c7-a1c1cdc26748-bound-sa-token\") pod \"cert-manager-545d4d4674-xrzb8\" (UID: \"c55b8988-9c63-49f6-a2c7-a1c1cdc26748\") " pod="cert-manager/cert-manager-545d4d4674-xrzb8" Feb 17 15:41:28.503856 master-0 kubenswrapper[26425]: I0217 15:41:28.503670 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49czk\" (UniqueName: \"kubernetes.io/projected/c55b8988-9c63-49f6-a2c7-a1c1cdc26748-kube-api-access-49czk\") pod \"cert-manager-545d4d4674-xrzb8\" (UID: \"c55b8988-9c63-49f6-a2c7-a1c1cdc26748\") " pod="cert-manager/cert-manager-545d4d4674-xrzb8" Feb 17 15:41:28.528177 master-0 kubenswrapper[26425]: I0217 15:41:28.526227 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49czk\" (UniqueName: \"kubernetes.io/projected/c55b8988-9c63-49f6-a2c7-a1c1cdc26748-kube-api-access-49czk\") pod \"cert-manager-545d4d4674-xrzb8\" (UID: \"c55b8988-9c63-49f6-a2c7-a1c1cdc26748\") " pod="cert-manager/cert-manager-545d4d4674-xrzb8" Feb 17 15:41:28.530161 master-0 kubenswrapper[26425]: I0217 15:41:28.530121 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c55b8988-9c63-49f6-a2c7-a1c1cdc26748-bound-sa-token\") pod \"cert-manager-545d4d4674-xrzb8\" (UID: \"c55b8988-9c63-49f6-a2c7-a1c1cdc26748\") " pod="cert-manager/cert-manager-545d4d4674-xrzb8" Feb 17 15:41:28.651490 master-0 kubenswrapper[26425]: I0217 15:41:28.651374 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-xrzb8" Feb 17 15:41:29.213231 master-0 kubenswrapper[26425]: I0217 15:41:29.213110 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-xrzb8"] Feb 17 15:41:30.624498 master-0 kubenswrapper[26425]: W0217 15:41:30.624251 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc55b8988_9c63_49f6_a2c7_a1c1cdc26748.slice/crio-cc77550246904c0317085a1962fe2ac12456e39c83e038c30a9790a06e4fa1c9 WatchSource:0}: Error finding container cc77550246904c0317085a1962fe2ac12456e39c83e038c30a9790a06e4fa1c9: Status 404 returned error can't find the container with id cc77550246904c0317085a1962fe2ac12456e39c83e038c30a9790a06e4fa1c9 Feb 17 15:41:30.718044 master-0 kubenswrapper[26425]: I0217 15:41:30.717988 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-xrzb8" event={"ID":"c55b8988-9c63-49f6-a2c7-a1c1cdc26748","Type":"ContainerStarted","Data":"cc77550246904c0317085a1962fe2ac12456e39c83e038c30a9790a06e4fa1c9"} Feb 17 15:41:31.053427 master-0 kubenswrapper[26425]: I0217 15:41:31.053335 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-d6jf7" Feb 17 15:41:31.746479 master-0 kubenswrapper[26425]: I0217 15:41:31.742836 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-xrzb8" event={"ID":"c55b8988-9c63-49f6-a2c7-a1c1cdc26748","Type":"ContainerStarted","Data":"c6e933f431820cfd686e183c913c22dfea3084b733cf5cf8f61e0a737afb41dd"} Feb 17 15:41:31.746479 master-0 kubenswrapper[26425]: I0217 15:41:31.745191 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx" 
event={"ID":"8060b845-0739-43c3-ab09-834965e030d5","Type":"ContainerStarted","Data":"8961b3931e7be790cd5cd5377828963ea1f421ca3a17d50cbd61bdf0a5599f9d"} Feb 17 15:41:31.746479 master-0 kubenswrapper[26425]: I0217 15:41:31.746328 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx" Feb 17 15:41:31.756488 master-0 kubenswrapper[26425]: I0217 15:41:31.752914 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" event={"ID":"6d4455fa-171b-417d-bf9f-1b0440b74e93","Type":"ContainerStarted","Data":"56b072248cb3868baf3fd49055f30a1bb27301a3dfb7aa8e5665a7feda8bf88e"} Feb 17 15:41:31.756488 master-0 kubenswrapper[26425]: I0217 15:41:31.753620 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" Feb 17 15:41:31.776486 master-0 kubenswrapper[26425]: I0217 15:41:31.775714 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-xrzb8" podStartSLOduration=3.775688292 podStartE2EDuration="3.775688292s" podCreationTimestamp="2026-02-17 15:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:41:31.771639815 +0000 UTC m=+1553.663363643" watchObservedRunningTime="2026-02-17 15:41:31.775688292 +0000 UTC m=+1553.667412130" Feb 17 15:41:31.830477 master-0 kubenswrapper[26425]: I0217 15:41:31.830165 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" podStartSLOduration=3.111193886 podStartE2EDuration="8.830150224s" podCreationTimestamp="2026-02-17 15:41:23 +0000 UTC" firstStartedPulling="2026-02-17 15:41:25.000008655 +0000 UTC m=+1546.891732483" lastFinishedPulling="2026-02-17 
15:41:30.718965003 +0000 UTC m=+1552.610688821" observedRunningTime="2026-02-17 15:41:31.830080523 +0000 UTC m=+1553.721804351" watchObservedRunningTime="2026-02-17 15:41:31.830150224 +0000 UTC m=+1553.721874042" Feb 17 15:41:32.040143 master-0 kubenswrapper[26425]: I0217 15:41:32.039960 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx" podStartSLOduration=3.085147408 podStartE2EDuration="9.039934615s" podCreationTimestamp="2026-02-17 15:41:23 +0000 UTC" firstStartedPulling="2026-02-17 15:41:24.73440862 +0000 UTC m=+1546.626132438" lastFinishedPulling="2026-02-17 15:41:30.689195817 +0000 UTC m=+1552.580919645" observedRunningTime="2026-02-17 15:41:32.028438429 +0000 UTC m=+1553.920162287" watchObservedRunningTime="2026-02-17 15:41:32.039934615 +0000 UTC m=+1553.931658453" Feb 17 15:41:33.949055 master-0 kubenswrapper[26425]: I0217 15:41:33.948977 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5tqc8"] Feb 17 15:41:33.950575 master-0 kubenswrapper[26425]: I0217 15:41:33.950546 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5tqc8" Feb 17 15:41:33.952290 master-0 kubenswrapper[26425]: I0217 15:41:33.952246 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 17 15:41:33.955019 master-0 kubenswrapper[26425]: I0217 15:41:33.953555 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 17 15:41:33.966841 master-0 kubenswrapper[26425]: I0217 15:41:33.966789 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5tqc8"] Feb 17 15:41:34.013999 master-0 kubenswrapper[26425]: I0217 15:41:34.013934 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtxhr\" (UniqueName: \"kubernetes.io/projected/82305353-07d7-4127-b7a4-dcf94ae19b80-kube-api-access-qtxhr\") pod \"obo-prometheus-operator-68bc856cb9-5tqc8\" (UID: \"82305353-07d7-4127-b7a4-dcf94ae19b80\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5tqc8" Feb 17 15:41:34.100643 master-0 kubenswrapper[26425]: I0217 15:41:34.100577 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7"] Feb 17 15:41:34.104390 master-0 kubenswrapper[26425]: I0217 15:41:34.102410 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7" Feb 17 15:41:34.105738 master-0 kubenswrapper[26425]: I0217 15:41:34.105643 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 17 15:41:34.117219 master-0 kubenswrapper[26425]: I0217 15:41:34.117171 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtxhr\" (UniqueName: \"kubernetes.io/projected/82305353-07d7-4127-b7a4-dcf94ae19b80-kube-api-access-qtxhr\") pod \"obo-prometheus-operator-68bc856cb9-5tqc8\" (UID: \"82305353-07d7-4127-b7a4-dcf94ae19b80\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5tqc8" Feb 17 15:41:34.121576 master-0 kubenswrapper[26425]: I0217 15:41:34.121523 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s"] Feb 17 15:41:34.122776 master-0 kubenswrapper[26425]: I0217 15:41:34.122747 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s" Feb 17 15:41:34.151576 master-0 kubenswrapper[26425]: I0217 15:41:34.145982 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtxhr\" (UniqueName: \"kubernetes.io/projected/82305353-07d7-4127-b7a4-dcf94ae19b80-kube-api-access-qtxhr\") pod \"obo-prometheus-operator-68bc856cb9-5tqc8\" (UID: \"82305353-07d7-4127-b7a4-dcf94ae19b80\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5tqc8" Feb 17 15:41:34.151576 master-0 kubenswrapper[26425]: I0217 15:41:34.151045 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7"] Feb 17 15:41:34.210793 master-0 kubenswrapper[26425]: I0217 15:41:34.210278 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s"] Feb 17 15:41:34.219191 master-0 kubenswrapper[26425]: I0217 15:41:34.218328 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9b7bcaa8-5073-45b5-a3f0-ccc9938c954a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s\" (UID: \"9b7bcaa8-5073-45b5-a3f0-ccc9938c954a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s" Feb 17 15:41:34.219191 master-0 kubenswrapper[26425]: I0217 15:41:34.218402 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9eae63d-2a2a-4946-b110-ad39f40e6a12-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7\" (UID: \"b9eae63d-2a2a-4946-b110-ad39f40e6a12\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7" Feb 17 15:41:34.219191 master-0 
kubenswrapper[26425]: I0217 15:41:34.218624 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b9eae63d-2a2a-4946-b110-ad39f40e6a12-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7\" (UID: \"b9eae63d-2a2a-4946-b110-ad39f40e6a12\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7" Feb 17 15:41:34.219191 master-0 kubenswrapper[26425]: I0217 15:41:34.218793 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9b7bcaa8-5073-45b5-a3f0-ccc9938c954a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s\" (UID: \"9b7bcaa8-5073-45b5-a3f0-ccc9938c954a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s" Feb 17 15:41:34.265041 master-0 kubenswrapper[26425]: I0217 15:41:34.264985 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5tqc8" Feb 17 15:41:34.294210 master-0 kubenswrapper[26425]: I0217 15:41:34.294162 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-d8nkj"] Feb 17 15:41:34.295749 master-0 kubenswrapper[26425]: I0217 15:41:34.295237 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-d8nkj" Feb 17 15:41:34.297129 master-0 kubenswrapper[26425]: I0217 15:41:34.297079 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 17 15:41:34.318891 master-0 kubenswrapper[26425]: I0217 15:41:34.314021 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-d8nkj"] Feb 17 15:41:34.323773 master-0 kubenswrapper[26425]: I0217 15:41:34.323527 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9eae63d-2a2a-4946-b110-ad39f40e6a12-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7\" (UID: \"b9eae63d-2a2a-4946-b110-ad39f40e6a12\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7" Feb 17 15:41:34.323773 master-0 kubenswrapper[26425]: I0217 15:41:34.323608 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hzn9\" (UniqueName: \"kubernetes.io/projected/52657988-3ac3-4919-b3c5-2be2b87204d4-kube-api-access-7hzn9\") pod \"observability-operator-59bdc8b94-d8nkj\" (UID: \"52657988-3ac3-4919-b3c5-2be2b87204d4\") " pod="openshift-operators/observability-operator-59bdc8b94-d8nkj" Feb 17 15:41:34.323773 master-0 kubenswrapper[26425]: I0217 15:41:34.323650 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b9eae63d-2a2a-4946-b110-ad39f40e6a12-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7\" (UID: \"b9eae63d-2a2a-4946-b110-ad39f40e6a12\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7" Feb 17 15:41:34.323773 master-0 kubenswrapper[26425]: I0217 15:41:34.323717 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9b7bcaa8-5073-45b5-a3f0-ccc9938c954a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s\" (UID: \"9b7bcaa8-5073-45b5-a3f0-ccc9938c954a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s" Feb 17 15:41:34.324050 master-0 kubenswrapper[26425]: I0217 15:41:34.323781 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9b7bcaa8-5073-45b5-a3f0-ccc9938c954a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s\" (UID: \"9b7bcaa8-5073-45b5-a3f0-ccc9938c954a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s" Feb 17 15:41:34.324050 master-0 kubenswrapper[26425]: I0217 15:41:34.323805 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/52657988-3ac3-4919-b3c5-2be2b87204d4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-d8nkj\" (UID: \"52657988-3ac3-4919-b3c5-2be2b87204d4\") " pod="openshift-operators/observability-operator-59bdc8b94-d8nkj" Feb 17 15:41:34.326679 master-0 kubenswrapper[26425]: I0217 15:41:34.326654 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9b7bcaa8-5073-45b5-a3f0-ccc9938c954a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s\" (UID: \"9b7bcaa8-5073-45b5-a3f0-ccc9938c954a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s" Feb 17 15:41:34.335301 master-0 kubenswrapper[26425]: I0217 15:41:34.327247 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/9b7bcaa8-5073-45b5-a3f0-ccc9938c954a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s\" (UID: \"9b7bcaa8-5073-45b5-a3f0-ccc9938c954a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s" Feb 17 15:41:34.335301 master-0 kubenswrapper[26425]: I0217 15:41:34.327861 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b9eae63d-2a2a-4946-b110-ad39f40e6a12-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7\" (UID: \"b9eae63d-2a2a-4946-b110-ad39f40e6a12\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7" Feb 17 15:41:34.335301 master-0 kubenswrapper[26425]: I0217 15:41:34.328547 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9eae63d-2a2a-4946-b110-ad39f40e6a12-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7\" (UID: \"b9eae63d-2a2a-4946-b110-ad39f40e6a12\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7" Feb 17 15:41:34.429925 master-0 kubenswrapper[26425]: I0217 15:41:34.428258 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/52657988-3ac3-4919-b3c5-2be2b87204d4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-d8nkj\" (UID: \"52657988-3ac3-4919-b3c5-2be2b87204d4\") " pod="openshift-operators/observability-operator-59bdc8b94-d8nkj" Feb 17 15:41:34.429925 master-0 kubenswrapper[26425]: I0217 15:41:34.428335 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hzn9\" (UniqueName: \"kubernetes.io/projected/52657988-3ac3-4919-b3c5-2be2b87204d4-kube-api-access-7hzn9\") pod \"observability-operator-59bdc8b94-d8nkj\" (UID: 
\"52657988-3ac3-4919-b3c5-2be2b87204d4\") " pod="openshift-operators/observability-operator-59bdc8b94-d8nkj" Feb 17 15:41:34.432440 master-0 kubenswrapper[26425]: I0217 15:41:34.432395 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/52657988-3ac3-4919-b3c5-2be2b87204d4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-d8nkj\" (UID: \"52657988-3ac3-4919-b3c5-2be2b87204d4\") " pod="openshift-operators/observability-operator-59bdc8b94-d8nkj" Feb 17 15:41:34.485478 master-0 kubenswrapper[26425]: I0217 15:41:34.485360 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7" Feb 17 15:41:34.498338 master-0 kubenswrapper[26425]: I0217 15:41:34.498277 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tw9pm"] Feb 17 15:41:34.501658 master-0 kubenswrapper[26425]: I0217 15:41:34.500480 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tw9pm" Feb 17 15:41:34.505378 master-0 kubenswrapper[26425]: I0217 15:41:34.505342 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hzn9\" (UniqueName: \"kubernetes.io/projected/52657988-3ac3-4919-b3c5-2be2b87204d4-kube-api-access-7hzn9\") pod \"observability-operator-59bdc8b94-d8nkj\" (UID: \"52657988-3ac3-4919-b3c5-2be2b87204d4\") " pod="openshift-operators/observability-operator-59bdc8b94-d8nkj" Feb 17 15:41:34.520076 master-0 kubenswrapper[26425]: I0217 15:41:34.520037 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s" Feb 17 15:41:34.529561 master-0 kubenswrapper[26425]: I0217 15:41:34.529523 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpkv6\" (UniqueName: \"kubernetes.io/projected/0db553e9-b82a-49ad-81ad-db1a95bbc63a-kube-api-access-gpkv6\") pod \"perses-operator-5bf474d74f-tw9pm\" (UID: \"0db553e9-b82a-49ad-81ad-db1a95bbc63a\") " pod="openshift-operators/perses-operator-5bf474d74f-tw9pm" Feb 17 15:41:34.529741 master-0 kubenswrapper[26425]: I0217 15:41:34.529586 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0db553e9-b82a-49ad-81ad-db1a95bbc63a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tw9pm\" (UID: \"0db553e9-b82a-49ad-81ad-db1a95bbc63a\") " pod="openshift-operators/perses-operator-5bf474d74f-tw9pm" Feb 17 15:41:34.540289 master-0 kubenswrapper[26425]: I0217 15:41:34.532515 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tw9pm"] Feb 17 15:41:34.630525 master-0 kubenswrapper[26425]: I0217 15:41:34.630441 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpkv6\" (UniqueName: \"kubernetes.io/projected/0db553e9-b82a-49ad-81ad-db1a95bbc63a-kube-api-access-gpkv6\") pod \"perses-operator-5bf474d74f-tw9pm\" (UID: \"0db553e9-b82a-49ad-81ad-db1a95bbc63a\") " pod="openshift-operators/perses-operator-5bf474d74f-tw9pm" Feb 17 15:41:34.632247 master-0 kubenswrapper[26425]: I0217 15:41:34.630538 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0db553e9-b82a-49ad-81ad-db1a95bbc63a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tw9pm\" (UID: 
\"0db553e9-b82a-49ad-81ad-db1a95bbc63a\") " pod="openshift-operators/perses-operator-5bf474d74f-tw9pm" Feb 17 15:41:34.633018 master-0 kubenswrapper[26425]: I0217 15:41:34.632594 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0db553e9-b82a-49ad-81ad-db1a95bbc63a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tw9pm\" (UID: \"0db553e9-b82a-49ad-81ad-db1a95bbc63a\") " pod="openshift-operators/perses-operator-5bf474d74f-tw9pm" Feb 17 15:41:34.650121 master-0 kubenswrapper[26425]: I0217 15:41:34.650066 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpkv6\" (UniqueName: \"kubernetes.io/projected/0db553e9-b82a-49ad-81ad-db1a95bbc63a-kube-api-access-gpkv6\") pod \"perses-operator-5bf474d74f-tw9pm\" (UID: \"0db553e9-b82a-49ad-81ad-db1a95bbc63a\") " pod="openshift-operators/perses-operator-5bf474d74f-tw9pm" Feb 17 15:41:34.693443 master-0 kubenswrapper[26425]: I0217 15:41:34.693369 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-d8nkj" Feb 17 15:41:34.812624 master-0 kubenswrapper[26425]: I0217 15:41:34.810054 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5tqc8"] Feb 17 15:41:34.837089 master-0 kubenswrapper[26425]: I0217 15:41:34.837042 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tw9pm" Feb 17 15:41:34.837553 master-0 kubenswrapper[26425]: W0217 15:41:34.837423 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82305353_07d7_4127_b7a4_dcf94ae19b80.slice/crio-5e9f98ba9e0c7d466574caf77dbc1d105ecf635f76c22e428c5617730f1c49cc WatchSource:0}: Error finding container 5e9f98ba9e0c7d466574caf77dbc1d105ecf635f76c22e428c5617730f1c49cc: Status 404 returned error can't find the container with id 5e9f98ba9e0c7d466574caf77dbc1d105ecf635f76c22e428c5617730f1c49cc Feb 17 15:41:35.070133 master-0 kubenswrapper[26425]: W0217 15:41:35.064220 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9eae63d_2a2a_4946_b110_ad39f40e6a12.slice/crio-065f5e13c623f8057b884ff4b7b1ad02fbf8530d185b0fe7e1a8873e72b6b7a7 WatchSource:0}: Error finding container 065f5e13c623f8057b884ff4b7b1ad02fbf8530d185b0fe7e1a8873e72b6b7a7: Status 404 returned error can't find the container with id 065f5e13c623f8057b884ff4b7b1ad02fbf8530d185b0fe7e1a8873e72b6b7a7 Feb 17 15:41:35.070133 master-0 kubenswrapper[26425]: I0217 15:41:35.069593 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7"] Feb 17 15:41:35.095086 master-0 kubenswrapper[26425]: I0217 15:41:35.093509 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s"] Feb 17 15:41:35.257640 master-0 kubenswrapper[26425]: I0217 15:41:35.257598 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-d8nkj"] Feb 17 15:41:35.316077 master-0 kubenswrapper[26425]: I0217 15:41:35.316022 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/perses-operator-5bf474d74f-tw9pm"] Feb 17 15:41:35.319186 master-0 kubenswrapper[26425]: W0217 15:41:35.319147 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0db553e9_b82a_49ad_81ad_db1a95bbc63a.slice/crio-4b46396db27d0a1e8edd79f47064a2fcdb9d1e135c50201bbf674c9ac66da4a5 WatchSource:0}: Error finding container 4b46396db27d0a1e8edd79f47064a2fcdb9d1e135c50201bbf674c9ac66da4a5: Status 404 returned error can't find the container with id 4b46396db27d0a1e8edd79f47064a2fcdb9d1e135c50201bbf674c9ac66da4a5 Feb 17 15:41:35.817747 master-0 kubenswrapper[26425]: I0217 15:41:35.817701 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s" event={"ID":"9b7bcaa8-5073-45b5-a3f0-ccc9938c954a","Type":"ContainerStarted","Data":"d5425b6ee1dd9d8e1c8d3e2e08b0926314b2b495a30d71e73e2f9513c550f6ba"} Feb 17 15:41:35.819614 master-0 kubenswrapper[26425]: I0217 15:41:35.819558 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tw9pm" event={"ID":"0db553e9-b82a-49ad-81ad-db1a95bbc63a","Type":"ContainerStarted","Data":"4b46396db27d0a1e8edd79f47064a2fcdb9d1e135c50201bbf674c9ac66da4a5"} Feb 17 15:41:35.823277 master-0 kubenswrapper[26425]: I0217 15:41:35.823179 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-d8nkj" event={"ID":"52657988-3ac3-4919-b3c5-2be2b87204d4","Type":"ContainerStarted","Data":"cfa2f9ab5f0fb0a6f517ae7e729154069b7d34b9705df5b58944037befa23f2d"} Feb 17 15:41:35.825217 master-0 kubenswrapper[26425]: I0217 15:41:35.825178 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5tqc8" 
event={"ID":"82305353-07d7-4127-b7a4-dcf94ae19b80","Type":"ContainerStarted","Data":"5e9f98ba9e0c7d466574caf77dbc1d105ecf635f76c22e428c5617730f1c49cc"} Feb 17 15:41:35.826688 master-0 kubenswrapper[26425]: I0217 15:41:35.826347 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7" event={"ID":"b9eae63d-2a2a-4946-b110-ad39f40e6a12","Type":"ContainerStarted","Data":"065f5e13c623f8057b884ff4b7b1ad02fbf8530d185b0fe7e1a8873e72b6b7a7"} Feb 17 15:41:41.122202 master-0 kubenswrapper[26425]: I0217 15:41:41.122129 26425 scope.go:117] "RemoveContainer" containerID="e9aecde5e6438f850dbad5ae273e3c99bc8982f855499ceec4aa52f9bb199b51" Feb 17 15:41:44.375601 master-0 kubenswrapper[26425]: I0217 15:41:44.375548 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv" Feb 17 15:41:46.103484 master-0 kubenswrapper[26425]: I0217 15:41:46.102102 26425 scope.go:117] "RemoveContainer" containerID="d7c12fb1b92d28ef7ba81926d7b090d49d50669135d83d19da43eab3563fbe49" Feb 17 15:41:46.773880 master-0 kubenswrapper[26425]: I0217 15:41:46.773837 26425 scope.go:117] "RemoveContainer" containerID="7bd7a427fdfea568f9e25f8ac1dfa94717d2fe4a7b16f61327856994d3fecf37" Feb 17 15:41:47.996253 master-0 kubenswrapper[26425]: I0217 15:41:47.995869 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tw9pm" event={"ID":"0db553e9-b82a-49ad-81ad-db1a95bbc63a","Type":"ContainerStarted","Data":"1e71d8ab2334dc4f42be14e09482f7ba8e4b8ca394439e1c4ee294f309737eab"} Feb 17 15:41:47.996253 master-0 kubenswrapper[26425]: I0217 15:41:47.995979 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-tw9pm" Feb 17 15:41:47.997719 master-0 kubenswrapper[26425]: I0217 15:41:47.997587 26425 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operators/observability-operator-59bdc8b94-d8nkj" event={"ID":"52657988-3ac3-4919-b3c5-2be2b87204d4","Type":"ContainerStarted","Data":"fd3aaff21f57e44cc7eae2283e3122b11a1d1caca402fe39b65c2e4b45d72054"} Feb 17 15:41:47.998011 master-0 kubenswrapper[26425]: I0217 15:41:47.997940 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-d8nkj" Feb 17 15:41:48.002325 master-0 kubenswrapper[26425]: I0217 15:41:48.001200 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5tqc8" event={"ID":"82305353-07d7-4127-b7a4-dcf94ae19b80","Type":"ContainerStarted","Data":"3bdef0265a03598c4a7386ecf3b82f55dadd533f84178d42e88eac457a3295bf"} Feb 17 15:41:48.008483 master-0 kubenswrapper[26425]: I0217 15:41:48.002283 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-d8nkj" Feb 17 15:41:48.012268 master-0 kubenswrapper[26425]: I0217 15:41:48.012224 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7" event={"ID":"b9eae63d-2a2a-4946-b110-ad39f40e6a12","Type":"ContainerStarted","Data":"0872182e920a4794b2389fcf1b61b2db4f579467dc1ebfc2d1f87018b4124bdc"} Feb 17 15:41:48.015109 master-0 kubenswrapper[26425]: I0217 15:41:48.015007 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s" event={"ID":"9b7bcaa8-5073-45b5-a3f0-ccc9938c954a","Type":"ContainerStarted","Data":"7ba79ad518c40c521d6d31ad50deb17a099e9c6ece5b6c82ff93fc4f72bcfc56"} Feb 17 15:41:48.026151 master-0 kubenswrapper[26425]: I0217 15:41:48.024852 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-tw9pm" podStartSLOduration=2.452692416 
podStartE2EDuration="14.024834585s" podCreationTimestamp="2026-02-17 15:41:34 +0000 UTC" firstStartedPulling="2026-02-17 15:41:35.323658817 +0000 UTC m=+1557.215382635" lastFinishedPulling="2026-02-17 15:41:46.895800976 +0000 UTC m=+1568.787524804" observedRunningTime="2026-02-17 15:41:48.023958884 +0000 UTC m=+1569.915682742" watchObservedRunningTime="2026-02-17 15:41:48.024834585 +0000 UTC m=+1569.916558403" Feb 17 15:41:48.051504 master-0 kubenswrapper[26425]: I0217 15:41:48.049618 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-d8nkj" podStartSLOduration=2.396976966 podStartE2EDuration="14.049591532s" podCreationTimestamp="2026-02-17 15:41:34 +0000 UTC" firstStartedPulling="2026-02-17 15:41:35.26109346 +0000 UTC m=+1557.152817278" lastFinishedPulling="2026-02-17 15:41:46.913708016 +0000 UTC m=+1568.805431844" observedRunningTime="2026-02-17 15:41:48.048162898 +0000 UTC m=+1569.939886756" watchObservedRunningTime="2026-02-17 15:41:48.049591532 +0000 UTC m=+1569.941315370" Feb 17 15:41:48.091989 master-0 kubenswrapper[26425]: I0217 15:41:48.091123 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5tqc8" podStartSLOduration=3.02843337 podStartE2EDuration="15.091080371s" podCreationTimestamp="2026-02-17 15:41:33 +0000 UTC" firstStartedPulling="2026-02-17 15:41:34.852193713 +0000 UTC m=+1556.743917531" lastFinishedPulling="2026-02-17 15:41:46.914840714 +0000 UTC m=+1568.806564532" observedRunningTime="2026-02-17 15:41:48.079338588 +0000 UTC m=+1569.971062466" watchObservedRunningTime="2026-02-17 15:41:48.091080371 +0000 UTC m=+1569.982804189" Feb 17 15:41:48.114326 master-0 kubenswrapper[26425]: I0217 15:41:48.113157 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s" 
podStartSLOduration=2.301152257 podStartE2EDuration="14.113139262s" podCreationTimestamp="2026-02-17 15:41:34 +0000 UTC" firstStartedPulling="2026-02-17 15:41:35.112063111 +0000 UTC m=+1557.003786939" lastFinishedPulling="2026-02-17 15:41:46.924050126 +0000 UTC m=+1568.815773944" observedRunningTime="2026-02-17 15:41:48.105502289 +0000 UTC m=+1569.997226127" watchObservedRunningTime="2026-02-17 15:41:48.113139262 +0000 UTC m=+1570.004863080" Feb 17 15:41:48.145491 master-0 kubenswrapper[26425]: I0217 15:41:48.144613 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7" podStartSLOduration=2.222868492 podStartE2EDuration="14.14459618s" podCreationTimestamp="2026-02-17 15:41:34 +0000 UTC" firstStartedPulling="2026-02-17 15:41:35.071128155 +0000 UTC m=+1556.962851973" lastFinishedPulling="2026-02-17 15:41:46.992855803 +0000 UTC m=+1568.884579661" observedRunningTime="2026-02-17 15:41:48.140612414 +0000 UTC m=+1570.032336222" watchObservedRunningTime="2026-02-17 15:41:48.14459618 +0000 UTC m=+1570.036319998" Feb 17 15:41:54.840054 master-0 kubenswrapper[26425]: I0217 15:41:54.839986 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-tw9pm" Feb 17 15:42:04.140210 master-0 kubenswrapper[26425]: I0217 15:42:04.140146 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx" Feb 17 15:42:12.812300 master-0 kubenswrapper[26425]: I0217 15:42:12.805387 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls"] Feb 17 15:42:12.812300 master-0 kubenswrapper[26425]: I0217 15:42:12.806735 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls" Feb 17 15:42:12.812300 master-0 kubenswrapper[26425]: I0217 15:42:12.809722 26425 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 17 15:42:12.815799 master-0 kubenswrapper[26425]: I0217 15:42:12.815430 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-t5g7s"] Feb 17 15:42:12.819674 master-0 kubenswrapper[26425]: I0217 15:42:12.819626 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.823889 master-0 kubenswrapper[26425]: I0217 15:42:12.823836 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 17 15:42:12.824094 master-0 kubenswrapper[26425]: I0217 15:42:12.823926 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls"] Feb 17 15:42:12.824142 master-0 kubenswrapper[26425]: I0217 15:42:12.824102 26425 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 17 15:42:12.878477 master-0 kubenswrapper[26425]: I0217 15:42:12.877584 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/acbd5225-9914-47ea-a945-c4e425c734c2-metrics-certs\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.878477 master-0 kubenswrapper[26425]: I0217 15:42:12.877654 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/acbd5225-9914-47ea-a945-c4e425c734c2-frr-conf\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.878477 master-0 
kubenswrapper[26425]: I0217 15:42:12.877696 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/acbd5225-9914-47ea-a945-c4e425c734c2-frr-sockets\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.878477 master-0 kubenswrapper[26425]: I0217 15:42:12.877761 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/acbd5225-9914-47ea-a945-c4e425c734c2-reloader\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.878477 master-0 kubenswrapper[26425]: I0217 15:42:12.877827 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/acbd5225-9914-47ea-a945-c4e425c734c2-metrics\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.878477 master-0 kubenswrapper[26425]: I0217 15:42:12.877858 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqmp9\" (UniqueName: \"kubernetes.io/projected/25008115-e70b-470c-892c-02ce884bb721-kube-api-access-cqmp9\") pod \"frr-k8s-webhook-server-78b44bf5bb-x52ls\" (UID: \"25008115-e70b-470c-892c-02ce884bb721\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls" Feb 17 15:42:12.878477 master-0 kubenswrapper[26425]: I0217 15:42:12.877881 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25008115-e70b-470c-892c-02ce884bb721-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-x52ls\" (UID: \"25008115-e70b-470c-892c-02ce884bb721\") " 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls" Feb 17 15:42:12.878477 master-0 kubenswrapper[26425]: I0217 15:42:12.877909 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/acbd5225-9914-47ea-a945-c4e425c734c2-frr-startup\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.878477 master-0 kubenswrapper[26425]: I0217 15:42:12.877933 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq68p\" (UniqueName: \"kubernetes.io/projected/acbd5225-9914-47ea-a945-c4e425c734c2-kube-api-access-mq68p\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.904930 master-0 kubenswrapper[26425]: I0217 15:42:12.904881 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-mj82t"] Feb 17 15:42:12.906796 master-0 kubenswrapper[26425]: I0217 15:42:12.906767 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-mj82t" Feb 17 15:42:12.909301 master-0 kubenswrapper[26425]: I0217 15:42:12.909194 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 17 15:42:12.910242 master-0 kubenswrapper[26425]: I0217 15:42:12.909606 26425 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 17 15:42:12.910382 master-0 kubenswrapper[26425]: I0217 15:42:12.910349 26425 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 17 15:42:12.926926 master-0 kubenswrapper[26425]: I0217 15:42:12.926871 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-8w79x"] Feb 17 15:42:12.929314 master-0 kubenswrapper[26425]: I0217 15:42:12.929277 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-8w79x" Feb 17 15:42:12.935086 master-0 kubenswrapper[26425]: I0217 15:42:12.932471 26425 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 17 15:42:12.980193 master-0 kubenswrapper[26425]: I0217 15:42:12.980082 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/acbd5225-9914-47ea-a945-c4e425c734c2-metrics-certs\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.980433 master-0 kubenswrapper[26425]: I0217 15:42:12.980298 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-memberlist\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t" Feb 17 15:42:12.980433 master-0 kubenswrapper[26425]: I0217 
15:42:12.980336 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/acbd5225-9914-47ea-a945-c4e425c734c2-frr-conf\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.980433 master-0 kubenswrapper[26425]: I0217 15:42:12.980371 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-metrics-certs\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t" Feb 17 15:42:12.980433 master-0 kubenswrapper[26425]: I0217 15:42:12.980403 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/acbd5225-9914-47ea-a945-c4e425c734c2-frr-sockets\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.980587 master-0 kubenswrapper[26425]: I0217 15:42:12.980476 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/acbd5225-9914-47ea-a945-c4e425c734c2-reloader\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.980587 master-0 kubenswrapper[26425]: I0217 15:42:12.980523 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c38bfe84-f8b8-42e0-b46e-a4f03db76894-cert\") pod \"controller-69bbfbf88f-8w79x\" (UID: \"c38bfe84-f8b8-42e0-b46e-a4f03db76894\") " pod="metallb-system/controller-69bbfbf88f-8w79x" Feb 17 15:42:12.980587 master-0 kubenswrapper[26425]: I0217 15:42:12.980566 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d46593c5-a748-40e4-aac9-e8123c3e024e-metallb-excludel2\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t" Feb 17 15:42:12.980693 master-0 kubenswrapper[26425]: I0217 15:42:12.980604 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/acbd5225-9914-47ea-a945-c4e425c734c2-metrics\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.980693 master-0 kubenswrapper[26425]: I0217 15:42:12.980626 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c38bfe84-f8b8-42e0-b46e-a4f03db76894-metrics-certs\") pod \"controller-69bbfbf88f-8w79x\" (UID: \"c38bfe84-f8b8-42e0-b46e-a4f03db76894\") " pod="metallb-system/controller-69bbfbf88f-8w79x" Feb 17 15:42:12.980693 master-0 kubenswrapper[26425]: I0217 15:42:12.980657 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqmp9\" (UniqueName: \"kubernetes.io/projected/25008115-e70b-470c-892c-02ce884bb721-kube-api-access-cqmp9\") pod \"frr-k8s-webhook-server-78b44bf5bb-x52ls\" (UID: \"25008115-e70b-470c-892c-02ce884bb721\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls" Feb 17 15:42:12.980693 master-0 kubenswrapper[26425]: I0217 15:42:12.980686 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25008115-e70b-470c-892c-02ce884bb721-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-x52ls\" (UID: \"25008115-e70b-470c-892c-02ce884bb721\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls" Feb 17 15:42:12.980873 master-0 kubenswrapper[26425]: I0217 15:42:12.980711 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8lzw\" (UniqueName: \"kubernetes.io/projected/c38bfe84-f8b8-42e0-b46e-a4f03db76894-kube-api-access-b8lzw\") pod \"controller-69bbfbf88f-8w79x\" (UID: \"c38bfe84-f8b8-42e0-b46e-a4f03db76894\") " pod="metallb-system/controller-69bbfbf88f-8w79x" Feb 17 15:42:12.980873 master-0 kubenswrapper[26425]: I0217 15:42:12.980743 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq68p\" (UniqueName: \"kubernetes.io/projected/acbd5225-9914-47ea-a945-c4e425c734c2-kube-api-access-mq68p\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.980873 master-0 kubenswrapper[26425]: I0217 15:42:12.980765 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/acbd5225-9914-47ea-a945-c4e425c734c2-frr-startup\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:12.980873 master-0 kubenswrapper[26425]: I0217 15:42:12.980789 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zd8w\" (UniqueName: \"kubernetes.io/projected/d46593c5-a748-40e4-aac9-e8123c3e024e-kube-api-access-6zd8w\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t" Feb 17 15:42:12.981004 master-0 kubenswrapper[26425]: E0217 15:42:12.980969 26425 secret.go:189] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 17 15:42:12.981039 master-0 kubenswrapper[26425]: E0217 15:42:12.981025 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/acbd5225-9914-47ea-a945-c4e425c734c2-metrics-certs podName:acbd5225-9914-47ea-a945-c4e425c734c2 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:42:13.481006607 +0000 UTC m=+1595.372730425 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/acbd5225-9914-47ea-a945-c4e425c734c2-metrics-certs") pod "frr-k8s-t5g7s" (UID: "acbd5225-9914-47ea-a945-c4e425c734c2") : secret "frr-k8s-certs-secret" not found
Feb 17 15:42:12.981777 master-0 kubenswrapper[26425]: I0217 15:42:12.981701 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/acbd5225-9914-47ea-a945-c4e425c734c2-frr-conf\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s"
Feb 17 15:42:12.981976 master-0 kubenswrapper[26425]: I0217 15:42:12.981952 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/acbd5225-9914-47ea-a945-c4e425c734c2-frr-sockets\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s"
Feb 17 15:42:12.982742 master-0 kubenswrapper[26425]: I0217 15:42:12.982188 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/acbd5225-9914-47ea-a945-c4e425c734c2-reloader\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s"
Feb 17 15:42:12.983138 master-0 kubenswrapper[26425]: I0217 15:42:12.983118 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/acbd5225-9914-47ea-a945-c4e425c734c2-metrics\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s"
Feb 17 15:42:12.984282 master-0 kubenswrapper[26425]: I0217 15:42:12.984260 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/acbd5225-9914-47ea-a945-c4e425c734c2-frr-startup\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s"
Feb 17 15:42:12.985426 master-0 kubenswrapper[26425]: I0217 15:42:12.985393 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25008115-e70b-470c-892c-02ce884bb721-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-x52ls\" (UID: \"25008115-e70b-470c-892c-02ce884bb721\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls"
Feb 17 15:42:12.996154 master-0 kubenswrapper[26425]: I0217 15:42:12.996105 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-8w79x"]
Feb 17 15:42:13.017906 master-0 kubenswrapper[26425]: I0217 15:42:13.017854 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq68p\" (UniqueName: \"kubernetes.io/projected/acbd5225-9914-47ea-a945-c4e425c734c2-kube-api-access-mq68p\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s"
Feb 17 15:42:13.033082 master-0 kubenswrapper[26425]: I0217 15:42:13.033021 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqmp9\" (UniqueName: \"kubernetes.io/projected/25008115-e70b-470c-892c-02ce884bb721-kube-api-access-cqmp9\") pod \"frr-k8s-webhook-server-78b44bf5bb-x52ls\" (UID: \"25008115-e70b-470c-892c-02ce884bb721\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls"
Feb 17 15:42:13.084270 master-0 kubenswrapper[26425]: I0217 15:42:13.082166 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c38bfe84-f8b8-42e0-b46e-a4f03db76894-cert\") pod \"controller-69bbfbf88f-8w79x\" (UID: \"c38bfe84-f8b8-42e0-b46e-a4f03db76894\") " pod="metallb-system/controller-69bbfbf88f-8w79x"
Feb 17 15:42:13.084270 master-0 kubenswrapper[26425]: I0217 15:42:13.082236 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d46593c5-a748-40e4-aac9-e8123c3e024e-metallb-excludel2\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t"
Feb 17 15:42:13.084270 master-0 kubenswrapper[26425]: I0217 15:42:13.082494 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c38bfe84-f8b8-42e0-b46e-a4f03db76894-metrics-certs\") pod \"controller-69bbfbf88f-8w79x\" (UID: \"c38bfe84-f8b8-42e0-b46e-a4f03db76894\") " pod="metallb-system/controller-69bbfbf88f-8w79x"
Feb 17 15:42:13.084270 master-0 kubenswrapper[26425]: E0217 15:42:13.082670 26425 secret.go:189] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Feb 17 15:42:13.084270 master-0 kubenswrapper[26425]: E0217 15:42:13.082872 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38bfe84-f8b8-42e0-b46e-a4f03db76894-metrics-certs podName:c38bfe84-f8b8-42e0-b46e-a4f03db76894 nodeName:}" failed. No retries permitted until 2026-02-17 15:42:13.58284691 +0000 UTC m=+1595.474570728 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c38bfe84-f8b8-42e0-b46e-a4f03db76894-metrics-certs") pod "controller-69bbfbf88f-8w79x" (UID: "c38bfe84-f8b8-42e0-b46e-a4f03db76894") : secret "controller-certs-secret" not found
Feb 17 15:42:13.084270 master-0 kubenswrapper[26425]: I0217 15:42:13.083175 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8lzw\" (UniqueName: \"kubernetes.io/projected/c38bfe84-f8b8-42e0-b46e-a4f03db76894-kube-api-access-b8lzw\") pod \"controller-69bbfbf88f-8w79x\" (UID: \"c38bfe84-f8b8-42e0-b46e-a4f03db76894\") " pod="metallb-system/controller-69bbfbf88f-8w79x"
Feb 17 15:42:13.084270 master-0 kubenswrapper[26425]: I0217 15:42:13.083224 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d46593c5-a748-40e4-aac9-e8123c3e024e-metallb-excludel2\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t"
Feb 17 15:42:13.084270 master-0 kubenswrapper[26425]: I0217 15:42:13.083470 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zd8w\" (UniqueName: \"kubernetes.io/projected/d46593c5-a748-40e4-aac9-e8123c3e024e-kube-api-access-6zd8w\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t"
Feb 17 15:42:13.084270 master-0 kubenswrapper[26425]: I0217 15:42:13.083665 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-memberlist\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t"
Feb 17 15:42:13.084270 master-0 kubenswrapper[26425]: I0217 15:42:13.083738 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-metrics-certs\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t"
Feb 17 15:42:13.084270 master-0 kubenswrapper[26425]: I0217 15:42:13.084148 26425 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 17 15:42:13.084270 master-0 kubenswrapper[26425]: E0217 15:42:13.084186 26425 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 17 15:42:13.084270 master-0 kubenswrapper[26425]: E0217 15:42:13.084233 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-memberlist podName:d46593c5-a748-40e4-aac9-e8123c3e024e nodeName:}" failed. No retries permitted until 2026-02-17 15:42:13.584217343 +0000 UTC m=+1595.475941161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-memberlist") pod "speaker-mj82t" (UID: "d46593c5-a748-40e4-aac9-e8123c3e024e") : secret "metallb-memberlist" not found
Feb 17 15:42:13.087392 master-0 kubenswrapper[26425]: I0217 15:42:13.087342 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-metrics-certs\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t"
Feb 17 15:42:13.099521 master-0 kubenswrapper[26425]: I0217 15:42:13.098748 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c38bfe84-f8b8-42e0-b46e-a4f03db76894-cert\") pod \"controller-69bbfbf88f-8w79x\" (UID: \"c38bfe84-f8b8-42e0-b46e-a4f03db76894\") " pod="metallb-system/controller-69bbfbf88f-8w79x"
Feb 17 15:42:13.107134 master-0 kubenswrapper[26425]: I0217 15:42:13.102616 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zd8w\" (UniqueName: \"kubernetes.io/projected/d46593c5-a748-40e4-aac9-e8123c3e024e-kube-api-access-6zd8w\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t"
Feb 17 15:42:13.119485 master-0 kubenswrapper[26425]: I0217 15:42:13.116272 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8lzw\" (UniqueName: \"kubernetes.io/projected/c38bfe84-f8b8-42e0-b46e-a4f03db76894-kube-api-access-b8lzw\") pod \"controller-69bbfbf88f-8w79x\" (UID: \"c38bfe84-f8b8-42e0-b46e-a4f03db76894\") " pod="metallb-system/controller-69bbfbf88f-8w79x"
Feb 17 15:42:13.132137 master-0 kubenswrapper[26425]: I0217 15:42:13.132079 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls"
Feb 17 15:42:13.490416 master-0 kubenswrapper[26425]: I0217 15:42:13.490330 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/acbd5225-9914-47ea-a945-c4e425c734c2-metrics-certs\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s"
Feb 17 15:42:13.495914 master-0 kubenswrapper[26425]: I0217 15:42:13.495816 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/acbd5225-9914-47ea-a945-c4e425c734c2-metrics-certs\") pod \"frr-k8s-t5g7s\" (UID: \"acbd5225-9914-47ea-a945-c4e425c734c2\") " pod="metallb-system/frr-k8s-t5g7s"
Feb 17 15:42:13.592769 master-0 kubenswrapper[26425]: I0217 15:42:13.592690 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c38bfe84-f8b8-42e0-b46e-a4f03db76894-metrics-certs\") pod \"controller-69bbfbf88f-8w79x\" (UID: \"c38bfe84-f8b8-42e0-b46e-a4f03db76894\") " pod="metallb-system/controller-69bbfbf88f-8w79x"
Feb 17 15:42:13.592972 master-0 kubenswrapper[26425]: I0217 15:42:13.592904 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-memberlist\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t"
Feb 17 15:42:13.593380 master-0 kubenswrapper[26425]: E0217 15:42:13.593332 26425 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 17 15:42:13.593693 master-0 kubenswrapper[26425]: E0217 15:42:13.593432 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-memberlist podName:d46593c5-a748-40e4-aac9-e8123c3e024e nodeName:}" failed. No retries permitted until 2026-02-17 15:42:14.593406906 +0000 UTC m=+1596.485130764 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-memberlist") pod "speaker-mj82t" (UID: "d46593c5-a748-40e4-aac9-e8123c3e024e") : secret "metallb-memberlist" not found
Feb 17 15:42:13.597961 master-0 kubenswrapper[26425]: I0217 15:42:13.597850 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c38bfe84-f8b8-42e0-b46e-a4f03db76894-metrics-certs\") pod \"controller-69bbfbf88f-8w79x\" (UID: \"c38bfe84-f8b8-42e0-b46e-a4f03db76894\") " pod="metallb-system/controller-69bbfbf88f-8w79x"
Feb 17 15:42:13.605295 master-0 kubenswrapper[26425]: W0217 15:42:13.605221 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25008115_e70b_470c_892c_02ce884bb721.slice/crio-7a5a7f9022297be7686dea1e480e1f1f99b7dac35cbdab2adc972cef38067925 WatchSource:0}: Error finding container 7a5a7f9022297be7686dea1e480e1f1f99b7dac35cbdab2adc972cef38067925: Status 404 returned error can't find the container with id 7a5a7f9022297be7686dea1e480e1f1f99b7dac35cbdab2adc972cef38067925
Feb 17 15:42:13.607082 master-0 kubenswrapper[26425]: I0217 15:42:13.606974 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls"]
Feb 17 15:42:13.748587 master-0 kubenswrapper[26425]: I0217 15:42:13.748375 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-t5g7s"
Feb 17 15:42:13.855080 master-0 kubenswrapper[26425]: I0217 15:42:13.855000 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-8w79x"
Feb 17 15:42:14.294841 master-0 kubenswrapper[26425]: I0217 15:42:14.294642 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-8w79x"]
Feb 17 15:42:14.310145 master-0 kubenswrapper[26425]: I0217 15:42:14.308487 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5g7s" event={"ID":"acbd5225-9914-47ea-a945-c4e425c734c2","Type":"ContainerStarted","Data":"c12039aa88f297175e4ef7def72dbc5a671680fac82eae17a433064741d4f0c7"}
Feb 17 15:42:14.310145 master-0 kubenswrapper[26425]: I0217 15:42:14.310014 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls" event={"ID":"25008115-e70b-470c-892c-02ce884bb721","Type":"ContainerStarted","Data":"7a5a7f9022297be7686dea1e480e1f1f99b7dac35cbdab2adc972cef38067925"}
Feb 17 15:42:14.612863 master-0 kubenswrapper[26425]: I0217 15:42:14.612811 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-memberlist\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t"
Feb 17 15:42:14.613552 master-0 kubenswrapper[26425]: E0217 15:42:14.613330 26425 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 17 15:42:14.613681 master-0 kubenswrapper[26425]: E0217 15:42:14.613670 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-memberlist podName:d46593c5-a748-40e4-aac9-e8123c3e024e nodeName:}" failed. No retries permitted until 2026-02-17 15:42:16.613654386 +0000 UTC m=+1598.505378204 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-memberlist") pod "speaker-mj82t" (UID: "d46593c5-a748-40e4-aac9-e8123c3e024e") : secret "metallb-memberlist" not found
Feb 17 15:42:15.027654 master-0 kubenswrapper[26425]: I0217 15:42:15.027578 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-xtbrb"]
Feb 17 15:42:15.041331 master-0 kubenswrapper[26425]: I0217 15:42:15.037826 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xtbrb"
Feb 17 15:42:15.065513 master-0 kubenswrapper[26425]: I0217 15:42:15.064007 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf"]
Feb 17 15:42:15.068745 master-0 kubenswrapper[26425]: I0217 15:42:15.068703 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf"
Feb 17 15:42:15.093906 master-0 kubenswrapper[26425]: I0217 15:42:15.072757 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 17 15:42:15.093906 master-0 kubenswrapper[26425]: I0217 15:42:15.086485 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-xtbrb"]
Feb 17 15:42:15.108570 master-0 kubenswrapper[26425]: I0217 15:42:15.107930 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf"]
Feb 17 15:42:15.126666 master-0 kubenswrapper[26425]: I0217 15:42:15.117664 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-44nvt"]
Feb 17 15:42:15.126666 master-0 kubenswrapper[26425]: I0217 15:42:15.118941 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:15.135731 master-0 kubenswrapper[26425]: I0217 15:42:15.135695 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksxqf\" (UniqueName: \"kubernetes.io/projected/28cff934-38e3-467a-9139-9a24c16b96d2-kube-api-access-ksxqf\") pod \"nmstate-webhook-866bcb46dc-4q7kf\" (UID: \"28cff934-38e3-467a-9139-9a24c16b96d2\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf"
Feb 17 15:42:15.139897 master-0 kubenswrapper[26425]: I0217 15:42:15.138438 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/28cff934-38e3-467a-9139-9a24c16b96d2-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-4q7kf\" (UID: \"28cff934-38e3-467a-9139-9a24c16b96d2\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf"
Feb 17 15:42:15.139897 master-0 kubenswrapper[26425]: I0217 15:42:15.138534 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnchg\" (UniqueName: \"kubernetes.io/projected/3bee787b-9b99-4abc-bc71-c1104a90c3f0-kube-api-access-xnchg\") pod \"nmstate-metrics-58c85c668d-xtbrb\" (UID: \"3bee787b-9b99-4abc-bc71-c1104a90c3f0\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-xtbrb"
Feb 17 15:42:15.210694 master-0 kubenswrapper[26425]: I0217 15:42:15.209332 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb"]
Feb 17 15:42:15.210909 master-0 kubenswrapper[26425]: I0217 15:42:15.210766 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb"
Feb 17 15:42:15.214395 master-0 kubenswrapper[26425]: I0217 15:42:15.214355 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 17 15:42:15.214967 master-0 kubenswrapper[26425]: I0217 15:42:15.214369 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 17 15:42:15.229791 master-0 kubenswrapper[26425]: I0217 15:42:15.229747 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb"]
Feb 17 15:42:15.240194 master-0 kubenswrapper[26425]: I0217 15:42:15.240093 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncrr7\" (UniqueName: \"kubernetes.io/projected/a912bfc9-ee75-4c3b-b945-fc77480d5bbc-kube-api-access-ncrr7\") pod \"nmstate-handler-44nvt\" (UID: \"a912bfc9-ee75-4c3b-b945-fc77480d5bbc\") " pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:15.240412 master-0 kubenswrapper[26425]: I0217 15:42:15.240270 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b60361e4-a40a-407d-a481-e6549424e165-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-c9ckb\" (UID: \"b60361e4-a40a-407d-a481-e6549424e165\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb"
Feb 17 15:42:15.240412 master-0 kubenswrapper[26425]: I0217 15:42:15.240337 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a912bfc9-ee75-4c3b-b945-fc77480d5bbc-nmstate-lock\") pod \"nmstate-handler-44nvt\" (UID: \"a912bfc9-ee75-4c3b-b945-fc77480d5bbc\") " pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:15.240412 master-0 kubenswrapper[26425]: I0217 15:42:15.240369 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a912bfc9-ee75-4c3b-b945-fc77480d5bbc-ovs-socket\") pod \"nmstate-handler-44nvt\" (UID: \"a912bfc9-ee75-4c3b-b945-fc77480d5bbc\") " pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:15.240672 master-0 kubenswrapper[26425]: I0217 15:42:15.240444 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b60361e4-a40a-407d-a481-e6549424e165-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-c9ckb\" (UID: \"b60361e4-a40a-407d-a481-e6549424e165\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb"
Feb 17 15:42:15.240672 master-0 kubenswrapper[26425]: I0217 15:42:15.240514 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksxqf\" (UniqueName: \"kubernetes.io/projected/28cff934-38e3-467a-9139-9a24c16b96d2-kube-api-access-ksxqf\") pod \"nmstate-webhook-866bcb46dc-4q7kf\" (UID: \"28cff934-38e3-467a-9139-9a24c16b96d2\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf"
Feb 17 15:42:15.240672 master-0 kubenswrapper[26425]: I0217 15:42:15.240610 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a912bfc9-ee75-4c3b-b945-fc77480d5bbc-dbus-socket\") pod \"nmstate-handler-44nvt\" (UID: \"a912bfc9-ee75-4c3b-b945-fc77480d5bbc\") " pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:15.241135 master-0 kubenswrapper[26425]: I0217 15:42:15.240727 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/28cff934-38e3-467a-9139-9a24c16b96d2-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-4q7kf\" (UID: \"28cff934-38e3-467a-9139-9a24c16b96d2\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf"
Feb 17 15:42:15.241135 master-0 kubenswrapper[26425]: I0217 15:42:15.240812 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzb6c\" (UniqueName: \"kubernetes.io/projected/b60361e4-a40a-407d-a481-e6549424e165-kube-api-access-vzb6c\") pod \"nmstate-console-plugin-5c78fc5d65-c9ckb\" (UID: \"b60361e4-a40a-407d-a481-e6549424e165\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb"
Feb 17 15:42:15.241135 master-0 kubenswrapper[26425]: I0217 15:42:15.240878 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnchg\" (UniqueName: \"kubernetes.io/projected/3bee787b-9b99-4abc-bc71-c1104a90c3f0-kube-api-access-xnchg\") pod \"nmstate-metrics-58c85c668d-xtbrb\" (UID: \"3bee787b-9b99-4abc-bc71-c1104a90c3f0\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-xtbrb"
Feb 17 15:42:15.241791 master-0 kubenswrapper[26425]: E0217 15:42:15.241441 26425 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Feb 17 15:42:15.241791 master-0 kubenswrapper[26425]: E0217 15:42:15.241516 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28cff934-38e3-467a-9139-9a24c16b96d2-tls-key-pair podName:28cff934-38e3-467a-9139-9a24c16b96d2 nodeName:}" failed. No retries permitted until 2026-02-17 15:42:15.741498976 +0000 UTC m=+1597.633222794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/28cff934-38e3-467a-9139-9a24c16b96d2-tls-key-pair") pod "nmstate-webhook-866bcb46dc-4q7kf" (UID: "28cff934-38e3-467a-9139-9a24c16b96d2") : secret "openshift-nmstate-webhook" not found
Feb 17 15:42:15.259866 master-0 kubenswrapper[26425]: I0217 15:42:15.259825 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksxqf\" (UniqueName: \"kubernetes.io/projected/28cff934-38e3-467a-9139-9a24c16b96d2-kube-api-access-ksxqf\") pod \"nmstate-webhook-866bcb46dc-4q7kf\" (UID: \"28cff934-38e3-467a-9139-9a24c16b96d2\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf"
Feb 17 15:42:15.269918 master-0 kubenswrapper[26425]: I0217 15:42:15.269882 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnchg\" (UniqueName: \"kubernetes.io/projected/3bee787b-9b99-4abc-bc71-c1104a90c3f0-kube-api-access-xnchg\") pod \"nmstate-metrics-58c85c668d-xtbrb\" (UID: \"3bee787b-9b99-4abc-bc71-c1104a90c3f0\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-xtbrb"
Feb 17 15:42:15.335632 master-0 kubenswrapper[26425]: I0217 15:42:15.335492 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-8w79x" event={"ID":"c38bfe84-f8b8-42e0-b46e-a4f03db76894","Type":"ContainerStarted","Data":"9a1e2a8e5cc9886bae22cd2581a793b88e9cb5614554466b6001011a33f6c503"}
Feb 17 15:42:15.335632 master-0 kubenswrapper[26425]: I0217 15:42:15.335549 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-8w79x" event={"ID":"c38bfe84-f8b8-42e0-b46e-a4f03db76894","Type":"ContainerStarted","Data":"a12b18475a6f1a71809738c50c01a931a0b0a232e795bdbfcc1b3fefd94b1bcd"}
Feb 17 15:42:15.342138 master-0 kubenswrapper[26425]: I0217 15:42:15.342088 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b60361e4-a40a-407d-a481-e6549424e165-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-c9ckb\" (UID: \"b60361e4-a40a-407d-a481-e6549424e165\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb"
Feb 17 15:42:15.342138 master-0 kubenswrapper[26425]: I0217 15:42:15.342131 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a912bfc9-ee75-4c3b-b945-fc77480d5bbc-nmstate-lock\") pod \"nmstate-handler-44nvt\" (UID: \"a912bfc9-ee75-4c3b-b945-fc77480d5bbc\") " pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:15.342472 master-0 kubenswrapper[26425]: I0217 15:42:15.342158 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a912bfc9-ee75-4c3b-b945-fc77480d5bbc-ovs-socket\") pod \"nmstate-handler-44nvt\" (UID: \"a912bfc9-ee75-4c3b-b945-fc77480d5bbc\") " pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:15.342472 master-0 kubenswrapper[26425]: I0217 15:42:15.342436 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b60361e4-a40a-407d-a481-e6549424e165-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-c9ckb\" (UID: \"b60361e4-a40a-407d-a481-e6549424e165\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb"
Feb 17 15:42:15.349639 master-0 kubenswrapper[26425]: I0217 15:42:15.343679 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a912bfc9-ee75-4c3b-b945-fc77480d5bbc-ovs-socket\") pod \"nmstate-handler-44nvt\" (UID: \"a912bfc9-ee75-4c3b-b945-fc77480d5bbc\") " pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:15.349639 master-0 kubenswrapper[26425]: I0217 15:42:15.343839 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a912bfc9-ee75-4c3b-b945-fc77480d5bbc-dbus-socket\") pod \"nmstate-handler-44nvt\" (UID: \"a912bfc9-ee75-4c3b-b945-fc77480d5bbc\") " pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:15.349639 master-0 kubenswrapper[26425]: I0217 15:42:15.344003 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzb6c\" (UniqueName: \"kubernetes.io/projected/b60361e4-a40a-407d-a481-e6549424e165-kube-api-access-vzb6c\") pod \"nmstate-console-plugin-5c78fc5d65-c9ckb\" (UID: \"b60361e4-a40a-407d-a481-e6549424e165\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb"
Feb 17 15:42:15.349639 master-0 kubenswrapper[26425]: I0217 15:42:15.344331 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrr7\" (UniqueName: \"kubernetes.io/projected/a912bfc9-ee75-4c3b-b945-fc77480d5bbc-kube-api-access-ncrr7\") pod \"nmstate-handler-44nvt\" (UID: \"a912bfc9-ee75-4c3b-b945-fc77480d5bbc\") " pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:15.349639 master-0 kubenswrapper[26425]: I0217 15:42:15.346549 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b60361e4-a40a-407d-a481-e6549424e165-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-c9ckb\" (UID: \"b60361e4-a40a-407d-a481-e6549424e165\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb"
Feb 17 15:42:15.349639 master-0 kubenswrapper[26425]: I0217 15:42:15.346610 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a912bfc9-ee75-4c3b-b945-fc77480d5bbc-nmstate-lock\") pod \"nmstate-handler-44nvt\" (UID: \"a912bfc9-ee75-4c3b-b945-fc77480d5bbc\") " pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:15.349639 master-0 kubenswrapper[26425]: I0217 15:42:15.346682 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a912bfc9-ee75-4c3b-b945-fc77480d5bbc-dbus-socket\") pod \"nmstate-handler-44nvt\" (UID: \"a912bfc9-ee75-4c3b-b945-fc77480d5bbc\") " pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:15.349639 master-0 kubenswrapper[26425]: I0217 15:42:15.347050 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b60361e4-a40a-407d-a481-e6549424e165-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-c9ckb\" (UID: \"b60361e4-a40a-407d-a481-e6549424e165\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb"
Feb 17 15:42:15.363093 master-0 kubenswrapper[26425]: I0217 15:42:15.362580 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncrr7\" (UniqueName: \"kubernetes.io/projected/a912bfc9-ee75-4c3b-b945-fc77480d5bbc-kube-api-access-ncrr7\") pod \"nmstate-handler-44nvt\" (UID: \"a912bfc9-ee75-4c3b-b945-fc77480d5bbc\") " pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:15.368734 master-0 kubenswrapper[26425]: I0217 15:42:15.366607 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzb6c\" (UniqueName: \"kubernetes.io/projected/b60361e4-a40a-407d-a481-e6549424e165-kube-api-access-vzb6c\") pod \"nmstate-console-plugin-5c78fc5d65-c9ckb\" (UID: \"b60361e4-a40a-407d-a481-e6549424e165\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb"
Feb 17 15:42:15.368734 master-0 kubenswrapper[26425]: I0217 15:42:15.368021 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xtbrb"
Feb 17 15:42:15.414601 master-0 kubenswrapper[26425]: I0217 15:42:15.414531 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5995fb765-xddwx"]
Feb 17 15:42:15.415767 master-0 kubenswrapper[26425]: I0217 15:42:15.415731 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.421909 master-0 kubenswrapper[26425]: I0217 15:42:15.421855 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5995fb765-xddwx"]
Feb 17 15:42:15.443772 master-0 kubenswrapper[26425]: I0217 15:42:15.443367 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:15.445725 master-0 kubenswrapper[26425]: I0217 15:42:15.445686 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv5rc\" (UniqueName: \"kubernetes.io/projected/3382b17e-a7f8-4a5e-af57-a54310d7044b-kube-api-access-wv5rc\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.445809 master-0 kubenswrapper[26425]: I0217 15:42:15.445733 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3382b17e-a7f8-4a5e-af57-a54310d7044b-console-oauth-config\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.445865 master-0 kubenswrapper[26425]: I0217 15:42:15.445845 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3382b17e-a7f8-4a5e-af57-a54310d7044b-trusted-ca-bundle\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.445921 master-0 kubenswrapper[26425]: I0217 15:42:15.445867 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3382b17e-a7f8-4a5e-af57-a54310d7044b-oauth-serving-cert\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.445921 master-0 kubenswrapper[26425]: I0217 15:42:15.445900 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3382b17e-a7f8-4a5e-af57-a54310d7044b-console-config\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.446018 master-0 kubenswrapper[26425]: I0217 15:42:15.445936 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3382b17e-a7f8-4a5e-af57-a54310d7044b-console-serving-cert\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.446018 master-0 kubenswrapper[26425]: I0217 15:42:15.445993 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3382b17e-a7f8-4a5e-af57-a54310d7044b-service-ca\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.539345 master-0 kubenswrapper[26425]: I0217 15:42:15.539276 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb"
Feb 17 15:42:15.550333 master-0 kubenswrapper[26425]: I0217 15:42:15.548612 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3382b17e-a7f8-4a5e-af57-a54310d7044b-console-config\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.550333 master-0 kubenswrapper[26425]: I0217 15:42:15.548691 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3382b17e-a7f8-4a5e-af57-a54310d7044b-console-serving-cert\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.550333 master-0 kubenswrapper[26425]: I0217 15:42:15.548716 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3382b17e-a7f8-4a5e-af57-a54310d7044b-service-ca\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.550333 master-0 kubenswrapper[26425]: I0217 15:42:15.548772 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv5rc\" (UniqueName: \"kubernetes.io/projected/3382b17e-a7f8-4a5e-af57-a54310d7044b-kube-api-access-wv5rc\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.550333 master-0 kubenswrapper[26425]: I0217 15:42:15.548798 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3382b17e-a7f8-4a5e-af57-a54310d7044b-console-oauth-config\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.550333 master-0 kubenswrapper[26425]: I0217 15:42:15.548897 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3382b17e-a7f8-4a5e-af57-a54310d7044b-trusted-ca-bundle\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.550333 master-0 kubenswrapper[26425]: I0217 15:42:15.548918 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3382b17e-a7f8-4a5e-af57-a54310d7044b-oauth-serving-cert\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.550333 master-0 kubenswrapper[26425]: I0217 15:42:15.549767 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3382b17e-a7f8-4a5e-af57-a54310d7044b-oauth-serving-cert\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.550867 master-0 kubenswrapper[26425]: I0217 15:42:15.550778 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3382b17e-a7f8-4a5e-af57-a54310d7044b-service-ca\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:15.551092 master-0 kubenswrapper[26425]: I0217 15:42:15.551054 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3382b17e-a7f8-4a5e-af57-a54310d7044b-console-config\") pod
\"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx" Feb 17 15:42:15.552008 master-0 kubenswrapper[26425]: I0217 15:42:15.551909 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3382b17e-a7f8-4a5e-af57-a54310d7044b-trusted-ca-bundle\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx" Feb 17 15:42:15.554775 master-0 kubenswrapper[26425]: I0217 15:42:15.554120 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3382b17e-a7f8-4a5e-af57-a54310d7044b-console-serving-cert\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx" Feb 17 15:42:15.555154 master-0 kubenswrapper[26425]: I0217 15:42:15.555094 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3382b17e-a7f8-4a5e-af57-a54310d7044b-console-oauth-config\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx" Feb 17 15:42:15.569177 master-0 kubenswrapper[26425]: I0217 15:42:15.569134 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv5rc\" (UniqueName: \"kubernetes.io/projected/3382b17e-a7f8-4a5e-af57-a54310d7044b-kube-api-access-wv5rc\") pod \"console-5995fb765-xddwx\" (UID: \"3382b17e-a7f8-4a5e-af57-a54310d7044b\") " pod="openshift-console/console-5995fb765-xddwx" Feb 17 15:42:15.752550 master-0 kubenswrapper[26425]: I0217 15:42:15.752488 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/28cff934-38e3-467a-9139-9a24c16b96d2-tls-key-pair\") 
pod \"nmstate-webhook-866bcb46dc-4q7kf\" (UID: \"28cff934-38e3-467a-9139-9a24c16b96d2\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf" Feb 17 15:42:15.762200 master-0 kubenswrapper[26425]: I0217 15:42:15.762129 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/28cff934-38e3-467a-9139-9a24c16b96d2-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-4q7kf\" (UID: \"28cff934-38e3-467a-9139-9a24c16b96d2\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf" Feb 17 15:42:15.773136 master-0 kubenswrapper[26425]: I0217 15:42:15.773095 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5995fb765-xddwx" Feb 17 15:42:15.858771 master-0 kubenswrapper[26425]: I0217 15:42:15.858718 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-xtbrb"] Feb 17 15:42:15.860581 master-0 kubenswrapper[26425]: W0217 15:42:15.859875 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bee787b_9b99_4abc_bc71_c1104a90c3f0.slice/crio-e82a1babdb63cf00e8aa6406b9fb5d97a0c0db967ef6361f7132704f9e09c688 WatchSource:0}: Error finding container e82a1babdb63cf00e8aa6406b9fb5d97a0c0db967ef6361f7132704f9e09c688: Status 404 returned error can't find the container with id e82a1babdb63cf00e8aa6406b9fb5d97a0c0db967ef6361f7132704f9e09c688 Feb 17 15:42:15.968965 master-0 kubenswrapper[26425]: I0217 15:42:15.967592 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb"] Feb 17 15:42:15.972018 master-0 kubenswrapper[26425]: W0217 15:42:15.971963 26425 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb60361e4_a40a_407d_a481_e6549424e165.slice/crio-446de6c1db809016c6e3bec61feb8df30ea0a3657b78c22292f743494da8ec23 WatchSource:0}: Error finding container 446de6c1db809016c6e3bec61feb8df30ea0a3657b78c22292f743494da8ec23: Status 404 returned error can't find the container with id 446de6c1db809016c6e3bec61feb8df30ea0a3657b78c22292f743494da8ec23 Feb 17 15:42:16.019318 master-0 kubenswrapper[26425]: I0217 15:42:16.019223 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf" Feb 17 15:42:16.233300 master-0 kubenswrapper[26425]: I0217 15:42:16.230737 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5995fb765-xddwx"] Feb 17 15:42:16.238503 master-0 kubenswrapper[26425]: W0217 15:42:16.236420 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3382b17e_a7f8_4a5e_af57_a54310d7044b.slice/crio-944d7c622d1a18bcac971bc3f19ae6e5c4cb5b42b01a838253762213dac584fe WatchSource:0}: Error finding container 944d7c622d1a18bcac971bc3f19ae6e5c4cb5b42b01a838253762213dac584fe: Status 404 returned error can't find the container with id 944d7c622d1a18bcac971bc3f19ae6e5c4cb5b42b01a838253762213dac584fe Feb 17 15:42:16.345925 master-0 kubenswrapper[26425]: I0217 15:42:16.345875 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-44nvt" event={"ID":"a912bfc9-ee75-4c3b-b945-fc77480d5bbc","Type":"ContainerStarted","Data":"2ce82686ca9a228eeba18ca4f43210b1de24932d2409d5ed45ed08fd39cac057"} Feb 17 15:42:16.347628 master-0 kubenswrapper[26425]: I0217 15:42:16.347584 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5995fb765-xddwx" 
event={"ID":"3382b17e-a7f8-4a5e-af57-a54310d7044b","Type":"ContainerStarted","Data":"944d7c622d1a18bcac971bc3f19ae6e5c4cb5b42b01a838253762213dac584fe"} Feb 17 15:42:16.349986 master-0 kubenswrapper[26425]: I0217 15:42:16.349938 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb" event={"ID":"b60361e4-a40a-407d-a481-e6549424e165","Type":"ContainerStarted","Data":"446de6c1db809016c6e3bec61feb8df30ea0a3657b78c22292f743494da8ec23"} Feb 17 15:42:16.351382 master-0 kubenswrapper[26425]: I0217 15:42:16.351348 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xtbrb" event={"ID":"3bee787b-9b99-4abc-bc71-c1104a90c3f0","Type":"ContainerStarted","Data":"e82a1babdb63cf00e8aa6406b9fb5d97a0c0db967ef6361f7132704f9e09c688"} Feb 17 15:42:16.479053 master-0 kubenswrapper[26425]: I0217 15:42:16.479008 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf"] Feb 17 15:42:16.669838 master-0 kubenswrapper[26425]: I0217 15:42:16.669259 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-memberlist\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t" Feb 17 15:42:16.673915 master-0 kubenswrapper[26425]: I0217 15:42:16.673845 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d46593c5-a748-40e4-aac9-e8123c3e024e-memberlist\") pod \"speaker-mj82t\" (UID: \"d46593c5-a748-40e4-aac9-e8123c3e024e\") " pod="metallb-system/speaker-mj82t" Feb 17 15:42:16.837035 master-0 kubenswrapper[26425]: I0217 15:42:16.836916 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-mj82t" Feb 17 15:42:17.080118 master-0 kubenswrapper[26425]: I0217 15:42:17.080051 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:42:17.080305 master-0 kubenswrapper[26425]: E0217 15:42:17.080239 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:42:17.080305 master-0 kubenswrapper[26425]: E0217 15:42:17.080275 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:42:17.080394 master-0 kubenswrapper[26425]: E0217 15:42:17.080334 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:44:19.08031668 +0000 UTC m=+1720.972040498 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:42:17.112542 master-0 kubenswrapper[26425]: W0217 15:42:17.112504 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd46593c5_a748_40e4_aac9_e8123c3e024e.slice/crio-6b60a47e0d1739832f000b3d3fedd64f811e5ba1c6f7515c3176792dac6db2cf WatchSource:0}: Error finding container 6b60a47e0d1739832f000b3d3fedd64f811e5ba1c6f7515c3176792dac6db2cf: Status 404 returned error can't find the container with id 6b60a47e0d1739832f000b3d3fedd64f811e5ba1c6f7515c3176792dac6db2cf Feb 17 15:42:17.366828 master-0 kubenswrapper[26425]: I0217 15:42:17.366774 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mj82t" event={"ID":"d46593c5-a748-40e4-aac9-e8123c3e024e","Type":"ContainerStarted","Data":"6b60a47e0d1739832f000b3d3fedd64f811e5ba1c6f7515c3176792dac6db2cf"} Feb 17 15:42:17.368937 master-0 kubenswrapper[26425]: I0217 15:42:17.368881 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf" event={"ID":"28cff934-38e3-467a-9139-9a24c16b96d2","Type":"ContainerStarted","Data":"ae89e0dd9b34ff315521b76a5ea6b1378c884423b41b84315ba7cd03f3a6a265"} Feb 17 15:42:17.375064 master-0 kubenswrapper[26425]: I0217 15:42:17.375013 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5995fb765-xddwx" event={"ID":"3382b17e-a7f8-4a5e-af57-a54310d7044b","Type":"ContainerStarted","Data":"5c3c70ff7785bba04e6d81233c1fc8e75d42b97d50f5ee829548033ede5a0cc8"} Feb 17 15:42:17.379790 master-0 kubenswrapper[26425]: I0217 15:42:17.379747 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-69bbfbf88f-8w79x" event={"ID":"c38bfe84-f8b8-42e0-b46e-a4f03db76894","Type":"ContainerStarted","Data":"9e92cdf53e9708f3a7bd03192f7d6f404497e852559321684bca685a252c0eed"} Feb 17 15:42:17.379923 master-0 kubenswrapper[26425]: I0217 15:42:17.379905 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-8w79x" Feb 17 15:42:17.396006 master-0 kubenswrapper[26425]: I0217 15:42:17.395933 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5995fb765-xddwx" podStartSLOduration=2.395832278 podStartE2EDuration="2.395832278s" podCreationTimestamp="2026-02-17 15:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:42:17.391654087 +0000 UTC m=+1599.283377935" watchObservedRunningTime="2026-02-17 15:42:17.395832278 +0000 UTC m=+1599.287556096" Feb 17 15:42:17.410409 master-0 kubenswrapper[26425]: I0217 15:42:17.410348 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-8w79x" podStartSLOduration=2.952019406 podStartE2EDuration="5.410330748s" podCreationTimestamp="2026-02-17 15:42:12 +0000 UTC" firstStartedPulling="2026-02-17 15:42:14.654705735 +0000 UTC m=+1596.546429563" lastFinishedPulling="2026-02-17 15:42:17.113017067 +0000 UTC m=+1599.004740905" observedRunningTime="2026-02-17 15:42:17.408813071 +0000 UTC m=+1599.300536909" watchObservedRunningTime="2026-02-17 15:42:17.410330748 +0000 UTC m=+1599.302054556" Feb 17 15:42:18.406141 master-0 kubenswrapper[26425]: I0217 15:42:18.406070 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-mj82t" Feb 17 15:42:18.406141 master-0 kubenswrapper[26425]: I0217 15:42:18.406128 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mj82t" 
event={"ID":"d46593c5-a748-40e4-aac9-e8123c3e024e","Type":"ContainerStarted","Data":"ed8fc34bda67bfd24ab1934e89e80f2494f89e1315b853a1c823d60043a0d778"} Feb 17 15:42:18.406141 master-0 kubenswrapper[26425]: I0217 15:42:18.406147 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mj82t" event={"ID":"d46593c5-a748-40e4-aac9-e8123c3e024e","Type":"ContainerStarted","Data":"55bc0bfafe7c5c7d4c2ad00f12c031c4070d4c613c3c6bb5a9807a463dcd750b"} Feb 17 15:42:18.509085 master-0 kubenswrapper[26425]: I0217 15:42:18.509010 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-mj82t" podStartSLOduration=6.508988906 podStartE2EDuration="6.508988906s" podCreationTimestamp="2026-02-17 15:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:42:18.502896289 +0000 UTC m=+1600.394620137" watchObservedRunningTime="2026-02-17 15:42:18.508988906 +0000 UTC m=+1600.400712724" Feb 17 15:42:21.433335 master-0 kubenswrapper[26425]: I0217 15:42:21.433269 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-44nvt" event={"ID":"a912bfc9-ee75-4c3b-b945-fc77480d5bbc","Type":"ContainerStarted","Data":"7bf143f1a25d0b564d1f05cf27fd6681b12bce1a87f235eb6c624fb0d0f3ac0b"} Feb 17 15:42:21.433733 master-0 kubenswrapper[26425]: I0217 15:42:21.433394 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-44nvt" Feb 17 15:42:21.435863 master-0 kubenswrapper[26425]: I0217 15:42:21.435831 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls" event={"ID":"25008115-e70b-470c-892c-02ce884bb721","Type":"ContainerStarted","Data":"1048c4968804ebe06495b5c8c0b1d4214816e2decbfd14b2c41419c2e05bee1f"} Feb 17 15:42:21.436098 master-0 kubenswrapper[26425]: I0217 15:42:21.436040 26425 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls" Feb 17 15:42:21.443432 master-0 kubenswrapper[26425]: I0217 15:42:21.443255 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf" event={"ID":"28cff934-38e3-467a-9139-9a24c16b96d2","Type":"ContainerStarted","Data":"bf2e48c7800b95eece5355a31e1fbec74f44bd791995ade04ee6f36ecd0f9322"} Feb 17 15:42:21.443432 master-0 kubenswrapper[26425]: I0217 15:42:21.443372 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf" Feb 17 15:42:21.446250 master-0 kubenswrapper[26425]: I0217 15:42:21.446212 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb" event={"ID":"b60361e4-a40a-407d-a481-e6549424e165","Type":"ContainerStarted","Data":"003d2960593d6217684b65ece324314498db144a884372c717352fd2acc02b01"} Feb 17 15:42:21.448562 master-0 kubenswrapper[26425]: I0217 15:42:21.448248 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xtbrb" event={"ID":"3bee787b-9b99-4abc-bc71-c1104a90c3f0","Type":"ContainerStarted","Data":"f504b618a123eb2f86f8651a3780e259a5f7b6a29ba179bd5534b3da3b4ff0e9"} Feb 17 15:42:21.453207 master-0 kubenswrapper[26425]: I0217 15:42:21.453163 26425 generic.go:334] "Generic (PLEG): container finished" podID="acbd5225-9914-47ea-a945-c4e425c734c2" containerID="170e5d509c279c7d60859526a0b47b3bf9daf7a69e1e4ad3a5ca72630c8ba19c" exitCode=0 Feb 17 15:42:21.453310 master-0 kubenswrapper[26425]: I0217 15:42:21.453209 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5g7s" event={"ID":"acbd5225-9914-47ea-a945-c4e425c734c2","Type":"ContainerDied","Data":"170e5d509c279c7d60859526a0b47b3bf9daf7a69e1e4ad3a5ca72630c8ba19c"} Feb 17 15:42:21.473324 master-0 kubenswrapper[26425]: 
I0217 15:42:21.473231 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-44nvt" podStartSLOduration=1.053798418 podStartE2EDuration="6.473207032s" podCreationTimestamp="2026-02-17 15:42:15 +0000 UTC" firstStartedPulling="2026-02-17 15:42:15.476723991 +0000 UTC m=+1597.368447809" lastFinishedPulling="2026-02-17 15:42:20.896132595 +0000 UTC m=+1602.787856423" observedRunningTime="2026-02-17 15:42:21.45690057 +0000 UTC m=+1603.348624408" watchObservedRunningTime="2026-02-17 15:42:21.473207032 +0000 UTC m=+1603.364930870" Feb 17 15:42:21.527839 master-0 kubenswrapper[26425]: I0217 15:42:21.527708 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb" podStartSLOduration=1.59325859 podStartE2EDuration="6.527680604s" podCreationTimestamp="2026-02-17 15:42:15 +0000 UTC" firstStartedPulling="2026-02-17 15:42:15.976663671 +0000 UTC m=+1597.868387489" lastFinishedPulling="2026-02-17 15:42:20.911085675 +0000 UTC m=+1602.802809503" observedRunningTime="2026-02-17 15:42:21.520815418 +0000 UTC m=+1603.412539236" watchObservedRunningTime="2026-02-17 15:42:21.527680604 +0000 UTC m=+1603.419404442" Feb 17 15:42:21.571988 master-0 kubenswrapper[26425]: I0217 15:42:21.571888 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf" podStartSLOduration=2.13308396 podStartE2EDuration="6.571865518s" podCreationTimestamp="2026-02-17 15:42:15 +0000 UTC" firstStartedPulling="2026-02-17 15:42:16.487421441 +0000 UTC m=+1598.379145269" lastFinishedPulling="2026-02-17 15:42:20.926202999 +0000 UTC m=+1602.817926827" observedRunningTime="2026-02-17 15:42:21.559225514 +0000 UTC m=+1603.450949332" watchObservedRunningTime="2026-02-17 15:42:21.571865518 +0000 UTC m=+1603.463589336" Feb 17 15:42:21.614285 master-0 kubenswrapper[26425]: I0217 15:42:21.603605 26425 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls" podStartSLOduration=2.201923779 podStartE2EDuration="9.603580541s" podCreationTimestamp="2026-02-17 15:42:12 +0000 UTC" firstStartedPulling="2026-02-17 15:42:13.608763495 +0000 UTC m=+1595.500487343" lastFinishedPulling="2026-02-17 15:42:21.010420287 +0000 UTC m=+1602.902144105" observedRunningTime="2026-02-17 15:42:21.591228725 +0000 UTC m=+1603.482952553" watchObservedRunningTime="2026-02-17 15:42:21.603580541 +0000 UTC m=+1603.495304359" Feb 17 15:42:22.464553 master-0 kubenswrapper[26425]: I0217 15:42:22.464426 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xtbrb" event={"ID":"3bee787b-9b99-4abc-bc71-c1104a90c3f0","Type":"ContainerStarted","Data":"f42cbb8795c12b77afbd0f8c5c1a648ab187d45f2003ab33f5b00cd15d090e65"} Feb 17 15:42:22.466915 master-0 kubenswrapper[26425]: I0217 15:42:22.466838 26425 generic.go:334] "Generic (PLEG): container finished" podID="acbd5225-9914-47ea-a945-c4e425c734c2" containerID="0c77adbe092746610fd326aba7b6560e4232b6a989ae6bd9663d4247662d79b9" exitCode=0 Feb 17 15:42:22.467092 master-0 kubenswrapper[26425]: I0217 15:42:22.466910 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5g7s" event={"ID":"acbd5225-9914-47ea-a945-c4e425c734c2","Type":"ContainerDied","Data":"0c77adbe092746610fd326aba7b6560e4232b6a989ae6bd9663d4247662d79b9"} Feb 17 15:42:22.496895 master-0 kubenswrapper[26425]: I0217 15:42:22.496804 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xtbrb" podStartSLOduration=3.437683804 podStartE2EDuration="8.496781292s" podCreationTimestamp="2026-02-17 15:42:14 +0000 UTC" firstStartedPulling="2026-02-17 15:42:15.864249053 +0000 UTC m=+1597.755972881" lastFinishedPulling="2026-02-17 15:42:20.923346551 +0000 UTC m=+1602.815070369" 
observedRunningTime="2026-02-17 15:42:22.489561269 +0000 UTC m=+1604.381285107" watchObservedRunningTime="2026-02-17 15:42:22.496781292 +0000 UTC m=+1604.388505110" Feb 17 15:42:23.486220 master-0 kubenswrapper[26425]: I0217 15:42:23.486142 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5g7s" event={"ID":"acbd5225-9914-47ea-a945-c4e425c734c2","Type":"ContainerDied","Data":"6a7394dbddbc9ad16ec79b57fa5f8be7211e7d169182dd64652d218de1e1199e"} Feb 17 15:42:23.487204 master-0 kubenswrapper[26425]: I0217 15:42:23.485769 26425 generic.go:334] "Generic (PLEG): container finished" podID="acbd5225-9914-47ea-a945-c4e425c734c2" containerID="6a7394dbddbc9ad16ec79b57fa5f8be7211e7d169182dd64652d218de1e1199e" exitCode=0 Feb 17 15:42:24.505913 master-0 kubenswrapper[26425]: I0217 15:42:24.505858 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5g7s" event={"ID":"acbd5225-9914-47ea-a945-c4e425c734c2","Type":"ContainerStarted","Data":"18165cfb20986c2bd21da669f25917b5e9f90394bb9fed36a960a7268bfed90a"} Feb 17 15:42:24.505913 master-0 kubenswrapper[26425]: I0217 15:42:24.505905 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5g7s" event={"ID":"acbd5225-9914-47ea-a945-c4e425c734c2","Type":"ContainerStarted","Data":"ee2f571f5750bb20d4a1ab2b1cf14de29a42064dbd4f566da6abfd9c28421917"} Feb 17 15:42:24.505913 master-0 kubenswrapper[26425]: I0217 15:42:24.505915 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5g7s" event={"ID":"acbd5225-9914-47ea-a945-c4e425c734c2","Type":"ContainerStarted","Data":"e42f0f17fb21081d2a533ac86c5c119f9c9d800b98cccdca1328e54cb33485e9"} Feb 17 15:42:24.506608 master-0 kubenswrapper[26425]: I0217 15:42:24.505925 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5g7s" 
event={"ID":"acbd5225-9914-47ea-a945-c4e425c734c2","Type":"ContainerStarted","Data":"071a668f100184c549099ae6cfa9e815b84465ae1eac8910396c42e73cfaaf8f"} Feb 17 15:42:25.527369 master-0 kubenswrapper[26425]: I0217 15:42:25.527309 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5g7s" event={"ID":"acbd5225-9914-47ea-a945-c4e425c734c2","Type":"ContainerStarted","Data":"ba7ff493879ebe9b07f25376c1393dc2fa10d44a01c0137d8889660be690b92d"} Feb 17 15:42:25.528273 master-0 kubenswrapper[26425]: I0217 15:42:25.528239 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-t5g7s" Feb 17 15:42:25.528418 master-0 kubenswrapper[26425]: I0217 15:42:25.528392 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5g7s" event={"ID":"acbd5225-9914-47ea-a945-c4e425c734c2","Type":"ContainerStarted","Data":"d08347762252d813846dbf859336f73af2099c8171db49a8098fde06f859d954"} Feb 17 15:42:25.589317 master-0 kubenswrapper[26425]: I0217 15:42:25.589201 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-t5g7s" podStartSLOduration=6.544934191 podStartE2EDuration="13.589173456s" podCreationTimestamp="2026-02-17 15:42:12 +0000 UTC" firstStartedPulling="2026-02-17 15:42:13.879537336 +0000 UTC m=+1595.771261174" lastFinishedPulling="2026-02-17 15:42:20.923776621 +0000 UTC m=+1602.815500439" observedRunningTime="2026-02-17 15:42:25.580735163 +0000 UTC m=+1607.472459021" watchObservedRunningTime="2026-02-17 15:42:25.589173456 +0000 UTC m=+1607.480897284" Feb 17 15:42:25.773957 master-0 kubenswrapper[26425]: I0217 15:42:25.773895 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5995fb765-xddwx" Feb 17 15:42:25.773957 master-0 kubenswrapper[26425]: I0217 15:42:25.773947 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5995fb765-xddwx" 
Feb 17 15:42:25.783684 master-0 kubenswrapper[26425]: I0217 15:42:25.783538 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:26.548208 master-0 kubenswrapper[26425]: I0217 15:42:26.548120 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5995fb765-xddwx"
Feb 17 15:42:26.701242 master-0 kubenswrapper[26425]: I0217 15:42:26.701175 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f45cc898f-z9tb2"]
Feb 17 15:42:28.748998 master-0 kubenswrapper[26425]: I0217 15:42:28.748914 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-t5g7s"
Feb 17 15:42:28.816794 master-0 kubenswrapper[26425]: I0217 15:42:28.816650 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-t5g7s"
Feb 17 15:42:30.476155 master-0 kubenswrapper[26425]: I0217 15:42:30.476068 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-44nvt"
Feb 17 15:42:33.144690 master-0 kubenswrapper[26425]: I0217 15:42:33.144626 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls"
Feb 17 15:42:33.753905 master-0 kubenswrapper[26425]: I0217 15:42:33.753856 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-t5g7s"
Feb 17 15:42:33.859437 master-0 kubenswrapper[26425]: I0217 15:42:33.859348 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-8w79x"
Feb 17 15:42:36.027881 master-0 kubenswrapper[26425]: I0217 15:42:36.027750 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf"
Feb 17 15:42:36.846707 master-0 kubenswrapper[26425]: I0217 15:42:36.846262 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-mj82t"
Feb 17 15:42:42.778732 master-0 kubenswrapper[26425]: I0217 15:42:42.778673 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-5rvk7"]
Feb 17 15:42:42.780148 master-0 kubenswrapper[26425]: I0217 15:42:42.780109 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.782214 master-0 kubenswrapper[26425]: I0217 15:42:42.782171 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert"
Feb 17 15:42:42.805908 master-0 kubenswrapper[26425]: I0217 15:42:42.805845 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-5rvk7"]
Feb 17 15:42:42.836994 master-0 kubenswrapper[26425]: I0217 15:42:42.836941 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-file-lock-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.836994 master-0 kubenswrapper[26425]: I0217 15:42:42.836986 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-run-udev\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.837252 master-0 kubenswrapper[26425]: I0217 15:42:42.837114 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-pod-volumes-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.837252 master-0 kubenswrapper[26425]: I0217 15:42:42.837138 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-registration-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.837329 master-0 kubenswrapper[26425]: I0217 15:42:42.837237 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-lvmd-config\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.840652 master-0 kubenswrapper[26425]: I0217 15:42:42.840615 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-device-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.840785 master-0 kubenswrapper[26425]: I0217 15:42:42.840673 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-csi-plugin-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.840785 master-0 kubenswrapper[26425]: I0217 15:42:42.840700 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjq4v\" (UniqueName: \"kubernetes.io/projected/9462de64-930e-4553-9dea-d7d7d0b6a1a0-kube-api-access-wjq4v\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.840785 master-0 kubenswrapper[26425]: I0217 15:42:42.840761 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-sys\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.840934 master-0 kubenswrapper[26425]: I0217 15:42:42.840794 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-node-plugin-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.840934 master-0 kubenswrapper[26425]: I0217 15:42:42.840808 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/9462de64-930e-4553-9dea-d7d7d0b6a1a0-metrics-cert\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.942565 master-0 kubenswrapper[26425]: I0217 15:42:42.942518 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-file-lock-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.942565 master-0 kubenswrapper[26425]: I0217 15:42:42.942560 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-run-udev\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.942814 master-0 kubenswrapper[26425]: I0217 15:42:42.942594 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-pod-volumes-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.942814 master-0 kubenswrapper[26425]: I0217 15:42:42.942614 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-registration-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.942917 master-0 kubenswrapper[26425]: I0217 15:42:42.942892 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-lvmd-config\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.942984 master-0 kubenswrapper[26425]: I0217 15:42:42.942964 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-device-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.943031 master-0 kubenswrapper[26425]: I0217 15:42:42.942995 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-csi-plugin-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.943031 master-0 kubenswrapper[26425]: I0217 15:42:42.943018 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjq4v\" (UniqueName: \"kubernetes.io/projected/9462de64-930e-4553-9dea-d7d7d0b6a1a0-kube-api-access-wjq4v\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.943099 master-0 kubenswrapper[26425]: I0217 15:42:42.943056 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-sys\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.943099 master-0 kubenswrapper[26425]: I0217 15:42:42.943074 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-node-plugin-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.943099 master-0 kubenswrapper[26425]: I0217 15:42:42.943097 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/9462de64-930e-4553-9dea-d7d7d0b6a1a0-metrics-cert\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.944197 master-0 kubenswrapper[26425]: I0217 15:42:42.943832 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-lvmd-config\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.945188 master-0 kubenswrapper[26425]: I0217 15:42:42.944071 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-sys\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.945257 master-0 kubenswrapper[26425]: I0217 15:42:42.944113 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-file-lock-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.945306 master-0 kubenswrapper[26425]: I0217 15:42:42.944196 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-registration-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.945306 master-0 kubenswrapper[26425]: I0217 15:42:42.944422 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-csi-plugin-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.945370 master-0 kubenswrapper[26425]: I0217 15:42:42.944443 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-node-plugin-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.945504 master-0 kubenswrapper[26425]: I0217 15:42:42.944499 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-pod-volumes-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.945552 master-0 kubenswrapper[26425]: I0217 15:42:42.944498 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-device-dir\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.945552 master-0 kubenswrapper[26425]: I0217 15:42:42.944512 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/9462de64-930e-4553-9dea-d7d7d0b6a1a0-run-udev\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.949611 master-0 kubenswrapper[26425]: I0217 15:42:42.949574 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/9462de64-930e-4553-9dea-d7d7d0b6a1a0-metrics-cert\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:42.959016 master-0 kubenswrapper[26425]: I0217 15:42:42.958974 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjq4v\" (UniqueName: \"kubernetes.io/projected/9462de64-930e-4553-9dea-d7d7d0b6a1a0-kube-api-access-wjq4v\") pod \"vg-manager-5rvk7\" (UID: \"9462de64-930e-4553-9dea-d7d7d0b6a1a0\") " pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:43.105041 master-0 kubenswrapper[26425]: I0217 15:42:43.104859 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:43.660927 master-0 kubenswrapper[26425]: I0217 15:42:43.659829 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-5rvk7"]
Feb 17 15:42:43.808618 master-0 kubenswrapper[26425]: I0217 15:42:43.808564 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-5rvk7" event={"ID":"9462de64-930e-4553-9dea-d7d7d0b6a1a0","Type":"ContainerStarted","Data":"2dd16c8f0255ec67aaca9e9f172fce87cf28bd838065b47c44e8d1bb95cc8839"}
Feb 17 15:42:44.830415 master-0 kubenswrapper[26425]: I0217 15:42:44.830299 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-5rvk7" event={"ID":"9462de64-930e-4553-9dea-d7d7d0b6a1a0","Type":"ContainerStarted","Data":"1452be5c8277caccebb5f8a8d499400344c7f718459146fc857178d24cd9e32d"}
Feb 17 15:42:44.876771 master-0 kubenswrapper[26425]: I0217 15:42:44.876624 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-5rvk7" podStartSLOduration=2.876597909 podStartE2EDuration="2.876597909s" podCreationTimestamp="2026-02-17 15:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:42:44.864171021 +0000 UTC m=+1626.755894859" watchObservedRunningTime="2026-02-17 15:42:44.876597909 +0000 UTC m=+1626.768321757"
Feb 17 15:42:45.840056 master-0 kubenswrapper[26425]: I0217 15:42:45.839941 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-5rvk7_9462de64-930e-4553-9dea-d7d7d0b6a1a0/vg-manager/0.log"
Feb 17 15:42:45.840056 master-0 kubenswrapper[26425]: I0217 15:42:45.840020 26425 generic.go:334] "Generic (PLEG): container finished" podID="9462de64-930e-4553-9dea-d7d7d0b6a1a0" containerID="1452be5c8277caccebb5f8a8d499400344c7f718459146fc857178d24cd9e32d" exitCode=1
Feb 17 15:42:45.840056 master-0 kubenswrapper[26425]: I0217 15:42:45.840057 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-5rvk7" event={"ID":"9462de64-930e-4553-9dea-d7d7d0b6a1a0","Type":"ContainerDied","Data":"1452be5c8277caccebb5f8a8d499400344c7f718459146fc857178d24cd9e32d"}
Feb 17 15:42:45.841042 master-0 kubenswrapper[26425]: I0217 15:42:45.840640 26425 scope.go:117] "RemoveContainer" containerID="1452be5c8277caccebb5f8a8d499400344c7f718459146fc857178d24cd9e32d"
Feb 17 15:42:46.284317 master-0 kubenswrapper[26425]: I0217 15:42:46.284167 26425 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock"
Feb 17 15:42:46.469420 master-0 kubenswrapper[26425]: I0217 15:42:46.468607 26425 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-02-17T15:42:46.284213379Z","Handler":null,"Name":""}
Feb 17 15:42:46.472935 master-0 kubenswrapper[26425]: I0217 15:42:46.472874 26425 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Feb 17 15:42:46.473141 master-0 kubenswrapper[26425]: I0217 15:42:46.472959 26425 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Feb 17 15:42:46.859326 master-0 kubenswrapper[26425]: I0217 15:42:46.859244 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-5rvk7_9462de64-930e-4553-9dea-d7d7d0b6a1a0/vg-manager/0.log"
Feb 17 15:42:46.859326 master-0 kubenswrapper[26425]: I0217 15:42:46.859332 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-5rvk7" event={"ID":"9462de64-930e-4553-9dea-d7d7d0b6a1a0","Type":"ContainerStarted","Data":"baabc04f8a4efcd489d515d0c573006be9a43d2fab5988e7e999d20665708d15"}
Feb 17 15:42:46.914399 master-0 kubenswrapper[26425]: I0217 15:42:46.914325 26425 scope.go:117] "RemoveContainer" containerID="a1bf1a7e1900bf2718fe7ec35df9cdfd995d49924e5c050fc18a197ec60d89c3"
Feb 17 15:42:49.494931 master-0 kubenswrapper[26425]: I0217 15:42:49.494860 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gzfb5"]
Feb 17 15:42:49.496172 master-0 kubenswrapper[26425]: I0217 15:42:49.496146 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gzfb5"
Feb 17 15:42:49.497876 master-0 kubenswrapper[26425]: I0217 15:42:49.497822 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 17 15:42:49.499214 master-0 kubenswrapper[26425]: I0217 15:42:49.499185 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 17 15:42:49.507268 master-0 kubenswrapper[26425]: I0217 15:42:49.506814 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gzfb5"]
Feb 17 15:42:49.595663 master-0 kubenswrapper[26425]: I0217 15:42:49.595582 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-462z7\" (UniqueName: \"kubernetes.io/projected/ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4-kube-api-access-462z7\") pod \"openstack-operator-index-gzfb5\" (UID: \"ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4\") " pod="openstack-operators/openstack-operator-index-gzfb5"
Feb 17 15:42:49.699251 master-0 kubenswrapper[26425]: I0217 15:42:49.699125 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-462z7\" (UniqueName: \"kubernetes.io/projected/ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4-kube-api-access-462z7\") pod \"openstack-operator-index-gzfb5\" (UID: \"ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4\") " pod="openstack-operators/openstack-operator-index-gzfb5"
Feb 17 15:42:49.715772 master-0 kubenswrapper[26425]: I0217 15:42:49.715688 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-462z7\" (UniqueName: \"kubernetes.io/projected/ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4-kube-api-access-462z7\") pod \"openstack-operator-index-gzfb5\" (UID: \"ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4\") " pod="openstack-operators/openstack-operator-index-gzfb5"
Feb 17 15:42:49.827123 master-0 kubenswrapper[26425]: I0217 15:42:49.827008 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gzfb5"
Feb 17 15:42:50.285285 master-0 kubenswrapper[26425]: I0217 15:42:50.285198 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gzfb5"]
Feb 17 15:42:50.295614 master-0 kubenswrapper[26425]: W0217 15:42:50.295541 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea23c11f_d8c5_4658_bd2a_d5f002ccb8d4.slice/crio-f02c56a0dfcf6f6fe53166db49d4a824e81af95c2a3c0f3ce6dcdf5739a9f11e WatchSource:0}: Error finding container f02c56a0dfcf6f6fe53166db49d4a824e81af95c2a3c0f3ce6dcdf5739a9f11e: Status 404 returned error can't find the container with id f02c56a0dfcf6f6fe53166db49d4a824e81af95c2a3c0f3ce6dcdf5739a9f11e
Feb 17 15:42:50.907376 master-0 kubenswrapper[26425]: I0217 15:42:50.907262 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gzfb5" event={"ID":"ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4","Type":"ContainerStarted","Data":"f02c56a0dfcf6f6fe53166db49d4a824e81af95c2a3c0f3ce6dcdf5739a9f11e"}
Feb 17 15:42:51.767577 master-0 kubenswrapper[26425]: I0217 15:42:51.767402 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6f45cc898f-z9tb2" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console" containerID="cri-o://0474b8136e589e950dfdf97972c8099e9d1031f92d766013923cc056ae834926" gracePeriod=15
Feb 17 15:42:51.921028 master-0 kubenswrapper[26425]: I0217 15:42:51.920964 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f45cc898f-z9tb2_a38fb686-debe-482b-ae85-3172fd731fba/console/0.log"
Feb 17 15:42:51.921701 master-0 kubenswrapper[26425]: I0217 15:42:51.921069 26425 generic.go:334] "Generic (PLEG): container finished" podID="a38fb686-debe-482b-ae85-3172fd731fba" containerID="0474b8136e589e950dfdf97972c8099e9d1031f92d766013923cc056ae834926" exitCode=2
Feb 17 15:42:51.921701 master-0 kubenswrapper[26425]: I0217 15:42:51.921124 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f45cc898f-z9tb2" event={"ID":"a38fb686-debe-482b-ae85-3172fd731fba","Type":"ContainerDied","Data":"0474b8136e589e950dfdf97972c8099e9d1031f92d766013923cc056ae834926"}
Feb 17 15:42:52.408548 master-0 kubenswrapper[26425]: I0217 15:42:52.408088 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f45cc898f-z9tb2_a38fb686-debe-482b-ae85-3172fd731fba/console/0.log"
Feb 17 15:42:52.408548 master-0 kubenswrapper[26425]: I0217 15:42:52.408162 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:42:52.559285 master-0 kubenswrapper[26425]: I0217 15:42:52.559196 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a38fb686-debe-482b-ae85-3172fd731fba-console-oauth-config\") pod \"a38fb686-debe-482b-ae85-3172fd731fba\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") "
Feb 17 15:42:52.560269 master-0 kubenswrapper[26425]: I0217 15:42:52.559435 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54npd\" (UniqueName: \"kubernetes.io/projected/a38fb686-debe-482b-ae85-3172fd731fba-kube-api-access-54npd\") pod \"a38fb686-debe-482b-ae85-3172fd731fba\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") "
Feb 17 15:42:52.560269 master-0 kubenswrapper[26425]: I0217 15:42:52.559543 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a38fb686-debe-482b-ae85-3172fd731fba-console-serving-cert\") pod \"a38fb686-debe-482b-ae85-3172fd731fba\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") "
Feb 17 15:42:52.560269 master-0 kubenswrapper[26425]: I0217 15:42:52.559602 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-trusted-ca-bundle\") pod \"a38fb686-debe-482b-ae85-3172fd731fba\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") "
Feb 17 15:42:52.560269 master-0 kubenswrapper[26425]: I0217 15:42:52.559702 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-service-ca\") pod \"a38fb686-debe-482b-ae85-3172fd731fba\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") "
Feb 17 15:42:52.560269 master-0 kubenswrapper[26425]: I0217 15:42:52.559745 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-oauth-serving-cert\") pod \"a38fb686-debe-482b-ae85-3172fd731fba\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") "
Feb 17 15:42:52.561823 master-0 kubenswrapper[26425]: I0217 15:42:52.561172 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-service-ca" (OuterVolumeSpecName: "service-ca") pod "a38fb686-debe-482b-ae85-3172fd731fba" (UID: "a38fb686-debe-482b-ae85-3172fd731fba"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:42:52.561823 master-0 kubenswrapper[26425]: I0217 15:42:52.561289 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a38fb686-debe-482b-ae85-3172fd731fba" (UID: "a38fb686-debe-482b-ae85-3172fd731fba"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:42:52.561823 master-0 kubenswrapper[26425]: I0217 15:42:52.561321 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a38fb686-debe-482b-ae85-3172fd731fba" (UID: "a38fb686-debe-482b-ae85-3172fd731fba"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:42:52.561823 master-0 kubenswrapper[26425]: I0217 15:42:52.561353 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-console-config" (OuterVolumeSpecName: "console-config") pod "a38fb686-debe-482b-ae85-3172fd731fba" (UID: "a38fb686-debe-482b-ae85-3172fd731fba"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:42:52.561823 master-0 kubenswrapper[26425]: I0217 15:42:52.561439 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-console-config\") pod \"a38fb686-debe-482b-ae85-3172fd731fba\" (UID: \"a38fb686-debe-482b-ae85-3172fd731fba\") "
Feb 17 15:42:52.562775 master-0 kubenswrapper[26425]: I0217 15:42:52.562202 26425 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:42:52.562775 master-0 kubenswrapper[26425]: I0217 15:42:52.562231 26425 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 17 15:42:52.562775 master-0 kubenswrapper[26425]: I0217 15:42:52.562250 26425 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 17 15:42:52.562775 master-0 kubenswrapper[26425]: I0217 15:42:52.562270 26425 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a38fb686-debe-482b-ae85-3172fd731fba-console-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:42:52.563668 master-0 kubenswrapper[26425]: I0217 15:42:52.562868 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a38fb686-debe-482b-ae85-3172fd731fba-kube-api-access-54npd" (OuterVolumeSpecName: "kube-api-access-54npd") pod "a38fb686-debe-482b-ae85-3172fd731fba" (UID: "a38fb686-debe-482b-ae85-3172fd731fba"). InnerVolumeSpecName "kube-api-access-54npd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:42:52.565133 master-0 kubenswrapper[26425]: I0217 15:42:52.565026 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a38fb686-debe-482b-ae85-3172fd731fba-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a38fb686-debe-482b-ae85-3172fd731fba" (UID: "a38fb686-debe-482b-ae85-3172fd731fba"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:42:52.566283 master-0 kubenswrapper[26425]: I0217 15:42:52.566215 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a38fb686-debe-482b-ae85-3172fd731fba-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a38fb686-debe-482b-ae85-3172fd731fba" (UID: "a38fb686-debe-482b-ae85-3172fd731fba"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:42:52.664640 master-0 kubenswrapper[26425]: I0217 15:42:52.664511 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54npd\" (UniqueName: \"kubernetes.io/projected/a38fb686-debe-482b-ae85-3172fd731fba-kube-api-access-54npd\") on node \"master-0\" DevicePath \"\""
Feb 17 15:42:52.664640 master-0 kubenswrapper[26425]: I0217 15:42:52.664603 26425 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a38fb686-debe-482b-ae85-3172fd731fba-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 17 15:42:52.664640 master-0 kubenswrapper[26425]: I0217 15:42:52.664635 26425 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a38fb686-debe-482b-ae85-3172fd731fba-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:42:52.935567 master-0 kubenswrapper[26425]: I0217 15:42:52.935160 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f45cc898f-z9tb2_a38fb686-debe-482b-ae85-3172fd731fba/console/0.log"
Feb 17 15:42:52.935567 master-0 kubenswrapper[26425]: I0217 15:42:52.935290 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f45cc898f-z9tb2" event={"ID":"a38fb686-debe-482b-ae85-3172fd731fba","Type":"ContainerDied","Data":"6f0f6eff922253435e77eabb6457430057d3a48a34d9b1826838d4828bdeab04"}
Feb 17 15:42:52.935567 master-0 kubenswrapper[26425]: I0217 15:42:52.935338 26425 scope.go:117] "RemoveContainer" containerID="0474b8136e589e950dfdf97972c8099e9d1031f92d766013923cc056ae834926"
Feb 17 15:42:52.935567 master-0 kubenswrapper[26425]: I0217 15:42:52.935448 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f45cc898f-z9tb2"
Feb 17 15:42:52.938931 master-0 kubenswrapper[26425]: I0217 15:42:52.938860 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gzfb5" event={"ID":"ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4","Type":"ContainerStarted","Data":"12135314ee1bdf5ed7f7fcf31f1713245c006606e7dd3d9d41f2039676383457"}
Feb 17 15:42:52.987025 master-0 kubenswrapper[26425]: I0217 15:42:52.979743 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gzfb5" podStartSLOduration=2.588818259 podStartE2EDuration="3.979721475s" podCreationTimestamp="2026-02-17 15:42:49 +0000 UTC" firstStartedPulling="2026-02-17 15:42:50.300230985 +0000 UTC m=+1632.191954833" lastFinishedPulling="2026-02-17 15:42:51.691134191 +0000 UTC m=+1633.582858049" observedRunningTime="2026-02-17 15:42:52.968924044 +0000 UTC m=+1634.860647952" watchObservedRunningTime="2026-02-17 15:42:52.979721475 +0000 UTC m=+1634.871445303"
Feb 17 15:42:53.006185 master-0 kubenswrapper[26425]: I0217 15:42:53.006122 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f45cc898f-z9tb2"]
Feb 17 15:42:53.021389 master-0 kubenswrapper[26425]: I0217 15:42:53.021302 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6f45cc898f-z9tb2"]
Feb 17 15:42:53.105040 master-0 kubenswrapper[26425]: I0217 15:42:53.104978 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:53.109147 master-0 kubenswrapper[26425]: I0217 15:42:53.109075 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:53.452022 master-0 kubenswrapper[26425]: I0217 15:42:53.451886 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gzfb5"]
Feb 17 15:42:53.952079 master-0 kubenswrapper[26425]: I0217 15:42:53.951988 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:53.953426 master-0 kubenswrapper[26425]: I0217 15:42:53.953363 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-5rvk7"
Feb 17 15:42:54.058510 master-0 kubenswrapper[26425]: I0217 15:42:54.058187 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-chx5x"]
Feb 17 15:42:54.058915 master-0 kubenswrapper[26425]: E0217 15:42:54.058634 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console"
Feb 17 15:42:54.058915 master-0 kubenswrapper[26425]: I0217 15:42:54.058653 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console"
Feb 17 15:42:54.059128 master-0 kubenswrapper[26425]: I0217 15:42:54.058922 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="a38fb686-debe-482b-ae85-3172fd731fba" containerName="console"
Feb 17 15:42:54.061096 master-0 kubenswrapper[26425]: I0217 15:42:54.059571 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-chx5x"
Feb 17 15:42:54.088365 master-0 kubenswrapper[26425]: I0217 15:42:54.088319 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-chx5x"]
Feb 17 15:42:54.195040 master-0 kubenswrapper[26425]: I0217 15:42:54.194980 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2fzj\" (UniqueName: \"kubernetes.io/projected/973470c7-1ae5-4766-b49f-47cf06828412-kube-api-access-k2fzj\") pod \"openstack-operator-index-chx5x\" (UID: \"973470c7-1ae5-4766-b49f-47cf06828412\") " pod="openstack-operators/openstack-operator-index-chx5x"
Feb 17 15:42:54.298050 master-0 kubenswrapper[26425]: I0217 15:42:54.297871 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2fzj\" (UniqueName: \"kubernetes.io/projected/973470c7-1ae5-4766-b49f-47cf06828412-kube-api-access-k2fzj\") pod \"openstack-operator-index-chx5x\" (UID: \"973470c7-1ae5-4766-b49f-47cf06828412\") " pod="openstack-operators/openstack-operator-index-chx5x"
Feb 17 15:42:54.329740 master-0 kubenswrapper[26425]: I0217 15:42:54.329688 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2fzj\" (UniqueName: \"kubernetes.io/projected/973470c7-1ae5-4766-b49f-47cf06828412-kube-api-access-k2fzj\") pod \"openstack-operator-index-chx5x\" (UID: \"973470c7-1ae5-4766-b49f-47cf06828412\") " pod="openstack-operators/openstack-operator-index-chx5x"
Feb 17 15:42:54.391790 master-0 kubenswrapper[26425]: I0217 15:42:54.391726 26425 util.go:30] "No sandbox for pod can be 
Need to start a new one" pod="openstack-operators/openstack-operator-index-chx5x" Feb 17 15:42:54.420852 master-0 kubenswrapper[26425]: I0217 15:42:54.420788 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a38fb686-debe-482b-ae85-3172fd731fba" path="/var/lib/kubelet/pods/a38fb686-debe-482b-ae85-3172fd731fba/volumes" Feb 17 15:42:54.909119 master-0 kubenswrapper[26425]: I0217 15:42:54.906332 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-chx5x"] Feb 17 15:42:54.909589 master-0 kubenswrapper[26425]: W0217 15:42:54.909318 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod973470c7_1ae5_4766_b49f_47cf06828412.slice/crio-73c9fddacca9a9384d359494a58e465ff009925f1e468f1222b932a9c975f06d WatchSource:0}: Error finding container 73c9fddacca9a9384d359494a58e465ff009925f1e468f1222b932a9c975f06d: Status 404 returned error can't find the container with id 73c9fddacca9a9384d359494a58e465ff009925f1e468f1222b932a9c975f06d Feb 17 15:42:54.965835 master-0 kubenswrapper[26425]: I0217 15:42:54.965734 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-chx5x" event={"ID":"973470c7-1ae5-4766-b49f-47cf06828412","Type":"ContainerStarted","Data":"73c9fddacca9a9384d359494a58e465ff009925f1e468f1222b932a9c975f06d"} Feb 17 15:42:54.966425 master-0 kubenswrapper[26425]: I0217 15:42:54.965914 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-gzfb5" podUID="ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4" containerName="registry-server" containerID="cri-o://12135314ee1bdf5ed7f7fcf31f1713245c006606e7dd3d9d41f2039676383457" gracePeriod=2 Feb 17 15:42:55.588731 master-0 kubenswrapper[26425]: I0217 15:42:55.588662 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gzfb5" Feb 17 15:42:55.728661 master-0 kubenswrapper[26425]: I0217 15:42:55.728491 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-462z7\" (UniqueName: \"kubernetes.io/projected/ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4-kube-api-access-462z7\") pod \"ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4\" (UID: \"ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4\") " Feb 17 15:42:55.733102 master-0 kubenswrapper[26425]: I0217 15:42:55.732980 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4-kube-api-access-462z7" (OuterVolumeSpecName: "kube-api-access-462z7") pod "ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4" (UID: "ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4"). InnerVolumeSpecName "kube-api-access-462z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:42:55.831484 master-0 kubenswrapper[26425]: I0217 15:42:55.830739 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-462z7\" (UniqueName: \"kubernetes.io/projected/ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4-kube-api-access-462z7\") on node \"master-0\" DevicePath \"\"" Feb 17 15:42:55.979774 master-0 kubenswrapper[26425]: I0217 15:42:55.979603 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-chx5x" event={"ID":"973470c7-1ae5-4766-b49f-47cf06828412","Type":"ContainerStarted","Data":"84cc3bc2a368bad78c67755ab82d37bedea256f225267239ba694ff16f561045"} Feb 17 15:42:55.982916 master-0 kubenswrapper[26425]: I0217 15:42:55.982854 26425 generic.go:334] "Generic (PLEG): container finished" podID="ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4" containerID="12135314ee1bdf5ed7f7fcf31f1713245c006606e7dd3d9d41f2039676383457" exitCode=0 Feb 17 15:42:55.983075 master-0 kubenswrapper[26425]: I0217 15:42:55.982921 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-gzfb5" event={"ID":"ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4","Type":"ContainerDied","Data":"12135314ee1bdf5ed7f7fcf31f1713245c006606e7dd3d9d41f2039676383457"} Feb 17 15:42:55.983075 master-0 kubenswrapper[26425]: I0217 15:42:55.982978 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gzfb5" Feb 17 15:42:55.983075 master-0 kubenswrapper[26425]: I0217 15:42:55.983010 26425 scope.go:117] "RemoveContainer" containerID="12135314ee1bdf5ed7f7fcf31f1713245c006606e7dd3d9d41f2039676383457" Feb 17 15:42:55.983347 master-0 kubenswrapper[26425]: I0217 15:42:55.982983 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gzfb5" event={"ID":"ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4","Type":"ContainerDied","Data":"f02c56a0dfcf6f6fe53166db49d4a824e81af95c2a3c0f3ce6dcdf5739a9f11e"} Feb 17 15:42:56.006747 master-0 kubenswrapper[26425]: I0217 15:42:56.006691 26425 scope.go:117] "RemoveContainer" containerID="12135314ee1bdf5ed7f7fcf31f1713245c006606e7dd3d9d41f2039676383457" Feb 17 15:42:56.007244 master-0 kubenswrapper[26425]: E0217 15:42:56.007192 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12135314ee1bdf5ed7f7fcf31f1713245c006606e7dd3d9d41f2039676383457\": container with ID starting with 12135314ee1bdf5ed7f7fcf31f1713245c006606e7dd3d9d41f2039676383457 not found: ID does not exist" containerID="12135314ee1bdf5ed7f7fcf31f1713245c006606e7dd3d9d41f2039676383457" Feb 17 15:42:56.007310 master-0 kubenswrapper[26425]: I0217 15:42:56.007250 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12135314ee1bdf5ed7f7fcf31f1713245c006606e7dd3d9d41f2039676383457"} err="failed to get container status \"12135314ee1bdf5ed7f7fcf31f1713245c006606e7dd3d9d41f2039676383457\": rpc error: code = 
NotFound desc = could not find container \"12135314ee1bdf5ed7f7fcf31f1713245c006606e7dd3d9d41f2039676383457\": container with ID starting with 12135314ee1bdf5ed7f7fcf31f1713245c006606e7dd3d9d41f2039676383457 not found: ID does not exist" Feb 17 15:42:56.028343 master-0 kubenswrapper[26425]: I0217 15:42:56.028208 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-chx5x" podStartSLOduration=1.622829377 podStartE2EDuration="2.028181119s" podCreationTimestamp="2026-02-17 15:42:54 +0000 UTC" firstStartedPulling="2026-02-17 15:42:54.919655533 +0000 UTC m=+1636.811379361" lastFinishedPulling="2026-02-17 15:42:55.325007285 +0000 UTC m=+1637.216731103" observedRunningTime="2026-02-17 15:42:56.015219407 +0000 UTC m=+1637.906943265" watchObservedRunningTime="2026-02-17 15:42:56.028181119 +0000 UTC m=+1637.919904977" Feb 17 15:42:56.051896 master-0 kubenswrapper[26425]: I0217 15:42:56.051831 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gzfb5"] Feb 17 15:42:56.061614 master-0 kubenswrapper[26425]: I0217 15:42:56.061536 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-gzfb5"] Feb 17 15:42:56.413978 master-0 kubenswrapper[26425]: I0217 15:42:56.413860 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4" path="/var/lib/kubelet/pods/ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4/volumes" Feb 17 15:43:04.392231 master-0 kubenswrapper[26425]: I0217 15:43:04.392130 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-chx5x" Feb 17 15:43:04.392231 master-0 kubenswrapper[26425]: I0217 15:43:04.392225 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-chx5x" Feb 17 15:43:04.425616 master-0 
kubenswrapper[26425]: I0217 15:43:04.425555 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-chx5x" Feb 17 15:43:05.158681 master-0 kubenswrapper[26425]: I0217 15:43:05.158615 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-chx5x" Feb 17 15:43:12.566421 master-0 kubenswrapper[26425]: I0217 15:43:12.566343 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p"] Feb 17 15:43:12.567086 master-0 kubenswrapper[26425]: E0217 15:43:12.566709 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4" containerName="registry-server" Feb 17 15:43:12.567086 master-0 kubenswrapper[26425]: I0217 15:43:12.566723 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4" containerName="registry-server" Feb 17 15:43:12.567086 master-0 kubenswrapper[26425]: I0217 15:43:12.566907 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea23c11f-d8c5-4658-bd2a-d5f002ccb8d4" containerName="registry-server" Feb 17 15:43:12.568052 master-0 kubenswrapper[26425]: I0217 15:43:12.568028 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" Feb 17 15:43:12.590300 master-0 kubenswrapper[26425]: I0217 15:43:12.590214 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p"] Feb 17 15:43:12.686655 master-0 kubenswrapper[26425]: I0217 15:43:12.686576 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f70376f-fd21-4366-a475-88ded4ce3e2d-bundle\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p\" (UID: \"9f70376f-fd21-4366-a475-88ded4ce3e2d\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" Feb 17 15:43:12.686866 master-0 kubenswrapper[26425]: I0217 15:43:12.686746 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vxxz\" (UniqueName: \"kubernetes.io/projected/9f70376f-fd21-4366-a475-88ded4ce3e2d-kube-api-access-9vxxz\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p\" (UID: \"9f70376f-fd21-4366-a475-88ded4ce3e2d\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" Feb 17 15:43:12.686866 master-0 kubenswrapper[26425]: I0217 15:43:12.686766 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f70376f-fd21-4366-a475-88ded4ce3e2d-util\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p\" (UID: \"9f70376f-fd21-4366-a475-88ded4ce3e2d\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" Feb 17 15:43:12.788576 master-0 kubenswrapper[26425]: I0217 15:43:12.788440 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/9f70376f-fd21-4366-a475-88ded4ce3e2d-bundle\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p\" (UID: \"9f70376f-fd21-4366-a475-88ded4ce3e2d\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" Feb 17 15:43:12.788905 master-0 kubenswrapper[26425]: I0217 15:43:12.788668 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vxxz\" (UniqueName: \"kubernetes.io/projected/9f70376f-fd21-4366-a475-88ded4ce3e2d-kube-api-access-9vxxz\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p\" (UID: \"9f70376f-fd21-4366-a475-88ded4ce3e2d\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" Feb 17 15:43:12.788905 master-0 kubenswrapper[26425]: I0217 15:43:12.788715 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f70376f-fd21-4366-a475-88ded4ce3e2d-util\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p\" (UID: \"9f70376f-fd21-4366-a475-88ded4ce3e2d\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" Feb 17 15:43:12.789084 master-0 kubenswrapper[26425]: I0217 15:43:12.789029 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f70376f-fd21-4366-a475-88ded4ce3e2d-bundle\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p\" (UID: \"9f70376f-fd21-4366-a475-88ded4ce3e2d\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" Feb 17 15:43:12.789512 master-0 kubenswrapper[26425]: I0217 15:43:12.789480 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f70376f-fd21-4366-a475-88ded4ce3e2d-util\") pod 
\"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p\" (UID: \"9f70376f-fd21-4366-a475-88ded4ce3e2d\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" Feb 17 15:43:12.806066 master-0 kubenswrapper[26425]: I0217 15:43:12.806017 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vxxz\" (UniqueName: \"kubernetes.io/projected/9f70376f-fd21-4366-a475-88ded4ce3e2d-kube-api-access-9vxxz\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p\" (UID: \"9f70376f-fd21-4366-a475-88ded4ce3e2d\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" Feb 17 15:43:12.885428 master-0 kubenswrapper[26425]: I0217 15:43:12.885327 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" Feb 17 15:43:13.405955 master-0 kubenswrapper[26425]: I0217 15:43:13.405709 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p"] Feb 17 15:43:13.408997 master-0 kubenswrapper[26425]: W0217 15:43:13.408938 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f70376f_fd21_4366_a475_88ded4ce3e2d.slice/crio-3d282795e353300407fca0aa303c16de7117333f9f95e460a693673b2e488f51 WatchSource:0}: Error finding container 3d282795e353300407fca0aa303c16de7117333f9f95e460a693673b2e488f51: Status 404 returned error can't find the container with id 3d282795e353300407fca0aa303c16de7117333f9f95e460a693673b2e488f51 Feb 17 15:43:14.225558 master-0 kubenswrapper[26425]: I0217 15:43:14.225493 26425 generic.go:334] "Generic (PLEG): container finished" podID="9f70376f-fd21-4366-a475-88ded4ce3e2d" containerID="b04374905f7480cede40abd8cb1f1000c25e9a00be377bd945ca5bf0e981d964" exitCode=0 Feb 17 
15:43:14.225558 master-0 kubenswrapper[26425]: I0217 15:43:14.225556 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" event={"ID":"9f70376f-fd21-4366-a475-88ded4ce3e2d","Type":"ContainerDied","Data":"b04374905f7480cede40abd8cb1f1000c25e9a00be377bd945ca5bf0e981d964"} Feb 17 15:43:14.226222 master-0 kubenswrapper[26425]: I0217 15:43:14.225590 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" event={"ID":"9f70376f-fd21-4366-a475-88ded4ce3e2d","Type":"ContainerStarted","Data":"3d282795e353300407fca0aa303c16de7117333f9f95e460a693673b2e488f51"} Feb 17 15:43:15.237585 master-0 kubenswrapper[26425]: I0217 15:43:15.237183 26425 generic.go:334] "Generic (PLEG): container finished" podID="9f70376f-fd21-4366-a475-88ded4ce3e2d" containerID="5d368d297f874dc124dde46f319a107ae63a2ced192dbc729210b3e45b10008d" exitCode=0 Feb 17 15:43:15.237585 master-0 kubenswrapper[26425]: I0217 15:43:15.237294 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" event={"ID":"9f70376f-fd21-4366-a475-88ded4ce3e2d","Type":"ContainerDied","Data":"5d368d297f874dc124dde46f319a107ae63a2ced192dbc729210b3e45b10008d"} Feb 17 15:43:15.353266 master-0 kubenswrapper[26425]: E0217 15:43:15.353203 26425 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f70376f_fd21_4366_a475_88ded4ce3e2d.slice/crio-5d368d297f874dc124dde46f319a107ae63a2ced192dbc729210b3e45b10008d.scope\": RecentStats: unable to find data in memory cache]" Feb 17 15:43:16.254570 master-0 kubenswrapper[26425]: I0217 15:43:16.254423 26425 generic.go:334] "Generic (PLEG): container finished" podID="9f70376f-fd21-4366-a475-88ded4ce3e2d" 
containerID="7921fc9811daf5dc8ac787783218e76313f0e35562d345d23eca2af2e2fdb4be" exitCode=0 Feb 17 15:43:16.255447 master-0 kubenswrapper[26425]: I0217 15:43:16.254530 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" event={"ID":"9f70376f-fd21-4366-a475-88ded4ce3e2d","Type":"ContainerDied","Data":"7921fc9811daf5dc8ac787783218e76313f0e35562d345d23eca2af2e2fdb4be"} Feb 17 15:43:17.741295 master-0 kubenswrapper[26425]: I0217 15:43:17.741208 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" Feb 17 15:43:17.897906 master-0 kubenswrapper[26425]: I0217 15:43:17.897794 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vxxz\" (UniqueName: \"kubernetes.io/projected/9f70376f-fd21-4366-a475-88ded4ce3e2d-kube-api-access-9vxxz\") pod \"9f70376f-fd21-4366-a475-88ded4ce3e2d\" (UID: \"9f70376f-fd21-4366-a475-88ded4ce3e2d\") " Feb 17 15:43:17.898522 master-0 kubenswrapper[26425]: I0217 15:43:17.898078 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f70376f-fd21-4366-a475-88ded4ce3e2d-bundle\") pod \"9f70376f-fd21-4366-a475-88ded4ce3e2d\" (UID: \"9f70376f-fd21-4366-a475-88ded4ce3e2d\") " Feb 17 15:43:17.898522 master-0 kubenswrapper[26425]: I0217 15:43:17.898318 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f70376f-fd21-4366-a475-88ded4ce3e2d-util\") pod \"9f70376f-fd21-4366-a475-88ded4ce3e2d\" (UID: \"9f70376f-fd21-4366-a475-88ded4ce3e2d\") " Feb 17 15:43:17.899246 master-0 kubenswrapper[26425]: I0217 15:43:17.899172 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9f70376f-fd21-4366-a475-88ded4ce3e2d-bundle" (OuterVolumeSpecName: "bundle") pod "9f70376f-fd21-4366-a475-88ded4ce3e2d" (UID: "9f70376f-fd21-4366-a475-88ded4ce3e2d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:43:17.904741 master-0 kubenswrapper[26425]: I0217 15:43:17.904670 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f70376f-fd21-4366-a475-88ded4ce3e2d-kube-api-access-9vxxz" (OuterVolumeSpecName: "kube-api-access-9vxxz") pod "9f70376f-fd21-4366-a475-88ded4ce3e2d" (UID: "9f70376f-fd21-4366-a475-88ded4ce3e2d"). InnerVolumeSpecName "kube-api-access-9vxxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:43:17.928998 master-0 kubenswrapper[26425]: I0217 15:43:17.928664 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f70376f-fd21-4366-a475-88ded4ce3e2d-util" (OuterVolumeSpecName: "util") pod "9f70376f-fd21-4366-a475-88ded4ce3e2d" (UID: "9f70376f-fd21-4366-a475-88ded4ce3e2d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:43:18.002058 master-0 kubenswrapper[26425]: I0217 15:43:18.001967 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vxxz\" (UniqueName: \"kubernetes.io/projected/9f70376f-fd21-4366-a475-88ded4ce3e2d-kube-api-access-9vxxz\") on node \"master-0\" DevicePath \"\"" Feb 17 15:43:18.002058 master-0 kubenswrapper[26425]: I0217 15:43:18.002035 26425 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f70376f-fd21-4366-a475-88ded4ce3e2d-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:43:18.002058 master-0 kubenswrapper[26425]: I0217 15:43:18.002058 26425 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f70376f-fd21-4366-a475-88ded4ce3e2d-util\") on node \"master-0\" DevicePath \"\"" Feb 17 15:43:18.287271 master-0 kubenswrapper[26425]: I0217 15:43:18.287035 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" event={"ID":"9f70376f-fd21-4366-a475-88ded4ce3e2d","Type":"ContainerDied","Data":"3d282795e353300407fca0aa303c16de7117333f9f95e460a693673b2e488f51"} Feb 17 15:43:18.287271 master-0 kubenswrapper[26425]: I0217 15:43:18.287103 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d282795e353300407fca0aa303c16de7117333f9f95e460a693673b2e488f51" Feb 17 15:43:18.287271 master-0 kubenswrapper[26425]: I0217 15:43:18.287124 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p" Feb 17 15:43:24.735777 master-0 kubenswrapper[26425]: I0217 15:43:24.735708 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt"] Feb 17 15:43:24.736466 master-0 kubenswrapper[26425]: E0217 15:43:24.736176 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f70376f-fd21-4366-a475-88ded4ce3e2d" containerName="util" Feb 17 15:43:24.736466 master-0 kubenswrapper[26425]: I0217 15:43:24.736194 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f70376f-fd21-4366-a475-88ded4ce3e2d" containerName="util" Feb 17 15:43:24.736466 master-0 kubenswrapper[26425]: E0217 15:43:24.736246 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f70376f-fd21-4366-a475-88ded4ce3e2d" containerName="extract" Feb 17 15:43:24.736466 master-0 kubenswrapper[26425]: I0217 15:43:24.736255 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f70376f-fd21-4366-a475-88ded4ce3e2d" containerName="extract" Feb 17 15:43:24.736466 master-0 kubenswrapper[26425]: E0217 15:43:24.736269 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f70376f-fd21-4366-a475-88ded4ce3e2d" containerName="pull" Feb 17 15:43:24.736466 master-0 kubenswrapper[26425]: I0217 15:43:24.736277 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f70376f-fd21-4366-a475-88ded4ce3e2d" containerName="pull" Feb 17 15:43:24.736691 master-0 kubenswrapper[26425]: I0217 15:43:24.736473 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f70376f-fd21-4366-a475-88ded4ce3e2d" containerName="extract" Feb 17 15:43:24.737259 master-0 kubenswrapper[26425]: I0217 15:43:24.737230 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt" Feb 17 15:43:24.843121 master-0 kubenswrapper[26425]: I0217 15:43:24.843062 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f2vc\" (UniqueName: \"kubernetes.io/projected/a4b761b0-59fc-474c-b1ad-186f84c8e0c2-kube-api-access-4f2vc\") pod \"openstack-operator-controller-init-7f8db498b4-66blt\" (UID: \"a4b761b0-59fc-474c-b1ad-186f84c8e0c2\") " pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt" Feb 17 15:43:24.944826 master-0 kubenswrapper[26425]: I0217 15:43:24.944741 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f2vc\" (UniqueName: \"kubernetes.io/projected/a4b761b0-59fc-474c-b1ad-186f84c8e0c2-kube-api-access-4f2vc\") pod \"openstack-operator-controller-init-7f8db498b4-66blt\" (UID: \"a4b761b0-59fc-474c-b1ad-186f84c8e0c2\") " pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt" Feb 17 15:43:25.012260 master-0 kubenswrapper[26425]: I0217 15:43:25.012098 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt"] Feb 17 15:43:25.029103 master-0 kubenswrapper[26425]: I0217 15:43:25.029040 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f2vc\" (UniqueName: \"kubernetes.io/projected/a4b761b0-59fc-474c-b1ad-186f84c8e0c2-kube-api-access-4f2vc\") pod \"openstack-operator-controller-init-7f8db498b4-66blt\" (UID: \"a4b761b0-59fc-474c-b1ad-186f84c8e0c2\") " pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt" Feb 17 15:43:25.056383 master-0 kubenswrapper[26425]: I0217 15:43:25.056307 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt"
Feb 17 15:43:25.563843 master-0 kubenswrapper[26425]: W0217 15:43:25.563347 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4b761b0_59fc_474c_b1ad_186f84c8e0c2.slice/crio-ef32cf4f135685fe274533ae20f6f253145a5e0b5b39fc033a84cf06b2f0106e WatchSource:0}: Error finding container ef32cf4f135685fe274533ae20f6f253145a5e0b5b39fc033a84cf06b2f0106e: Status 404 returned error can't find the container with id ef32cf4f135685fe274533ae20f6f253145a5e0b5b39fc033a84cf06b2f0106e
Feb 17 15:43:25.566790 master-0 kubenswrapper[26425]: I0217 15:43:25.566702 26425 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 15:43:25.566944 master-0 kubenswrapper[26425]: I0217 15:43:25.566897 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt"]
Feb 17 15:43:26.379050 master-0 kubenswrapper[26425]: I0217 15:43:26.378972 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt" event={"ID":"a4b761b0-59fc-474c-b1ad-186f84c8e0c2","Type":"ContainerStarted","Data":"ef32cf4f135685fe274533ae20f6f253145a5e0b5b39fc033a84cf06b2f0106e"}
Feb 17 15:43:30.435800 master-0 kubenswrapper[26425]: I0217 15:43:30.435378 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt" event={"ID":"a4b761b0-59fc-474c-b1ad-186f84c8e0c2","Type":"ContainerStarted","Data":"de0f8d91cfed45bb84a093ec36571b2cc219fe40bf84eb4cbcabdc6cfbb2fbb4"}
Feb 17 15:43:30.439083 master-0 kubenswrapper[26425]: I0217 15:43:30.435651 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt"
Feb 17 15:43:30.474520 master-0 kubenswrapper[26425]: I0217 15:43:30.471710 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt" podStartSLOduration=2.262923353 podStartE2EDuration="6.471688102s" podCreationTimestamp="2026-02-17 15:43:24 +0000 UTC" firstStartedPulling="2026-02-17 15:43:25.566660796 +0000 UTC m=+1667.458384614" lastFinishedPulling="2026-02-17 15:43:29.775425545 +0000 UTC m=+1671.667149363" observedRunningTime="2026-02-17 15:43:30.469585532 +0000 UTC m=+1672.361309390" watchObservedRunningTime="2026-02-17 15:43:30.471688102 +0000 UTC m=+1672.363411950"
Feb 17 15:43:35.060352 master-0 kubenswrapper[26425]: I0217 15:43:35.060271 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt"
Feb 17 15:43:57.587289 master-0 kubenswrapper[26425]: I0217 15:43:57.586516 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8"]
Feb 17 15:43:57.587951 master-0 kubenswrapper[26425]: I0217 15:43:57.587660 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8"
Feb 17 15:43:57.610499 master-0 kubenswrapper[26425]: I0217 15:43:57.606005 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd"]
Feb 17 15:43:57.610499 master-0 kubenswrapper[26425]: I0217 15:43:57.607111 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd"
Feb 17 15:43:57.634106 master-0 kubenswrapper[26425]: I0217 15:43:57.632976 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8"]
Feb 17 15:43:57.668266 master-0 kubenswrapper[26425]: I0217 15:43:57.668203 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f"]
Feb 17 15:43:57.669608 master-0 kubenswrapper[26425]: I0217 15:43:57.669330 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f"
Feb 17 15:43:57.685707 master-0 kubenswrapper[26425]: I0217 15:43:57.685296 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zlgr\" (UniqueName: \"kubernetes.io/projected/6148874d-2cc5-40f6-9adf-857f5c5a654c-kube-api-access-7zlgr\") pod \"barbican-operator-controller-manager-868647ff47-58dhd\" (UID: \"6148874d-2cc5-40f6-9adf-857f5c5a654c\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd"
Feb 17 15:43:57.685707 master-0 kubenswrapper[26425]: I0217 15:43:57.685360 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7lsp\" (UniqueName: \"kubernetes.io/projected/24902857-8de0-4a77-b2e0-e0d12b8b2f34-kube-api-access-m7lsp\") pod \"cinder-operator-controller-manager-5d946d989d-6mnh8\" (UID: \"24902857-8de0-4a77-b2e0-e0d12b8b2f34\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8"
Feb 17 15:43:57.692597 master-0 kubenswrapper[26425]: I0217 15:43:57.692516 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn"]
Feb 17 15:43:57.693652 master-0 kubenswrapper[26425]: I0217 15:43:57.693616 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn"
Feb 17 15:43:57.700483 master-0 kubenswrapper[26425]: I0217 15:43:57.700417 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd"]
Feb 17 15:43:57.714633 master-0 kubenswrapper[26425]: I0217 15:43:57.714577 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f"]
Feb 17 15:43:57.737501 master-0 kubenswrapper[26425]: I0217 15:43:57.736742 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn"]
Feb 17 15:43:57.760285 master-0 kubenswrapper[26425]: I0217 15:43:57.759898 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp"]
Feb 17 15:43:57.761388 master-0 kubenswrapper[26425]: I0217 15:43:57.761366 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp"
Feb 17 15:43:57.785543 master-0 kubenswrapper[26425]: I0217 15:43:57.780867 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9"]
Feb 17 15:43:57.792200 master-0 kubenswrapper[26425]: I0217 15:43:57.792135 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mqzf\" (UniqueName: \"kubernetes.io/projected/320b61c5-b8bb-4c7c-a14d-77143d9523e6-kube-api-access-4mqzf\") pod \"designate-operator-controller-manager-6d8bf5c495-nn59f\" (UID: \"320b61c5-b8bb-4c7c-a14d-77143d9523e6\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f"
Feb 17 15:43:57.792310 master-0 kubenswrapper[26425]: I0217 15:43:57.792242 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lknqv\" (UniqueName: \"kubernetes.io/projected/5def741b-238b-47f4-a3cf-9dfd57b8b5b9-kube-api-access-lknqv\") pod \"glance-operator-controller-manager-77987464f4-sqmnn\" (UID: \"5def741b-238b-47f4-a3cf-9dfd57b8b5b9\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn"
Feb 17 15:43:57.792482 master-0 kubenswrapper[26425]: I0217 15:43:57.792440 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zlgr\" (UniqueName: \"kubernetes.io/projected/6148874d-2cc5-40f6-9adf-857f5c5a654c-kube-api-access-7zlgr\") pod \"barbican-operator-controller-manager-868647ff47-58dhd\" (UID: \"6148874d-2cc5-40f6-9adf-857f5c5a654c\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd"
Feb 17 15:43:57.792557 master-0 kubenswrapper[26425]: I0217 15:43:57.792526 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7lsp\" (UniqueName: \"kubernetes.io/projected/24902857-8de0-4a77-b2e0-e0d12b8b2f34-kube-api-access-m7lsp\") pod \"cinder-operator-controller-manager-5d946d989d-6mnh8\" (UID: \"24902857-8de0-4a77-b2e0-e0d12b8b2f34\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8"
Feb 17 15:43:57.800289 master-0 kubenswrapper[26425]: I0217 15:43:57.800236 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp"]
Feb 17 15:43:57.800434 master-0 kubenswrapper[26425]: I0217 15:43:57.800339 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9"
Feb 17 15:43:57.800434 master-0 kubenswrapper[26425]: I0217 15:43:57.800401 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9"]
Feb 17 15:43:57.820241 master-0 kubenswrapper[26425]: I0217 15:43:57.820199 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zlgr\" (UniqueName: \"kubernetes.io/projected/6148874d-2cc5-40f6-9adf-857f5c5a654c-kube-api-access-7zlgr\") pod \"barbican-operator-controller-manager-868647ff47-58dhd\" (UID: \"6148874d-2cc5-40f6-9adf-857f5c5a654c\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd"
Feb 17 15:43:57.836033 master-0 kubenswrapper[26425]: I0217 15:43:57.835988 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7lsp\" (UniqueName: \"kubernetes.io/projected/24902857-8de0-4a77-b2e0-e0d12b8b2f34-kube-api-access-m7lsp\") pod \"cinder-operator-controller-manager-5d946d989d-6mnh8\" (UID: \"24902857-8de0-4a77-b2e0-e0d12b8b2f34\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8"
Feb 17 15:43:57.837083 master-0 kubenswrapper[26425]: I0217 15:43:57.837050 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww"]
Feb 17 15:43:57.847286 master-0 kubenswrapper[26425]: I0217 15:43:57.846959 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww"]
Feb 17 15:43:57.847286 master-0 kubenswrapper[26425]: I0217 15:43:57.847051 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww"
Feb 17 15:43:57.849751 master-0 kubenswrapper[26425]: I0217 15:43:57.849705 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 17 15:43:57.863539 master-0 kubenswrapper[26425]: I0217 15:43:57.860175 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9"]
Feb 17 15:43:57.863539 master-0 kubenswrapper[26425]: I0217 15:43:57.861616 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9"
Feb 17 15:43:57.868265 master-0 kubenswrapper[26425]: I0217 15:43:57.868219 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6"]
Feb 17 15:43:57.869685 master-0 kubenswrapper[26425]: I0217 15:43:57.869648 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6"
Feb 17 15:43:57.883390 master-0 kubenswrapper[26425]: I0217 15:43:57.883331 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9"]
Feb 17 15:43:57.895927 master-0 kubenswrapper[26425]: I0217 15:43:57.895876 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6"]
Feb 17 15:43:57.899895 master-0 kubenswrapper[26425]: I0217 15:43:57.898441 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jpql\" (UniqueName: \"kubernetes.io/projected/f46c3852-bd3e-454b-a65c-d1a206a51ed8-kube-api-access-9jpql\") pod \"horizon-operator-controller-manager-5b9b8895d5-2wdk9\" (UID: \"f46c3852-bd3e-454b-a65c-d1a206a51ed8\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9"
Feb 17 15:43:57.899895 master-0 kubenswrapper[26425]: I0217 15:43:57.898534 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mqzf\" (UniqueName: \"kubernetes.io/projected/320b61c5-b8bb-4c7c-a14d-77143d9523e6-kube-api-access-4mqzf\") pod \"designate-operator-controller-manager-6d8bf5c495-nn59f\" (UID: \"320b61c5-b8bb-4c7c-a14d-77143d9523e6\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f"
Feb 17 15:43:57.899895 master-0 kubenswrapper[26425]: I0217 15:43:57.898575 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lknqv\" (UniqueName: \"kubernetes.io/projected/5def741b-238b-47f4-a3cf-9dfd57b8b5b9-kube-api-access-lknqv\") pod \"glance-operator-controller-manager-77987464f4-sqmnn\" (UID: \"5def741b-238b-47f4-a3cf-9dfd57b8b5b9\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn"
Feb 17 15:43:57.899895 master-0 kubenswrapper[26425]: I0217 15:43:57.898610 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk647\" (UniqueName: \"kubernetes.io/projected/2148911f-33d3-45d6-9441-98e6c9c0b0dc-kube-api-access-kk647\") pod \"heat-operator-controller-manager-69f49c598c-ngkpp\" (UID: \"2148911f-33d3-45d6-9441-98e6c9c0b0dc\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp"
Feb 17 15:43:57.901187 master-0 kubenswrapper[26425]: I0217 15:43:57.901157 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p"]
Feb 17 15:43:57.902502 master-0 kubenswrapper[26425]: I0217 15:43:57.902484 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p"
Feb 17 15:43:57.911181 master-0 kubenswrapper[26425]: I0217 15:43:57.910576 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p"]
Feb 17 15:43:57.932975 master-0 kubenswrapper[26425]: I0217 15:43:57.932933 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mqzf\" (UniqueName: \"kubernetes.io/projected/320b61c5-b8bb-4c7c-a14d-77143d9523e6-kube-api-access-4mqzf\") pod \"designate-operator-controller-manager-6d8bf5c495-nn59f\" (UID: \"320b61c5-b8bb-4c7c-a14d-77143d9523e6\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f"
Feb 17 15:43:57.933669 master-0 kubenswrapper[26425]: I0217 15:43:57.933643 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8"
Feb 17 15:43:57.936060 master-0 kubenswrapper[26425]: I0217 15:43:57.935998 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn"]
Feb 17 15:43:57.951491 master-0 kubenswrapper[26425]: I0217 15:43:57.942964 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn"
Feb 17 15:43:57.951491 master-0 kubenswrapper[26425]: I0217 15:43:57.948161 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd"
Feb 17 15:43:58.001439 master-0 kubenswrapper[26425]: I0217 15:43:58.001372 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwhjw\" (UniqueName: \"kubernetes.io/projected/a43562a4-5283-4089-94cc-af78066de5d9-kube-api-access-mwhjw\") pod \"ironic-operator-controller-manager-554564d7fc-x78p9\" (UID: \"a43562a4-5283-4089-94cc-af78066de5d9\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9"
Feb 17 15:43:58.001811 master-0 kubenswrapper[26425]: I0217 15:43:58.001761 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2x4ww\" (UID: \"d6435656-9d1f-4de0-bec9-62942d041759\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww"
Feb 17 15:43:58.002924 master-0 kubenswrapper[26425]: I0217 15:43:58.002887 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jpql\" (UniqueName: \"kubernetes.io/projected/f46c3852-bd3e-454b-a65c-d1a206a51ed8-kube-api-access-9jpql\") pod \"horizon-operator-controller-manager-5b9b8895d5-2wdk9\" (UID: \"f46c3852-bd3e-454b-a65c-d1a206a51ed8\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9"
Feb 17 15:43:58.003037 master-0 kubenswrapper[26425]: I0217 15:43:58.002995 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgwcr\" (UniqueName: \"kubernetes.io/projected/9ca3a1eb-b468-4097-b2f7-08d3564d20cc-kube-api-access-qgwcr\") pod \"manila-operator-controller-manager-54f6768c69-fnw4p\" (UID: \"9ca3a1eb-b468-4097-b2f7-08d3564d20cc\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p"
Feb 17 15:43:58.003124 master-0 kubenswrapper[26425]: I0217 15:43:58.003096 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9657d\" (UniqueName: \"kubernetes.io/projected/d6435656-9d1f-4de0-bec9-62942d041759-kube-api-access-9657d\") pod \"infra-operator-controller-manager-5f879c76b6-2x4ww\" (UID: \"d6435656-9d1f-4de0-bec9-62942d041759\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww"
Feb 17 15:43:58.003162 master-0 kubenswrapper[26425]: I0217 15:43:58.003147 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk647\" (UniqueName: \"kubernetes.io/projected/2148911f-33d3-45d6-9441-98e6c9c0b0dc-kube-api-access-kk647\") pod \"heat-operator-controller-manager-69f49c598c-ngkpp\" (UID: \"2148911f-33d3-45d6-9441-98e6c9c0b0dc\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp"
Feb 17 15:43:58.003220 master-0 kubenswrapper[26425]: I0217 15:43:58.003199 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m292\" (UniqueName: \"kubernetes.io/projected/fcd7732c-b9a6-48a5-bd36-8b51a9da2789-kube-api-access-6m292\") pod \"keystone-operator-controller-manager-b4d948c87-xnzn6\" (UID: \"fcd7732c-b9a6-48a5-bd36-8b51a9da2789\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6"
Feb 17 15:43:58.009134 master-0 kubenswrapper[26425]: I0217 15:43:58.009037 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lknqv\" (UniqueName: \"kubernetes.io/projected/5def741b-238b-47f4-a3cf-9dfd57b8b5b9-kube-api-access-lknqv\") pod \"glance-operator-controller-manager-77987464f4-sqmnn\" (UID: \"5def741b-238b-47f4-a3cf-9dfd57b8b5b9\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn"
Feb 17 15:43:58.011119 master-0 kubenswrapper[26425]: I0217 15:43:58.011079 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f"
Feb 17 15:43:58.044671 master-0 kubenswrapper[26425]: I0217 15:43:58.044479 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk647\" (UniqueName: \"kubernetes.io/projected/2148911f-33d3-45d6-9441-98e6c9c0b0dc-kube-api-access-kk647\") pod \"heat-operator-controller-manager-69f49c598c-ngkpp\" (UID: \"2148911f-33d3-45d6-9441-98e6c9c0b0dc\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp"
Feb 17 15:43:58.051794 master-0 kubenswrapper[26425]: I0217 15:43:58.050723 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn"
Feb 17 15:43:58.051794 master-0 kubenswrapper[26425]: I0217 15:43:58.050790 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jpql\" (UniqueName: \"kubernetes.io/projected/f46c3852-bd3e-454b-a65c-d1a206a51ed8-kube-api-access-9jpql\") pod \"horizon-operator-controller-manager-5b9b8895d5-2wdk9\" (UID: \"f46c3852-bd3e-454b-a65c-d1a206a51ed8\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9"
Feb 17 15:43:58.056894 master-0 kubenswrapper[26425]: I0217 15:43:58.056823 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr"]
Feb 17 15:43:58.059392 master-0 kubenswrapper[26425]: I0217 15:43:58.059367 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr"
Feb 17 15:43:58.093744 master-0 kubenswrapper[26425]: I0217 15:43:58.093692 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-2td54"]
Feb 17 15:43:58.095221 master-0 kubenswrapper[26425]: I0217 15:43:58.095174 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2td54"
Feb 17 15:43:58.109111 master-0 kubenswrapper[26425]: I0217 15:43:58.106658 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9657d\" (UniqueName: \"kubernetes.io/projected/d6435656-9d1f-4de0-bec9-62942d041759-kube-api-access-9657d\") pod \"infra-operator-controller-manager-5f879c76b6-2x4ww\" (UID: \"d6435656-9d1f-4de0-bec9-62942d041759\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww"
Feb 17 15:43:58.109111 master-0 kubenswrapper[26425]: I0217 15:43:58.106746 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m292\" (UniqueName: \"kubernetes.io/projected/fcd7732c-b9a6-48a5-bd36-8b51a9da2789-kube-api-access-6m292\") pod \"keystone-operator-controller-manager-b4d948c87-xnzn6\" (UID: \"fcd7732c-b9a6-48a5-bd36-8b51a9da2789\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6"
Feb 17 15:43:58.109111 master-0 kubenswrapper[26425]: I0217 15:43:58.106841 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwhjw\" (UniqueName: \"kubernetes.io/projected/a43562a4-5283-4089-94cc-af78066de5d9-kube-api-access-mwhjw\") pod \"ironic-operator-controller-manager-554564d7fc-x78p9\" (UID: \"a43562a4-5283-4089-94cc-af78066de5d9\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9"
Feb 17 15:43:58.109111 master-0 kubenswrapper[26425]: I0217 15:43:58.106900 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2x4ww\" (UID: \"d6435656-9d1f-4de0-bec9-62942d041759\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww"
Feb 17 15:43:58.109111 master-0 kubenswrapper[26425]: I0217 15:43:58.106928 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ssvc\" (UniqueName: \"kubernetes.io/projected/ad03d25f-0d11-45ee-83ce-179f47fcd066-kube-api-access-9ssvc\") pod \"mariadb-operator-controller-manager-6994f66f48-dgqgn\" (UID: \"ad03d25f-0d11-45ee-83ce-179f47fcd066\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn"
Feb 17 15:43:58.109111 master-0 kubenswrapper[26425]: I0217 15:43:58.106978 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgwcr\" (UniqueName: \"kubernetes.io/projected/9ca3a1eb-b468-4097-b2f7-08d3564d20cc-kube-api-access-qgwcr\") pod \"manila-operator-controller-manager-54f6768c69-fnw4p\" (UID: \"9ca3a1eb-b468-4097-b2f7-08d3564d20cc\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p"
Feb 17 15:43:58.109111 master-0 kubenswrapper[26425]: E0217 15:43:58.107206 26425 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 15:43:58.109111 master-0 kubenswrapper[26425]: E0217 15:43:58.107264 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert podName:d6435656-9d1f-4de0-bec9-62942d041759 nodeName:}" failed. No retries permitted until 2026-02-17 15:43:58.607246491 +0000 UTC m=+1700.498970309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert") pod "infra-operator-controller-manager-5f879c76b6-2x4ww" (UID: "d6435656-9d1f-4de0-bec9-62942d041759") : secret "infra-operator-webhook-server-cert" not found
Feb 17 15:43:58.128560 master-0 kubenswrapper[26425]: I0217 15:43:58.122335 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp"
Feb 17 15:43:58.134079 master-0 kubenswrapper[26425]: I0217 15:43:58.134036 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m292\" (UniqueName: \"kubernetes.io/projected/fcd7732c-b9a6-48a5-bd36-8b51a9da2789-kube-api-access-6m292\") pod \"keystone-operator-controller-manager-b4d948c87-xnzn6\" (UID: \"fcd7732c-b9a6-48a5-bd36-8b51a9da2789\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6"
Feb 17 15:43:58.134079 master-0 kubenswrapper[26425]: I0217 15:43:58.134065 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9657d\" (UniqueName: \"kubernetes.io/projected/d6435656-9d1f-4de0-bec9-62942d041759-kube-api-access-9657d\") pod \"infra-operator-controller-manager-5f879c76b6-2x4ww\" (UID: \"d6435656-9d1f-4de0-bec9-62942d041759\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww"
Feb 17 15:43:58.135363 master-0 kubenswrapper[26425]: I0217 15:43:58.135327 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgwcr\" (UniqueName: \"kubernetes.io/projected/9ca3a1eb-b468-4097-b2f7-08d3564d20cc-kube-api-access-qgwcr\") pod \"manila-operator-controller-manager-54f6768c69-fnw4p\" (UID: \"9ca3a1eb-b468-4097-b2f7-08d3564d20cc\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p"
Feb 17 15:43:58.145306 master-0 kubenswrapper[26425]: I0217 15:43:58.140751 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p"
Feb 17 15:43:58.160936 master-0 kubenswrapper[26425]: I0217 15:43:58.157984 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr"]
Feb 17 15:43:58.170539 master-0 kubenswrapper[26425]: I0217 15:43:58.164913 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwhjw\" (UniqueName: \"kubernetes.io/projected/a43562a4-5283-4089-94cc-af78066de5d9-kube-api-access-mwhjw\") pod \"ironic-operator-controller-manager-554564d7fc-x78p9\" (UID: \"a43562a4-5283-4089-94cc-af78066de5d9\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9"
Feb 17 15:43:58.179217 master-0 kubenswrapper[26425]: I0217 15:43:58.171067 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn"]
Feb 17 15:43:58.197826 master-0 kubenswrapper[26425]: I0217 15:43:58.197706 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-2td54"]
Feb 17 15:43:58.209496 master-0 kubenswrapper[26425]: I0217 15:43:58.204766 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67"]
Feb 17 15:43:58.209496 master-0 kubenswrapper[26425]: I0217 15:43:58.205961 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67"
Feb 17 15:43:58.209848 master-0 kubenswrapper[26425]: I0217 15:43:58.209805 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ssvc\" (UniqueName: \"kubernetes.io/projected/ad03d25f-0d11-45ee-83ce-179f47fcd066-kube-api-access-9ssvc\") pod \"mariadb-operator-controller-manager-6994f66f48-dgqgn\" (UID: \"ad03d25f-0d11-45ee-83ce-179f47fcd066\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn"
Feb 17 15:43:58.209982 master-0 kubenswrapper[26425]: I0217 15:43:58.209908 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbfcc\" (UniqueName: \"kubernetes.io/projected/eb3829e1-cdae-40b9-8cdc-c4a17142b5fb-kube-api-access-vbfcc\") pod \"neutron-operator-controller-manager-64ddbf8bb-5mtgr\" (UID: \"eb3829e1-cdae-40b9-8cdc-c4a17142b5fb\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr"
Feb 17 15:43:58.210038 master-0 kubenswrapper[26425]: I0217 15:43:58.210022 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mldlp\" (UniqueName: \"kubernetes.io/projected/35f3e5eb-c70c-44c3-9a43-19202ba6c631-kube-api-access-mldlp\") pod \"nova-operator-controller-manager-567668f5cf-2td54\" (UID: \"35f3e5eb-c70c-44c3-9a43-19202ba6c631\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2td54"
Feb 17 15:43:58.220159 master-0 kubenswrapper[26425]: I0217 15:43:58.219717 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9"
Feb 17 15:43:58.224500 master-0 kubenswrapper[26425]: I0217 15:43:58.223521 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67"]
Feb 17 15:43:58.229390 master-0 kubenswrapper[26425]: I0217 15:43:58.229341 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ssvc\" (UniqueName: \"kubernetes.io/projected/ad03d25f-0d11-45ee-83ce-179f47fcd066-kube-api-access-9ssvc\") pod \"mariadb-operator-controller-manager-6994f66f48-dgqgn\" (UID: \"ad03d25f-0d11-45ee-83ce-179f47fcd066\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn"
Feb 17 15:43:58.258905 master-0 kubenswrapper[26425]: I0217 15:43:58.258848 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn"]
Feb 17 15:43:58.260129 master-0 kubenswrapper[26425]: I0217 15:43:58.260105 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn"
Feb 17 15:43:58.262240 master-0 kubenswrapper[26425]: I0217 15:43:58.262210 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 17 15:43:58.266351 master-0 kubenswrapper[26425]: I0217 15:43:58.266315 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x"]
Feb 17 15:43:58.267767 master-0 kubenswrapper[26425]: I0217 15:43:58.267739 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x"
Feb 17 15:43:58.273203 master-0 kubenswrapper[26425]: I0217 15:43:58.273153 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn"
Feb 17 15:43:58.285750 master-0 kubenswrapper[26425]: I0217 15:43:58.284917 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x"]
Feb 17 15:43:58.291880 master-0 kubenswrapper[26425]: I0217 15:43:58.291816 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn"]
Feb 17 15:43:58.312768 master-0 kubenswrapper[26425]: I0217 15:43:58.311601 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkks7\" (UniqueName: \"kubernetes.io/projected/231830cb-0e67-4056-bafd-2b5357344fac-kube-api-access-rkks7\") pod \"octavia-operator-controller-manager-69f8888797-6sx67\" (UID: \"231830cb-0e67-4056-bafd-2b5357344fac\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67"
Feb 17 15:43:58.312768 master-0 kubenswrapper[26425]: I0217 15:43:58.312513 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbfcc\" (UniqueName: \"kubernetes.io/projected/eb3829e1-cdae-40b9-8cdc-c4a17142b5fb-kube-api-access-vbfcc\") pod \"neutron-operator-controller-manager-64ddbf8bb-5mtgr\" (UID: \"eb3829e1-cdae-40b9-8cdc-c4a17142b5fb\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr"
Feb 17 15:43:58.312768 master-0 kubenswrapper[26425]: I0217 15:43:58.312569 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mldlp\" (UniqueName: \"kubernetes.io/projected/35f3e5eb-c70c-44c3-9a43-19202ba6c631-kube-api-access-mldlp\") pod \"nova-operator-controller-manager-567668f5cf-2td54\" (UID: \"35f3e5eb-c70c-44c3-9a43-19202ba6c631\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2td54"
Feb 17 15:43:58.320830 master-0 kubenswrapper[26425]: I0217 15:43:58.320784 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg"]
Feb 17 15:43:58.321927 master-0 kubenswrapper[26425]: I0217 15:43:58.321895 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg"
Feb 17 15:43:58.350626 master-0 kubenswrapper[26425]: I0217 15:43:58.350549 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbfcc\" (UniqueName: \"kubernetes.io/projected/eb3829e1-cdae-40b9-8cdc-c4a17142b5fb-kube-api-access-vbfcc\") pod \"neutron-operator-controller-manager-64ddbf8bb-5mtgr\" (UID: \"eb3829e1-cdae-40b9-8cdc-c4a17142b5fb\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr"
Feb 17 15:43:58.360915 master-0 kubenswrapper[26425]: I0217 15:43:58.357256 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mldlp\" (UniqueName: \"kubernetes.io/projected/35f3e5eb-c70c-44c3-9a43-19202ba6c631-kube-api-access-mldlp\") pod \"nova-operator-controller-manager-567668f5cf-2td54\" (UID: \"35f3e5eb-c70c-44c3-9a43-19202ba6c631\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2td54"
Feb 17 15:43:58.360915 master-0 kubenswrapper[26425]: I0217 15:43:58.359520 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9"
Feb 17 15:43:58.360915 master-0 kubenswrapper[26425]: I0217 15:43:58.359967 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg"]
Feb 17 15:43:58.377584 master-0 kubenswrapper[26425]: I0217 15:43:58.377433 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr"
Feb 17 15:43:58.385235 master-0 kubenswrapper[26425]: I0217 15:43:58.385116 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2td54"
Feb 17 15:43:58.416531 master-0 kubenswrapper[26425]: I0217 15:43:58.395967 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6"
Feb 17 15:43:58.416531 master-0 kubenswrapper[26425]: I0217 15:43:58.414378 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn\" (UID: \"ae75ffb2-1631-4a5d-af03-4421c2d000a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn"
Feb 17 15:43:58.416531 master-0 kubenswrapper[26425]: I0217 15:43:58.414446 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkks7\" (UniqueName: \"kubernetes.io/projected/231830cb-0e67-4056-bafd-2b5357344fac-kube-api-access-rkks7\") pod \"octavia-operator-controller-manager-69f8888797-6sx67\" (UID: \"231830cb-0e67-4056-bafd-2b5357344fac\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67"
Feb 17 15:43:58.416531 master-0 kubenswrapper[26425]: I0217 15:43:58.416341 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh7wc\" (UniqueName: \"kubernetes.io/projected/4acfd1ef-33b8-4cef-a320-0813274a3d34-kube-api-access-vh7wc\") pod \"placement-operator-controller-manager-8497b45c89-dbcqg\" (UID: \"4acfd1ef-33b8-4cef-a320-0813274a3d34\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg"
Feb 17 15:43:58.416531 master-0 kubenswrapper[26425]: I0217 15:43:58.416392 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skjkh\" (UniqueName: \"kubernetes.io/projected/5865c6d5-ba38-4d97-9f7a-a9fc3d130b19-kube-api-access-skjkh\") pod \"ovn-operator-controller-manager-d44cf6b75-gwh4x\" (UID: \"5865c6d5-ba38-4d97-9f7a-a9fc3d130b19\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x"
Feb 17 15:43:58.416531 master-0 kubenswrapper[26425]: I0217 15:43:58.416420 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j6jw\" (UniqueName: \"kubernetes.io/projected/ae75ffb2-1631-4a5d-af03-4421c2d000a1-kube-api-access-6j6jw\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn\" (UID: \"ae75ffb2-1631-4a5d-af03-4421c2d000a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn"
Feb 17 15:43:58.447648 master-0 kubenswrapper[26425]: I0217 15:43:58.437974 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkks7\" (UniqueName: \"kubernetes.io/projected/231830cb-0e67-4056-bafd-2b5357344fac-kube-api-access-rkks7\") pod \"octavia-operator-controller-manager-69f8888797-6sx67\" (UID: \"231830cb-0e67-4056-bafd-2b5357344fac\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67"
Feb 17 15:43:58.480414 master-0 kubenswrapper[26425]: I0217
15:43:58.480373 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-zdksg"] Feb 17 15:43:58.482243 master-0 kubenswrapper[26425]: I0217 15:43:58.481689 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b"] Feb 17 15:43:58.483235 master-0 kubenswrapper[26425]: I0217 15:43:58.482563 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-zdksg"] Feb 17 15:43:58.483235 master-0 kubenswrapper[26425]: I0217 15:43:58.482589 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b"] Feb 17 15:43:58.483235 master-0 kubenswrapper[26425]: I0217 15:43:58.482664 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b" Feb 17 15:43:58.483235 master-0 kubenswrapper[26425]: I0217 15:43:58.482759 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-zdksg" Feb 17 15:43:58.483235 master-0 kubenswrapper[26425]: I0217 15:43:58.482944 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-2vx66"] Feb 17 15:43:58.484049 master-0 kubenswrapper[26425]: I0217 15:43:58.484029 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-2vx66" Feb 17 15:43:58.495350 master-0 kubenswrapper[26425]: I0217 15:43:58.495327 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-2vx66"] Feb 17 15:43:58.518278 master-0 kubenswrapper[26425]: I0217 15:43:58.518230 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh7wc\" (UniqueName: \"kubernetes.io/projected/4acfd1ef-33b8-4cef-a320-0813274a3d34-kube-api-access-vh7wc\") pod \"placement-operator-controller-manager-8497b45c89-dbcqg\" (UID: \"4acfd1ef-33b8-4cef-a320-0813274a3d34\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg" Feb 17 15:43:58.518372 master-0 kubenswrapper[26425]: I0217 15:43:58.518308 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skjkh\" (UniqueName: \"kubernetes.io/projected/5865c6d5-ba38-4d97-9f7a-a9fc3d130b19-kube-api-access-skjkh\") pod \"ovn-operator-controller-manager-d44cf6b75-gwh4x\" (UID: \"5865c6d5-ba38-4d97-9f7a-a9fc3d130b19\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x" Feb 17 15:43:58.518372 master-0 kubenswrapper[26425]: I0217 15:43:58.518349 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j6jw\" (UniqueName: \"kubernetes.io/projected/ae75ffb2-1631-4a5d-af03-4421c2d000a1-kube-api-access-6j6jw\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn\" (UID: \"ae75ffb2-1631-4a5d-af03-4421c2d000a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" Feb 17 15:43:58.518444 master-0 kubenswrapper[26425]: I0217 15:43:58.518425 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert\") 
pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn\" (UID: \"ae75ffb2-1631-4a5d-af03-4421c2d000a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" Feb 17 15:43:58.520402 master-0 kubenswrapper[26425]: E0217 15:43:58.520379 26425 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 15:43:58.520486 master-0 kubenswrapper[26425]: E0217 15:43:58.520428 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert podName:ae75ffb2-1631-4a5d-af03-4421c2d000a1 nodeName:}" failed. No retries permitted until 2026-02-17 15:43:59.020411651 +0000 UTC m=+1700.912135469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" (UID: "ae75ffb2-1631-4a5d-af03-4421c2d000a1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 15:43:58.531389 master-0 kubenswrapper[26425]: I0217 15:43:58.531331 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27"] Feb 17 15:43:58.538857 master-0 kubenswrapper[26425]: I0217 15:43:58.538821 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27" Feb 17 15:43:58.542915 master-0 kubenswrapper[26425]: I0217 15:43:58.542881 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh7wc\" (UniqueName: \"kubernetes.io/projected/4acfd1ef-33b8-4cef-a320-0813274a3d34-kube-api-access-vh7wc\") pod \"placement-operator-controller-manager-8497b45c89-dbcqg\" (UID: \"4acfd1ef-33b8-4cef-a320-0813274a3d34\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg" Feb 17 15:43:58.549671 master-0 kubenswrapper[26425]: I0217 15:43:58.549628 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skjkh\" (UniqueName: \"kubernetes.io/projected/5865c6d5-ba38-4d97-9f7a-a9fc3d130b19-kube-api-access-skjkh\") pod \"ovn-operator-controller-manager-d44cf6b75-gwh4x\" (UID: \"5865c6d5-ba38-4d97-9f7a-a9fc3d130b19\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x" Feb 17 15:43:58.562137 master-0 kubenswrapper[26425]: I0217 15:43:58.560132 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j6jw\" (UniqueName: \"kubernetes.io/projected/ae75ffb2-1631-4a5d-af03-4421c2d000a1-kube-api-access-6j6jw\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn\" (UID: \"ae75ffb2-1631-4a5d-af03-4421c2d000a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" Feb 17 15:43:58.564816 master-0 kubenswrapper[26425]: I0217 15:43:58.564535 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27"] Feb 17 15:43:58.623134 master-0 kubenswrapper[26425]: I0217 15:43:58.621043 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmsq8\" (UniqueName: 
\"kubernetes.io/projected/f2231eca-08d0-4ab0-8b61-e2f73aca05f5-kube-api-access-tmsq8\") pod \"watcher-operator-controller-manager-5db88f68c-ctk27\" (UID: \"f2231eca-08d0-4ab0-8b61-e2f73aca05f5\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27" Feb 17 15:43:58.623134 master-0 kubenswrapper[26425]: I0217 15:43:58.621107 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2x4ww\" (UID: \"d6435656-9d1f-4de0-bec9-62942d041759\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww" Feb 17 15:43:58.623134 master-0 kubenswrapper[26425]: I0217 15:43:58.621147 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl"] Feb 17 15:43:58.623134 master-0 kubenswrapper[26425]: E0217 15:43:58.621264 26425 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 15:43:58.623134 master-0 kubenswrapper[26425]: E0217 15:43:58.621307 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert podName:d6435656-9d1f-4de0-bec9-62942d041759 nodeName:}" failed. No retries permitted until 2026-02-17 15:43:59.62129151 +0000 UTC m=+1701.513015328 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert") pod "infra-operator-controller-manager-5f879c76b6-2x4ww" (UID: "d6435656-9d1f-4de0-bec9-62942d041759") : secret "infra-operator-webhook-server-cert" not found Feb 17 15:43:58.623134 master-0 kubenswrapper[26425]: I0217 15:43:58.621164 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n92wp\" (UniqueName: \"kubernetes.io/projected/9dda403e-9fcc-4390-9d5f-ffc7c7c6b439-kube-api-access-n92wp\") pod \"telemetry-operator-controller-manager-7f45b4ff68-wk82b\" (UID: \"9dda403e-9fcc-4390-9d5f-ffc7c7c6b439\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b" Feb 17 15:43:58.623134 master-0 kubenswrapper[26425]: I0217 15:43:58.621753 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdl8j\" (UniqueName: \"kubernetes.io/projected/f525513f-ac8e-4d1a-b2ba-24217e0e642f-kube-api-access-xdl8j\") pod \"swift-operator-controller-manager-68f46476f-zdksg\" (UID: \"f525513f-ac8e-4d1a-b2ba-24217e0e642f\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-zdksg" Feb 17 15:43:58.623134 master-0 kubenswrapper[26425]: I0217 15:43:58.621961 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g92rn\" (UniqueName: \"kubernetes.io/projected/705adb1b-fbc0-40c4-a0e2-6bbe555516f5-kube-api-access-g92rn\") pod \"test-operator-controller-manager-7866795846-2vx66\" (UID: \"705adb1b-fbc0-40c4-a0e2-6bbe555516f5\") " pod="openstack-operators/test-operator-controller-manager-7866795846-2vx66" Feb 17 15:43:58.623134 master-0 kubenswrapper[26425]: I0217 15:43:58.622612 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:43:58.626094 master-0 kubenswrapper[26425]: I0217 15:43:58.626068 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 17 15:43:58.626264 master-0 kubenswrapper[26425]: I0217 15:43:58.626248 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 17 15:43:58.640215 master-0 kubenswrapper[26425]: I0217 15:43:58.635985 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl"] Feb 17 15:43:58.652133 master-0 kubenswrapper[26425]: I0217 15:43:58.652078 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hqlr5"] Feb 17 15:43:58.660693 master-0 kubenswrapper[26425]: I0217 15:43:58.660640 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hqlr5"] Feb 17 15:43:58.662025 master-0 kubenswrapper[26425]: I0217 15:43:58.661980 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hqlr5" Feb 17 15:43:58.724666 master-0 kubenswrapper[26425]: I0217 15:43:58.723627 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:43:58.724666 master-0 kubenswrapper[26425]: I0217 15:43:58.723670 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g92rn\" (UniqueName: \"kubernetes.io/projected/705adb1b-fbc0-40c4-a0e2-6bbe555516f5-kube-api-access-g92rn\") pod \"test-operator-controller-manager-7866795846-2vx66\" (UID: \"705adb1b-fbc0-40c4-a0e2-6bbe555516f5\") " pod="openstack-operators/test-operator-controller-manager-7866795846-2vx66" Feb 17 15:43:58.724666 master-0 kubenswrapper[26425]: I0217 15:43:58.723700 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72twg\" (UniqueName: \"kubernetes.io/projected/aa6a2998-eacc-4bc5-b73c-677087888726-kube-api-access-72twg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hqlr5\" (UID: \"aa6a2998-eacc-4bc5-b73c-677087888726\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hqlr5" Feb 17 15:43:58.724666 master-0 kubenswrapper[26425]: I0217 15:43:58.723754 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmsq8\" (UniqueName: \"kubernetes.io/projected/f2231eca-08d0-4ab0-8b61-e2f73aca05f5-kube-api-access-tmsq8\") pod \"watcher-operator-controller-manager-5db88f68c-ctk27\" (UID: \"f2231eca-08d0-4ab0-8b61-e2f73aca05f5\") " 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27" Feb 17 15:43:58.724666 master-0 kubenswrapper[26425]: I0217 15:43:58.723814 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n92wp\" (UniqueName: \"kubernetes.io/projected/9dda403e-9fcc-4390-9d5f-ffc7c7c6b439-kube-api-access-n92wp\") pod \"telemetry-operator-controller-manager-7f45b4ff68-wk82b\" (UID: \"9dda403e-9fcc-4390-9d5f-ffc7c7c6b439\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b" Feb 17 15:43:58.724666 master-0 kubenswrapper[26425]: I0217 15:43:58.723848 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdl8j\" (UniqueName: \"kubernetes.io/projected/f525513f-ac8e-4d1a-b2ba-24217e0e642f-kube-api-access-xdl8j\") pod \"swift-operator-controller-manager-68f46476f-zdksg\" (UID: \"f525513f-ac8e-4d1a-b2ba-24217e0e642f\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-zdksg" Feb 17 15:43:58.724666 master-0 kubenswrapper[26425]: I0217 15:43:58.723890 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsnt5\" (UniqueName: \"kubernetes.io/projected/eb510143-0788-4676-91db-626e861a0b5c-kube-api-access-dsnt5\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:43:58.724666 master-0 kubenswrapper[26425]: I0217 15:43:58.723920 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " 
pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:43:58.730848 master-0 kubenswrapper[26425]: I0217 15:43:58.730802 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67" Feb 17 15:43:58.741785 master-0 kubenswrapper[26425]: I0217 15:43:58.741746 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmsq8\" (UniqueName: \"kubernetes.io/projected/f2231eca-08d0-4ab0-8b61-e2f73aca05f5-kube-api-access-tmsq8\") pod \"watcher-operator-controller-manager-5db88f68c-ctk27\" (UID: \"f2231eca-08d0-4ab0-8b61-e2f73aca05f5\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27" Feb 17 15:43:58.744095 master-0 kubenswrapper[26425]: I0217 15:43:58.744062 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g92rn\" (UniqueName: \"kubernetes.io/projected/705adb1b-fbc0-40c4-a0e2-6bbe555516f5-kube-api-access-g92rn\") pod \"test-operator-controller-manager-7866795846-2vx66\" (UID: \"705adb1b-fbc0-40c4-a0e2-6bbe555516f5\") " pod="openstack-operators/test-operator-controller-manager-7866795846-2vx66" Feb 17 15:43:58.745537 master-0 kubenswrapper[26425]: I0217 15:43:58.744723 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdl8j\" (UniqueName: \"kubernetes.io/projected/f525513f-ac8e-4d1a-b2ba-24217e0e642f-kube-api-access-xdl8j\") pod \"swift-operator-controller-manager-68f46476f-zdksg\" (UID: \"f525513f-ac8e-4d1a-b2ba-24217e0e642f\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-zdksg" Feb 17 15:43:58.745537 master-0 kubenswrapper[26425]: I0217 15:43:58.745404 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n92wp\" (UniqueName: \"kubernetes.io/projected/9dda403e-9fcc-4390-9d5f-ffc7c7c6b439-kube-api-access-n92wp\") pod 
\"telemetry-operator-controller-manager-7f45b4ff68-wk82b\" (UID: \"9dda403e-9fcc-4390-9d5f-ffc7c7c6b439\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b" Feb 17 15:43:58.785596 master-0 kubenswrapper[26425]: I0217 15:43:58.785546 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x" Feb 17 15:43:58.796997 master-0 kubenswrapper[26425]: I0217 15:43:58.795913 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg" Feb 17 15:43:58.815700 master-0 kubenswrapper[26425]: I0217 15:43:58.815318 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b" Feb 17 15:43:58.825854 master-0 kubenswrapper[26425]: I0217 15:43:58.825802 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsnt5\" (UniqueName: \"kubernetes.io/projected/eb510143-0788-4676-91db-626e861a0b5c-kube-api-access-dsnt5\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:43:58.825972 master-0 kubenswrapper[26425]: I0217 15:43:58.825870 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:43:58.825972 master-0 kubenswrapper[26425]: I0217 15:43:58.825949 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:43:58.826055 master-0 kubenswrapper[26425]: I0217 15:43:58.825990 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72twg\" (UniqueName: \"kubernetes.io/projected/aa6a2998-eacc-4bc5-b73c-677087888726-kube-api-access-72twg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hqlr5\" (UID: \"aa6a2998-eacc-4bc5-b73c-677087888726\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hqlr5" Feb 17 15:43:58.827957 master-0 kubenswrapper[26425]: E0217 15:43:58.827901 26425 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 15:43:58.828087 master-0 kubenswrapper[26425]: E0217 15:43:58.828058 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs podName:eb510143-0788-4676-91db-626e861a0b5c nodeName:}" failed. No retries permitted until 2026-02-17 15:43:59.328009979 +0000 UTC m=+1701.219733887 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-98qgl" (UID: "eb510143-0788-4676-91db-626e861a0b5c") : secret "webhook-server-cert" not found Feb 17 15:43:58.828631 master-0 kubenswrapper[26425]: E0217 15:43:58.828576 26425 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 15:43:58.828716 master-0 kubenswrapper[26425]: E0217 15:43:58.828684 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs podName:eb510143-0788-4676-91db-626e861a0b5c nodeName:}" failed. No retries permitted until 2026-02-17 15:43:59.328660834 +0000 UTC m=+1701.220384732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-98qgl" (UID: "eb510143-0788-4676-91db-626e861a0b5c") : secret "metrics-server-cert" not found Feb 17 15:43:58.852105 master-0 kubenswrapper[26425]: I0217 15:43:58.851297 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-zdksg" Feb 17 15:43:58.865572 master-0 kubenswrapper[26425]: I0217 15:43:58.865501 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72twg\" (UniqueName: \"kubernetes.io/projected/aa6a2998-eacc-4bc5-b73c-677087888726-kube-api-access-72twg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hqlr5\" (UID: \"aa6a2998-eacc-4bc5-b73c-677087888726\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hqlr5" Feb 17 15:43:58.867729 master-0 kubenswrapper[26425]: I0217 15:43:58.867690 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsnt5\" (UniqueName: \"kubernetes.io/projected/eb510143-0788-4676-91db-626e861a0b5c-kube-api-access-dsnt5\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:43:58.892328 master-0 kubenswrapper[26425]: I0217 15:43:58.892265 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-2vx66" Feb 17 15:43:58.908413 master-0 kubenswrapper[26425]: I0217 15:43:58.908362 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27" Feb 17 15:43:58.951808 master-0 kubenswrapper[26425]: I0217 15:43:58.949710 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f"] Feb 17 15:43:58.968720 master-0 kubenswrapper[26425]: I0217 15:43:58.968553 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd"] Feb 17 15:43:59.021279 master-0 kubenswrapper[26425]: I0217 15:43:59.020282 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hqlr5" Feb 17 15:43:59.033866 master-0 kubenswrapper[26425]: I0217 15:43:59.033445 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8"] Feb 17 15:43:59.037179 master-0 kubenswrapper[26425]: I0217 15:43:59.036617 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn\" (UID: \"ae75ffb2-1631-4a5d-af03-4421c2d000a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" Feb 17 15:43:59.037179 master-0 kubenswrapper[26425]: E0217 15:43:59.036863 26425 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 15:43:59.037179 master-0 kubenswrapper[26425]: E0217 15:43:59.036951 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert podName:ae75ffb2-1631-4a5d-af03-4421c2d000a1 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:44:00.03693104 +0000 UTC m=+1701.928654858 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" (UID: "ae75ffb2-1631-4a5d-af03-4421c2d000a1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 15:43:59.342022 master-0 kubenswrapper[26425]: I0217 15:43:59.341930 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:43:59.342367 master-0 kubenswrapper[26425]: E0217 15:43:59.342223 26425 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 15:43:59.342367 master-0 kubenswrapper[26425]: E0217 15:43:59.342351 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs podName:eb510143-0788-4676-91db-626e861a0b5c nodeName:}" failed. No retries permitted until 2026-02-17 15:44:00.342310224 +0000 UTC m=+1702.234034082 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-98qgl" (UID: "eb510143-0788-4676-91db-626e861a0b5c") : secret "metrics-server-cert" not found Feb 17 15:43:59.342614 master-0 kubenswrapper[26425]: I0217 15:43:59.342565 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:43:59.342845 master-0 kubenswrapper[26425]: E0217 15:43:59.342798 26425 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 15:43:59.342923 master-0 kubenswrapper[26425]: E0217 15:43:59.342884 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs podName:eb510143-0788-4676-91db-626e861a0b5c nodeName:}" failed. No retries permitted until 2026-02-17 15:44:00.342856147 +0000 UTC m=+1702.234580015 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-98qgl" (UID: "eb510143-0788-4676-91db-626e861a0b5c") : secret "webhook-server-cert" not found Feb 17 15:43:59.609582 master-0 kubenswrapper[26425]: I0217 15:43:59.608605 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p"] Feb 17 15:43:59.610394 master-0 kubenswrapper[26425]: W0217 15:43:59.610343 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5def741b_238b_47f4_a3cf_9dfd57b8b5b9.slice/crio-f1298f96a7fb20889488d0347c4d81bbc53b70eac2e24eece3fcf5521ea97a41 WatchSource:0}: Error finding container f1298f96a7fb20889488d0347c4d81bbc53b70eac2e24eece3fcf5521ea97a41: Status 404 returned error can't find the container with id f1298f96a7fb20889488d0347c4d81bbc53b70eac2e24eece3fcf5521ea97a41 Feb 17 15:43:59.616916 master-0 kubenswrapper[26425]: I0217 15:43:59.616866 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn"] Feb 17 15:43:59.622643 master-0 kubenswrapper[26425]: W0217 15:43:59.622597 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2148911f_33d3_45d6_9441_98e6c9c0b0dc.slice/crio-e941175d6fd6599bc326ef76f7ff412c67160df46dd60e76a9ca43c98fc9f219 WatchSource:0}: Error finding container e941175d6fd6599bc326ef76f7ff412c67160df46dd60e76a9ca43c98fc9f219: Status 404 returned error can't find the container with id e941175d6fd6599bc326ef76f7ff412c67160df46dd60e76a9ca43c98fc9f219 Feb 17 15:43:59.626060 master-0 kubenswrapper[26425]: I0217 15:43:59.626008 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9"] Feb 17 15:43:59.632735 master-0 kubenswrapper[26425]: W0217 15:43:59.632684 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad03d25f_0d11_45ee_83ce_179f47fcd066.slice/crio-adc8663dc217d3a3e483eac4505b4cc1de67c9c59d0c63c903de49a5f20bfe8d WatchSource:0}: Error finding container adc8663dc217d3a3e483eac4505b4cc1de67c9c59d0c63c903de49a5f20bfe8d: Status 404 returned error can't find the container with id adc8663dc217d3a3e483eac4505b4cc1de67c9c59d0c63c903de49a5f20bfe8d Feb 17 15:43:59.643723 master-0 kubenswrapper[26425]: I0217 15:43:59.641565 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9"] Feb 17 15:43:59.654243 master-0 kubenswrapper[26425]: I0217 15:43:59.653979 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp"] Feb 17 15:43:59.657327 master-0 kubenswrapper[26425]: I0217 15:43:59.657209 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2x4ww\" (UID: \"d6435656-9d1f-4de0-bec9-62942d041759\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww" Feb 17 15:43:59.657517 master-0 kubenswrapper[26425]: E0217 15:43:59.657373 26425 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 15:43:59.657517 master-0 kubenswrapper[26425]: E0217 15:43:59.657440 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert podName:d6435656-9d1f-4de0-bec9-62942d041759 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:44:01.657417883 +0000 UTC m=+1703.549141711 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert") pod "infra-operator-controller-manager-5f879c76b6-2x4ww" (UID: "d6435656-9d1f-4de0-bec9-62942d041759") : secret "infra-operator-webhook-server-cert" not found Feb 17 15:43:59.658030 master-0 kubenswrapper[26425]: I0217 15:43:59.657784 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn"] Feb 17 15:43:59.790510 master-0 kubenswrapper[26425]: I0217 15:43:59.790403 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn" event={"ID":"5def741b-238b-47f4-a3cf-9dfd57b8b5b9","Type":"ContainerStarted","Data":"f1298f96a7fb20889488d0347c4d81bbc53b70eac2e24eece3fcf5521ea97a41"} Feb 17 15:43:59.796413 master-0 kubenswrapper[26425]: I0217 15:43:59.796311 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9" event={"ID":"a43562a4-5283-4089-94cc-af78066de5d9","Type":"ContainerStarted","Data":"dfa9b3b42665bf4fb794f593ff12845cc9a8cfa28a7bf1ba6b563b418ec0417e"} Feb 17 15:43:59.798049 master-0 kubenswrapper[26425]: I0217 15:43:59.798008 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn" event={"ID":"ad03d25f-0d11-45ee-83ce-179f47fcd066","Type":"ContainerStarted","Data":"adc8663dc217d3a3e483eac4505b4cc1de67c9c59d0c63c903de49a5f20bfe8d"} Feb 17 15:43:59.799484 master-0 kubenswrapper[26425]: I0217 15:43:59.799444 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9" 
event={"ID":"f46c3852-bd3e-454b-a65c-d1a206a51ed8","Type":"ContainerStarted","Data":"dead0c22e40857c340d65671dbcc3249995904c5a9aca5e42f16e38ade72a49f"} Feb 17 15:43:59.801403 master-0 kubenswrapper[26425]: I0217 15:43:59.801367 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp" event={"ID":"2148911f-33d3-45d6-9441-98e6c9c0b0dc","Type":"ContainerStarted","Data":"e941175d6fd6599bc326ef76f7ff412c67160df46dd60e76a9ca43c98fc9f219"} Feb 17 15:43:59.803471 master-0 kubenswrapper[26425]: I0217 15:43:59.803347 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd" event={"ID":"6148874d-2cc5-40f6-9adf-857f5c5a654c","Type":"ContainerStarted","Data":"5d3e103d4d8f8fec371d068363dcdd8a9b429b2fe91a61451b73dcfa1d9671b1"} Feb 17 15:43:59.805588 master-0 kubenswrapper[26425]: I0217 15:43:59.805538 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p" event={"ID":"9ca3a1eb-b468-4097-b2f7-08d3564d20cc","Type":"ContainerStarted","Data":"2cb72f638ddf68f78a4aad9aec42202332e3896bc0dd62d4130b5b881aa5e7c8"} Feb 17 15:43:59.809104 master-0 kubenswrapper[26425]: I0217 15:43:59.809056 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f" event={"ID":"320b61c5-b8bb-4c7c-a14d-77143d9523e6","Type":"ContainerStarted","Data":"ce434a2cde3167b2714db06d90083af39013c3c52085e35fa0ab9b915c6c1204"} Feb 17 15:43:59.811249 master-0 kubenswrapper[26425]: I0217 15:43:59.811204 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8" event={"ID":"24902857-8de0-4a77-b2e0-e0d12b8b2f34","Type":"ContainerStarted","Data":"dd5a8984b64457800a7548f94787b95630472a9d70e710005a085b55f7817a3a"} Feb 17 15:43:59.935785 master-0 
kubenswrapper[26425]: W0217 15:43:59.935731 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcd7732c_b9a6_48a5_bd36_8b51a9da2789.slice/crio-0de032119e304470445f7787f66c1e58529ed61c745d9e4ea7024cd4aa3f7fb0 WatchSource:0}: Error finding container 0de032119e304470445f7787f66c1e58529ed61c745d9e4ea7024cd4aa3f7fb0: Status 404 returned error can't find the container with id 0de032119e304470445f7787f66c1e58529ed61c745d9e4ea7024cd4aa3f7fb0 Feb 17 15:43:59.935922 master-0 kubenswrapper[26425]: I0217 15:43:59.935865 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-2td54"] Feb 17 15:43:59.945389 master-0 kubenswrapper[26425]: W0217 15:43:59.945333 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb3829e1_cdae_40b9_8cdc_c4a17142b5fb.slice/crio-9a58974095b8a81407899cd042ce7a7edfdd5c28f5abda01d5a1a644375e1bad WatchSource:0}: Error finding container 9a58974095b8a81407899cd042ce7a7edfdd5c28f5abda01d5a1a644375e1bad: Status 404 returned error can't find the container with id 9a58974095b8a81407899cd042ce7a7edfdd5c28f5abda01d5a1a644375e1bad Feb 17 15:43:59.948327 master-0 kubenswrapper[26425]: I0217 15:43:59.948293 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr"] Feb 17 15:43:59.957127 master-0 kubenswrapper[26425]: I0217 15:43:59.956555 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6"] Feb 17 15:44:00.084631 master-0 kubenswrapper[26425]: I0217 15:44:00.084585 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert\") pod 
\"openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn\" (UID: \"ae75ffb2-1631-4a5d-af03-4421c2d000a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" Feb 17 15:44:00.084892 master-0 kubenswrapper[26425]: E0217 15:44:00.084759 26425 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 15:44:00.084892 master-0 kubenswrapper[26425]: E0217 15:44:00.084847 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert podName:ae75ffb2-1631-4a5d-af03-4421c2d000a1 nodeName:}" failed. No retries permitted until 2026-02-17 15:44:02.084823226 +0000 UTC m=+1703.976547044 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" (UID: "ae75ffb2-1631-4a5d-af03-4421c2d000a1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 15:44:00.395722 master-0 kubenswrapper[26425]: I0217 15:44:00.392389 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:44:00.395722 master-0 kubenswrapper[26425]: I0217 15:44:00.392527 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " 
pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:44:00.395722 master-0 kubenswrapper[26425]: E0217 15:44:00.392720 26425 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 15:44:00.395722 master-0 kubenswrapper[26425]: E0217 15:44:00.392789 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs podName:eb510143-0788-4676-91db-626e861a0b5c nodeName:}" failed. No retries permitted until 2026-02-17 15:44:02.392758312 +0000 UTC m=+1704.284482130 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-98qgl" (UID: "eb510143-0788-4676-91db-626e861a0b5c") : secret "webhook-server-cert" not found Feb 17 15:44:00.395722 master-0 kubenswrapper[26425]: E0217 15:44:00.393376 26425 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 15:44:00.395722 master-0 kubenswrapper[26425]: E0217 15:44:00.393423 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs podName:eb510143-0788-4676-91db-626e861a0b5c nodeName:}" failed. No retries permitted until 2026-02-17 15:44:02.393402708 +0000 UTC m=+1704.285126526 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-98qgl" (UID: "eb510143-0788-4676-91db-626e861a0b5c") : secret "metrics-server-cert" not found Feb 17 15:44:00.477650 master-0 kubenswrapper[26425]: I0217 15:44:00.477606 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x"] Feb 17 15:44:00.478055 master-0 kubenswrapper[26425]: I0217 15:44:00.478044 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27"] Feb 17 15:44:00.478185 master-0 kubenswrapper[26425]: I0217 15:44:00.478172 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b"] Feb 17 15:44:00.478251 master-0 kubenswrapper[26425]: I0217 15:44:00.478241 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hqlr5"] Feb 17 15:44:00.478336 master-0 kubenswrapper[26425]: I0217 15:44:00.478325 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-zdksg"] Feb 17 15:44:00.492486 master-0 kubenswrapper[26425]: I0217 15:44:00.488620 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-2vx66"] Feb 17 15:44:00.499523 master-0 kubenswrapper[26425]: I0217 15:44:00.496700 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67"] Feb 17 15:44:00.504722 master-0 kubenswrapper[26425]: I0217 15:44:00.504671 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg"] Feb 17 
15:44:00.517641 master-0 kubenswrapper[26425]: E0217 15:44:00.513439 26425 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-skjkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-gwh4x_openstack-operators(5865c6d5-ba38-4d97-9f7a-a9fc3d130b19): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 15:44:00.517641 master-0 kubenswrapper[26425]: E0217 15:44:00.513896 26425 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g92rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-2vx66_openstack-operators(705adb1b-fbc0-40c4-a0e2-6bbe555516f5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 15:44:00.517641 master-0 kubenswrapper[26425]: E0217 15:44:00.514886 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x" podUID="5865c6d5-ba38-4d97-9f7a-a9fc3d130b19" Feb 17 
15:44:00.517641 master-0 kubenswrapper[26425]: E0217 15:44:00.516225 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-2vx66" podUID="705adb1b-fbc0-40c4-a0e2-6bbe555516f5" Feb 17 15:44:00.545755 master-0 kubenswrapper[26425]: E0217 15:44:00.545273 26425 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vh7wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-dbcqg_openstack-operators(4acfd1ef-33b8-4cef-a320-0813274a3d34): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 15:44:00.550003 master-0 kubenswrapper[26425]: E0217 15:44:00.546536 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg" podUID="4acfd1ef-33b8-4cef-a320-0813274a3d34" Feb 17 15:44:00.835411 master-0 kubenswrapper[26425]: I0217 15:44:00.835320 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b" event={"ID":"9dda403e-9fcc-4390-9d5f-ffc7c7c6b439","Type":"ContainerStarted","Data":"0a99c6843392d2340a9b29550c708a63a5fce2aab02eb8db3285176f8aca7a1e"} Feb 17 15:44:00.837793 master-0 kubenswrapper[26425]: I0217 15:44:00.837752 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-zdksg" 
event={"ID":"f525513f-ac8e-4d1a-b2ba-24217e0e642f","Type":"ContainerStarted","Data":"f6fff551abb02b500e33ec54ddf778257d94c40aa6856d23e11b0f13aee0a339"} Feb 17 15:44:00.840156 master-0 kubenswrapper[26425]: I0217 15:44:00.840115 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg" event={"ID":"4acfd1ef-33b8-4cef-a320-0813274a3d34","Type":"ContainerStarted","Data":"59041b9f153e3828a93e9a60bc3e7c11dd7379528feab59ceed7ddb032552999"} Feb 17 15:44:00.842710 master-0 kubenswrapper[26425]: E0217 15:44:00.842648 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg" podUID="4acfd1ef-33b8-4cef-a320-0813274a3d34" Feb 17 15:44:00.846934 master-0 kubenswrapper[26425]: I0217 15:44:00.846834 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27" event={"ID":"f2231eca-08d0-4ab0-8b61-e2f73aca05f5","Type":"ContainerStarted","Data":"39f59b106eb2cbda0c54f7b4781c55ae500d7c2ac7a954cd417bf16119ef17b3"} Feb 17 15:44:00.849491 master-0 kubenswrapper[26425]: I0217 15:44:00.849407 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr" event={"ID":"eb3829e1-cdae-40b9-8cdc-c4a17142b5fb","Type":"ContainerStarted","Data":"9a58974095b8a81407899cd042ce7a7edfdd5c28f5abda01d5a1a644375e1bad"} Feb 17 15:44:00.853808 master-0 kubenswrapper[26425]: I0217 15:44:00.851078 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x" 
event={"ID":"5865c6d5-ba38-4d97-9f7a-a9fc3d130b19","Type":"ContainerStarted","Data":"63c11eb4818523db8974eb0cc0bdd6bf972c8c1e1c653482f912b5af19af45d7"} Feb 17 15:44:00.855124 master-0 kubenswrapper[26425]: E0217 15:44:00.855051 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x" podUID="5865c6d5-ba38-4d97-9f7a-a9fc3d130b19" Feb 17 15:44:00.857726 master-0 kubenswrapper[26425]: I0217 15:44:00.855105 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2td54" event={"ID":"35f3e5eb-c70c-44c3-9a43-19202ba6c631","Type":"ContainerStarted","Data":"bf557f023095f1d3b62f289be4048aface55d83aabd0b5c489d748bc4e1fb7db"} Feb 17 15:44:00.861135 master-0 kubenswrapper[26425]: I0217 15:44:00.861087 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67" event={"ID":"231830cb-0e67-4056-bafd-2b5357344fac","Type":"ContainerStarted","Data":"b7942e277535c6a019d7f4891dd4bfdbcceb6f38beeb169259dc860ec118c5cc"} Feb 17 15:44:00.865372 master-0 kubenswrapper[26425]: I0217 15:44:00.865327 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6" event={"ID":"fcd7732c-b9a6-48a5-bd36-8b51a9da2789","Type":"ContainerStarted","Data":"0de032119e304470445f7787f66c1e58529ed61c745d9e4ea7024cd4aa3f7fb0"} Feb 17 15:44:00.867393 master-0 kubenswrapper[26425]: I0217 15:44:00.867341 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hqlr5" 
event={"ID":"aa6a2998-eacc-4bc5-b73c-677087888726","Type":"ContainerStarted","Data":"eedeec4aaf545845910b8ab1914eb6b972a6441fff3a1b184c6506eeaa7b34ec"} Feb 17 15:44:00.869475 master-0 kubenswrapper[26425]: I0217 15:44:00.869416 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-2vx66" event={"ID":"705adb1b-fbc0-40c4-a0e2-6bbe555516f5","Type":"ContainerStarted","Data":"e9568e7b9e2c9ce281ce1aaba1c226f465f9f6a708b1fd7efba90b74d071e22a"} Feb 17 15:44:00.871478 master-0 kubenswrapper[26425]: E0217 15:44:00.871419 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-2vx66" podUID="705adb1b-fbc0-40c4-a0e2-6bbe555516f5" Feb 17 15:44:01.730113 master-0 kubenswrapper[26425]: I0217 15:44:01.730051 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2x4ww\" (UID: \"d6435656-9d1f-4de0-bec9-62942d041759\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww" Feb 17 15:44:01.730315 master-0 kubenswrapper[26425]: E0217 15:44:01.730271 26425 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 15:44:01.730357 master-0 kubenswrapper[26425]: E0217 15:44:01.730329 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert podName:d6435656-9d1f-4de0-bec9-62942d041759 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:44:05.730309963 +0000 UTC m=+1707.622033781 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert") pod "infra-operator-controller-manager-5f879c76b6-2x4ww" (UID: "d6435656-9d1f-4de0-bec9-62942d041759") : secret "infra-operator-webhook-server-cert" not found Feb 17 15:44:01.882506 master-0 kubenswrapper[26425]: E0217 15:44:01.881783 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-2vx66" podUID="705adb1b-fbc0-40c4-a0e2-6bbe555516f5" Feb 17 15:44:01.886122 master-0 kubenswrapper[26425]: E0217 15:44:01.885388 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x" podUID="5865c6d5-ba38-4d97-9f7a-a9fc3d130b19" Feb 17 15:44:01.886122 master-0 kubenswrapper[26425]: E0217 15:44:01.885531 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg" podUID="4acfd1ef-33b8-4cef-a320-0813274a3d34" Feb 17 15:44:02.140961 master-0 kubenswrapper[26425]: I0217 15:44:02.140903 26425 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn\" (UID: \"ae75ffb2-1631-4a5d-af03-4421c2d000a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" Feb 17 15:44:02.141186 master-0 kubenswrapper[26425]: E0217 15:44:02.141107 26425 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 15:44:02.141253 master-0 kubenswrapper[26425]: E0217 15:44:02.141204 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert podName:ae75ffb2-1631-4a5d-af03-4421c2d000a1 nodeName:}" failed. No retries permitted until 2026-02-17 15:44:06.141179139 +0000 UTC m=+1708.032902957 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" (UID: "ae75ffb2-1631-4a5d-af03-4421c2d000a1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 15:44:02.447052 master-0 kubenswrapper[26425]: I0217 15:44:02.446921 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:44:02.449896 master-0 kubenswrapper[26425]: I0217 15:44:02.449843 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs\") pod 
\"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:44:02.449987 master-0 kubenswrapper[26425]: E0217 15:44:02.449920 26425 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 15:44:02.450033 master-0 kubenswrapper[26425]: E0217 15:44:02.449990 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs podName:eb510143-0788-4676-91db-626e861a0b5c nodeName:}" failed. No retries permitted until 2026-02-17 15:44:06.449970595 +0000 UTC m=+1708.341694413 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-98qgl" (UID: "eb510143-0788-4676-91db-626e861a0b5c") : secret "metrics-server-cert" not found Feb 17 15:44:02.450469 master-0 kubenswrapper[26425]: E0217 15:44:02.450434 26425 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 15:44:02.450548 master-0 kubenswrapper[26425]: E0217 15:44:02.450494 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs podName:eb510143-0788-4676-91db-626e861a0b5c nodeName:}" failed. No retries permitted until 2026-02-17 15:44:06.450482718 +0000 UTC m=+1708.342206646 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-98qgl" (UID: "eb510143-0788-4676-91db-626e861a0b5c") : secret "webhook-server-cert" not found Feb 17 15:44:05.809312 master-0 kubenswrapper[26425]: I0217 15:44:05.809220 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2x4ww\" (UID: \"d6435656-9d1f-4de0-bec9-62942d041759\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww" Feb 17 15:44:05.810021 master-0 kubenswrapper[26425]: E0217 15:44:05.809397 26425 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 15:44:05.810021 master-0 kubenswrapper[26425]: E0217 15:44:05.809448 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert podName:d6435656-9d1f-4de0-bec9-62942d041759 nodeName:}" failed. No retries permitted until 2026-02-17 15:44:13.809435761 +0000 UTC m=+1715.701159579 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert") pod "infra-operator-controller-manager-5f879c76b6-2x4ww" (UID: "d6435656-9d1f-4de0-bec9-62942d041759") : secret "infra-operator-webhook-server-cert" not found Feb 17 15:44:06.221903 master-0 kubenswrapper[26425]: I0217 15:44:06.221821 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn\" (UID: \"ae75ffb2-1631-4a5d-af03-4421c2d000a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" Feb 17 15:44:06.222281 master-0 kubenswrapper[26425]: E0217 15:44:06.222248 26425 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 15:44:06.222362 master-0 kubenswrapper[26425]: E0217 15:44:06.222343 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert podName:ae75ffb2-1631-4a5d-af03-4421c2d000a1 nodeName:}" failed. No retries permitted until 2026-02-17 15:44:14.222315314 +0000 UTC m=+1716.114039142 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" (UID: "ae75ffb2-1631-4a5d-af03-4421c2d000a1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 15:44:06.527894 master-0 kubenswrapper[26425]: I0217 15:44:06.526786 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:44:06.527894 master-0 kubenswrapper[26425]: I0217 15:44:06.526880 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:44:06.527894 master-0 kubenswrapper[26425]: E0217 15:44:06.527093 26425 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 15:44:06.527894 master-0 kubenswrapper[26425]: E0217 15:44:06.527211 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs podName:eb510143-0788-4676-91db-626e861a0b5c nodeName:}" failed. No retries permitted until 2026-02-17 15:44:14.527180235 +0000 UTC m=+1716.418904083 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-98qgl" (UID: "eb510143-0788-4676-91db-626e861a0b5c") : secret "metrics-server-cert" not found Feb 17 15:44:06.527894 master-0 kubenswrapper[26425]: E0217 15:44:06.527282 26425 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 15:44:06.527894 master-0 kubenswrapper[26425]: E0217 15:44:06.527369 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs podName:eb510143-0788-4676-91db-626e861a0b5c nodeName:}" failed. No retries permitted until 2026-02-17 15:44:14.52734066 +0000 UTC m=+1716.419064558 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-98qgl" (UID: "eb510143-0788-4676-91db-626e861a0b5c") : secret "webhook-server-cert" not found Feb 17 15:44:13.897167 master-0 kubenswrapper[26425]: I0217 15:44:13.897089 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2x4ww\" (UID: \"d6435656-9d1f-4de0-bec9-62942d041759\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww" Feb 17 15:44:13.901581 master-0 kubenswrapper[26425]: I0217 15:44:13.901530 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6435656-9d1f-4de0-bec9-62942d041759-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2x4ww\" (UID: \"d6435656-9d1f-4de0-bec9-62942d041759\") " 
pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww" Feb 17 15:44:13.936490 master-0 kubenswrapper[26425]: I0217 15:44:13.936422 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww" Feb 17 15:44:14.303068 master-0 kubenswrapper[26425]: I0217 15:44:14.302919 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn\" (UID: \"ae75ffb2-1631-4a5d-af03-4421c2d000a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" Feb 17 15:44:14.303395 master-0 kubenswrapper[26425]: E0217 15:44:14.303133 26425 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 15:44:14.303395 master-0 kubenswrapper[26425]: E0217 15:44:14.303222 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert podName:ae75ffb2-1631-4a5d-af03-4421c2d000a1 nodeName:}" failed. No retries permitted until 2026-02-17 15:44:30.303201323 +0000 UTC m=+1732.194925141 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" (UID: "ae75ffb2-1631-4a5d-af03-4421c2d000a1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 15:44:14.608475 master-0 kubenswrapper[26425]: I0217 15:44:14.608307 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:44:14.608720 master-0 kubenswrapper[26425]: E0217 15:44:14.608550 26425 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 15:44:14.608720 master-0 kubenswrapper[26425]: E0217 15:44:14.608650 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs podName:eb510143-0788-4676-91db-626e861a0b5c nodeName:}" failed. No retries permitted until 2026-02-17 15:44:30.608622508 +0000 UTC m=+1732.500346326 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-98qgl" (UID: "eb510143-0788-4676-91db-626e861a0b5c") : secret "metrics-server-cert" not found Feb 17 15:44:14.609128 master-0 kubenswrapper[26425]: I0217 15:44:14.609082 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:44:14.609473 master-0 kubenswrapper[26425]: E0217 15:44:14.609427 26425 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 15:44:14.609530 master-0 kubenswrapper[26425]: E0217 15:44:14.609488 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs podName:eb510143-0788-4676-91db-626e861a0b5c nodeName:}" failed. No retries permitted until 2026-02-17 15:44:30.609477639 +0000 UTC m=+1732.501201517 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-98qgl" (UID: "eb510143-0788-4676-91db-626e861a0b5c") : secret "webhook-server-cert" not found Feb 17 15:44:18.208966 master-0 kubenswrapper[26425]: I0217 15:44:18.208451 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww"] Feb 17 15:44:18.303714 master-0 kubenswrapper[26425]: W0217 15:44:18.303639 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6435656_9d1f_4de0_bec9_62942d041759.slice/crio-96a36c2ab89dcb68888283f695b238e11d039048f282dde616fd35611adf9e70 WatchSource:0}: Error finding container 96a36c2ab89dcb68888283f695b238e11d039048f282dde616fd35611adf9e70: Status 404 returned error can't find the container with id 96a36c2ab89dcb68888283f695b238e11d039048f282dde616fd35611adf9e70 Feb 17 15:44:19.122278 master-0 kubenswrapper[26425]: I0217 15:44:19.122204 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:44:19.122637 master-0 kubenswrapper[26425]: E0217 15:44:19.122556 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:44:19.122637 master-0 kubenswrapper[26425]: E0217 15:44:19.122632 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:44:19.122754 
master-0 kubenswrapper[26425]: E0217 15:44:19.122734 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:46:21.122686019 +0000 UTC m=+1843.014409847 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:44:19.259470 master-0 kubenswrapper[26425]: I0217 15:44:19.259383 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp" event={"ID":"2148911f-33d3-45d6-9441-98e6c9c0b0dc","Type":"ContainerStarted","Data":"4f02a0135b79ea03eaae3ecac58189e169f471ae817e93e8ea9bc76858cb8e74"} Feb 17 15:44:19.260634 master-0 kubenswrapper[26425]: I0217 15:44:19.260599 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp" Feb 17 15:44:19.261971 master-0 kubenswrapper[26425]: I0217 15:44:19.261885 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww" event={"ID":"d6435656-9d1f-4de0-bec9-62942d041759","Type":"ContainerStarted","Data":"96a36c2ab89dcb68888283f695b238e11d039048f282dde616fd35611adf9e70"} Feb 17 15:44:19.266040 master-0 kubenswrapper[26425]: I0217 15:44:19.265980 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b" event={"ID":"9dda403e-9fcc-4390-9d5f-ffc7c7c6b439","Type":"ContainerStarted","Data":"2eb1428335fe8bb766ec8ecc8bd9754b85c075ca300b7e658096aace6467de5c"} Feb 17 
15:44:19.266143 master-0 kubenswrapper[26425]: I0217 15:44:19.266071 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b" Feb 17 15:44:19.268375 master-0 kubenswrapper[26425]: I0217 15:44:19.268308 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p" event={"ID":"9ca3a1eb-b468-4097-b2f7-08d3564d20cc","Type":"ContainerStarted","Data":"7ef7a859ecd3e871cbf2bea7ab1d04e88c879fdbecbe52ed7f5c4342af59df1d"} Feb 17 15:44:19.268492 master-0 kubenswrapper[26425]: I0217 15:44:19.268434 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p" Feb 17 15:44:19.274229 master-0 kubenswrapper[26425]: I0217 15:44:19.274155 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f" event={"ID":"320b61c5-b8bb-4c7c-a14d-77143d9523e6","Type":"ContainerStarted","Data":"4bdac8a67b9d5d728e2995944fd05a94ab10571cba9d3dd4459b0b629305a8ff"} Feb 17 15:44:19.274788 master-0 kubenswrapper[26425]: I0217 15:44:19.274257 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f" Feb 17 15:44:19.276661 master-0 kubenswrapper[26425]: I0217 15:44:19.276376 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8" event={"ID":"24902857-8de0-4a77-b2e0-e0d12b8b2f34","Type":"ContainerStarted","Data":"28be5b8767b90251d2392c3b4d8e8d0a3fa7f8e28e88c5b542fc14d9da53344b"} Feb 17 15:44:19.276746 master-0 kubenswrapper[26425]: I0217 15:44:19.276692 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8" Feb 17 
15:44:19.283956 master-0 kubenswrapper[26425]: I0217 15:44:19.283862 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp" podStartSLOduration=4.803910835 podStartE2EDuration="22.2838401s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:43:59.630165856 +0000 UTC m=+1701.521889684" lastFinishedPulling="2026-02-17 15:44:17.110095131 +0000 UTC m=+1719.001818949" observedRunningTime="2026-02-17 15:44:19.281617927 +0000 UTC m=+1721.173341765" watchObservedRunningTime="2026-02-17 15:44:19.2838401 +0000 UTC m=+1721.175563928" Feb 17 15:44:19.288420 master-0 kubenswrapper[26425]: I0217 15:44:19.287454 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn" event={"ID":"ad03d25f-0d11-45ee-83ce-179f47fcd066","Type":"ContainerStarted","Data":"70728ad6251e663a8a90013e7f54d3f9d3542ac7f556e14752dc4e9dd4165326"} Feb 17 15:44:19.288420 master-0 kubenswrapper[26425]: I0217 15:44:19.288361 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn" Feb 17 15:44:19.294883 master-0 kubenswrapper[26425]: I0217 15:44:19.294825 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9" event={"ID":"f46c3852-bd3e-454b-a65c-d1a206a51ed8","Type":"ContainerStarted","Data":"2d27d985f11853f15cfbdbf2ef129f4b344fc9cf0af8ee398dc6ee2209076975"} Feb 17 15:44:19.298610 master-0 kubenswrapper[26425]: I0217 15:44:19.298060 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9" Feb 17 15:44:19.307407 master-0 kubenswrapper[26425]: I0217 15:44:19.307100 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn" event={"ID":"5def741b-238b-47f4-a3cf-9dfd57b8b5b9","Type":"ContainerStarted","Data":"ff00496cb0e2e7250a5bd924d1b8718a045aedf1ab593814cda08061b05bc47f"} Feb 17 15:44:19.308046 master-0 kubenswrapper[26425]: I0217 15:44:19.308011 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn" Feb 17 15:44:19.311385 master-0 kubenswrapper[26425]: I0217 15:44:19.311285 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8" podStartSLOduration=4.221703396 podStartE2EDuration="22.311260341s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:43:59.019662974 +0000 UTC m=+1700.911386792" lastFinishedPulling="2026-02-17 15:44:17.109219919 +0000 UTC m=+1719.000943737" observedRunningTime="2026-02-17 15:44:19.298424932 +0000 UTC m=+1721.190148760" watchObservedRunningTime="2026-02-17 15:44:19.311260341 +0000 UTC m=+1721.202984159" Feb 17 15:44:19.322364 master-0 kubenswrapper[26425]: I0217 15:44:19.322268 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9" event={"ID":"a43562a4-5283-4089-94cc-af78066de5d9","Type":"ContainerStarted","Data":"b13f8592aa146369583b01eab72e856b663c4b545cfd579ac02844cd2cda4286"} Feb 17 15:44:19.322839 master-0 kubenswrapper[26425]: I0217 15:44:19.322748 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9" Feb 17 15:44:19.332049 master-0 kubenswrapper[26425]: I0217 15:44:19.331886 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-zdksg" 
event={"ID":"f525513f-ac8e-4d1a-b2ba-24217e0e642f","Type":"ContainerStarted","Data":"406be949d880e67f4d5bfae763828288af302b89cc508af6015c7c375d3fea40"} Feb 17 15:44:19.333350 master-0 kubenswrapper[26425]: I0217 15:44:19.333297 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-zdksg" Feb 17 15:44:19.348605 master-0 kubenswrapper[26425]: I0217 15:44:19.345516 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f" podStartSLOduration=7.459253354 podStartE2EDuration="22.345489145s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:43:58.976290279 +0000 UTC m=+1700.868014097" lastFinishedPulling="2026-02-17 15:44:13.86252603 +0000 UTC m=+1715.754249888" observedRunningTime="2026-02-17 15:44:19.333684071 +0000 UTC m=+1721.225407899" watchObservedRunningTime="2026-02-17 15:44:19.345489145 +0000 UTC m=+1721.237212973" Feb 17 15:44:19.387496 master-0 kubenswrapper[26425]: I0217 15:44:19.386842 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p" podStartSLOduration=5.465943109 podStartE2EDuration="22.38682397s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:43:59.623882335 +0000 UTC m=+1701.515606193" lastFinishedPulling="2026-02-17 15:44:16.544763236 +0000 UTC m=+1718.436487054" observedRunningTime="2026-02-17 15:44:19.379512095 +0000 UTC m=+1721.271235933" watchObservedRunningTime="2026-02-17 15:44:19.38682397 +0000 UTC m=+1721.278547778" Feb 17 15:44:19.417981 master-0 kubenswrapper[26425]: I0217 15:44:19.416999 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b" podStartSLOduration=4.81935234 
podStartE2EDuration="21.416955386s" podCreationTimestamp="2026-02-17 15:43:58 +0000 UTC" firstStartedPulling="2026-02-17 15:44:00.512008734 +0000 UTC m=+1702.403732552" lastFinishedPulling="2026-02-17 15:44:17.10961178 +0000 UTC m=+1719.001335598" observedRunningTime="2026-02-17 15:44:19.405005968 +0000 UTC m=+1721.296729806" watchObservedRunningTime="2026-02-17 15:44:19.416955386 +0000 UTC m=+1721.308679214" Feb 17 15:44:19.453539 master-0 kubenswrapper[26425]: I0217 15:44:19.449957 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-zdksg" podStartSLOduration=4.852447208 podStartE2EDuration="21.44993365s" podCreationTimestamp="2026-02-17 15:43:58 +0000 UTC" firstStartedPulling="2026-02-17 15:44:00.512377193 +0000 UTC m=+1702.404101011" lastFinishedPulling="2026-02-17 15:44:17.109863635 +0000 UTC m=+1719.001587453" observedRunningTime="2026-02-17 15:44:19.445148085 +0000 UTC m=+1721.336871913" watchObservedRunningTime="2026-02-17 15:44:19.44993365 +0000 UTC m=+1721.341657468" Feb 17 15:44:19.528570 master-0 kubenswrapper[26425]: I0217 15:44:19.525089 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9" podStartSLOduration=5.045370351 podStartE2EDuration="22.52506336s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:43:59.631385216 +0000 UTC m=+1701.523109044" lastFinishedPulling="2026-02-17 15:44:17.111078245 +0000 UTC m=+1719.002802053" observedRunningTime="2026-02-17 15:44:19.498169191 +0000 UTC m=+1721.389893029" watchObservedRunningTime="2026-02-17 15:44:19.52506336 +0000 UTC m=+1721.416787188" Feb 17 15:44:19.568995 master-0 kubenswrapper[26425]: I0217 15:44:19.563514 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn" 
podStartSLOduration=7.206391055 podStartE2EDuration="22.563484815s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:43:59.636201722 +0000 UTC m=+1701.527925550" lastFinishedPulling="2026-02-17 15:44:14.993295482 +0000 UTC m=+1716.885019310" observedRunningTime="2026-02-17 15:44:19.528447461 +0000 UTC m=+1721.420171299" watchObservedRunningTime="2026-02-17 15:44:19.563484815 +0000 UTC m=+1721.455208633" Feb 17 15:44:19.643543 master-0 kubenswrapper[26425]: I0217 15:44:19.635923 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn" podStartSLOduration=5.141562598 podStartE2EDuration="22.635902569s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:43:59.61328015 +0000 UTC m=+1701.505003978" lastFinishedPulling="2026-02-17 15:44:17.107620131 +0000 UTC m=+1718.999343949" observedRunningTime="2026-02-17 15:44:19.553060334 +0000 UTC m=+1721.444784162" watchObservedRunningTime="2026-02-17 15:44:19.635902569 +0000 UTC m=+1721.527626387" Feb 17 15:44:19.683399 master-0 kubenswrapper[26425]: I0217 15:44:19.678914 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9" podStartSLOduration=5.77817526 podStartE2EDuration="22.678889295s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:43:59.631509819 +0000 UTC m=+1701.523233657" lastFinishedPulling="2026-02-17 15:44:16.532223874 +0000 UTC m=+1718.423947692" observedRunningTime="2026-02-17 15:44:19.580705409 +0000 UTC m=+1721.472429237" watchObservedRunningTime="2026-02-17 15:44:19.678889295 +0000 UTC m=+1721.570613123" Feb 17 15:44:20.347542 master-0 kubenswrapper[26425]: I0217 15:44:20.347003 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x" 
event={"ID":"5865c6d5-ba38-4d97-9f7a-a9fc3d130b19","Type":"ContainerStarted","Data":"6624ed330e8d90498bab1773871b53cf716d561d0a34b737cb508b3c442dd669"} Feb 17 15:44:20.348285 master-0 kubenswrapper[26425]: I0217 15:44:20.347575 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x" Feb 17 15:44:20.355625 master-0 kubenswrapper[26425]: I0217 15:44:20.352101 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-2vx66" event={"ID":"705adb1b-fbc0-40c4-a0e2-6bbe555516f5","Type":"ContainerStarted","Data":"636bf72dd35b47b952f9fe4c7ad7074ab56850350981cdd54bc7c88a8ab8613b"} Feb 17 15:44:20.355625 master-0 kubenswrapper[26425]: I0217 15:44:20.353148 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-2vx66" Feb 17 15:44:20.362493 master-0 kubenswrapper[26425]: I0217 15:44:20.360433 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd" event={"ID":"6148874d-2cc5-40f6-9adf-857f5c5a654c","Type":"ContainerStarted","Data":"9dfc5a4f93ac68e4f8e871433c1f1c19a3d38b052676b01e3813e669d875fbf1"} Feb 17 15:44:20.362493 master-0 kubenswrapper[26425]: I0217 15:44:20.361498 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd" Feb 17 15:44:20.368495 master-0 kubenswrapper[26425]: I0217 15:44:20.367102 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg" event={"ID":"4acfd1ef-33b8-4cef-a320-0813274a3d34","Type":"ContainerStarted","Data":"0516561dbf317fc685cb9192b14b172f1518b7b4722094a880a60a4749f4927f"} Feb 17 15:44:20.368495 master-0 kubenswrapper[26425]: I0217 15:44:20.367763 26425 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg" Feb 17 15:44:20.379500 master-0 kubenswrapper[26425]: I0217 15:44:20.374344 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27" event={"ID":"f2231eca-08d0-4ab0-8b61-e2f73aca05f5","Type":"ContainerStarted","Data":"210aad434cbc491addc6d4898f0f5096c90a483497b4d8a06279b8365bea991b"} Feb 17 15:44:20.379500 master-0 kubenswrapper[26425]: I0217 15:44:20.374497 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27" Feb 17 15:44:20.379500 master-0 kubenswrapper[26425]: I0217 15:44:20.379495 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hqlr5" event={"ID":"aa6a2998-eacc-4bc5-b73c-677087888726","Type":"ContainerStarted","Data":"0ac2f71355988270d1ac31f96a03b453d242683834d6e755cd6edce5492df9b4"} Feb 17 15:44:20.383500 master-0 kubenswrapper[26425]: I0217 15:44:20.381833 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr" event={"ID":"eb3829e1-cdae-40b9-8cdc-c4a17142b5fb","Type":"ContainerStarted","Data":"16694124e0827b0e0fe4919a72d71a48c87d1f83a9c7470bf8905f505eca3843"} Feb 17 15:44:20.383500 master-0 kubenswrapper[26425]: I0217 15:44:20.382017 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr" Feb 17 15:44:20.388712 master-0 kubenswrapper[26425]: I0217 15:44:20.383891 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67" 
event={"ID":"231830cb-0e67-4056-bafd-2b5357344fac","Type":"ContainerStarted","Data":"99adfe32cea781449d1fb4bdaad38199bf002b4774c49ba729f49758ec96f43d"} Feb 17 15:44:20.388712 master-0 kubenswrapper[26425]: I0217 15:44:20.384628 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67" Feb 17 15:44:20.388712 master-0 kubenswrapper[26425]: I0217 15:44:20.386788 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6" event={"ID":"fcd7732c-b9a6-48a5-bd36-8b51a9da2789","Type":"ContainerStarted","Data":"7d64682c2735dd8893afb37d224343bd7cf2d2b63eef7380fc537b02e2773d91"} Feb 17 15:44:20.388712 master-0 kubenswrapper[26425]: I0217 15:44:20.386863 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6" Feb 17 15:44:20.396498 master-0 kubenswrapper[26425]: I0217 15:44:20.393907 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x" podStartSLOduration=5.380431351 podStartE2EDuration="23.393884303s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:44:00.512551287 +0000 UTC m=+1702.404275105" lastFinishedPulling="2026-02-17 15:44:18.526004239 +0000 UTC m=+1720.417728057" observedRunningTime="2026-02-17 15:44:20.39209016 +0000 UTC m=+1722.283813988" watchObservedRunningTime="2026-02-17 15:44:20.393884303 +0000 UTC m=+1722.285608121" Feb 17 15:44:20.427501 master-0 kubenswrapper[26425]: I0217 15:44:20.427320 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2td54" event={"ID":"35f3e5eb-c70c-44c3-9a43-19202ba6c631","Type":"ContainerStarted","Data":"484ab67a0f55523f1b83613ed67fe47247d99d42917fbdd834181739a25e203c"} Feb 
17 15:44:20.427501 master-0 kubenswrapper[26425]: I0217 15:44:20.427382 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2td54" Feb 17 15:44:20.477689 master-0 kubenswrapper[26425]: I0217 15:44:20.473610 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27" podStartSLOduration=5.874329587 podStartE2EDuration="22.473585522s" podCreationTimestamp="2026-02-17 15:43:58 +0000 UTC" firstStartedPulling="2026-02-17 15:44:00.512234979 +0000 UTC m=+1702.403958797" lastFinishedPulling="2026-02-17 15:44:17.111490904 +0000 UTC m=+1719.003214732" observedRunningTime="2026-02-17 15:44:20.424997272 +0000 UTC m=+1722.316721090" watchObservedRunningTime="2026-02-17 15:44:20.473585522 +0000 UTC m=+1722.365309340" Feb 17 15:44:20.477689 master-0 kubenswrapper[26425]: I0217 15:44:20.476191 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67" podStartSLOduration=6.8777015 podStartE2EDuration="23.476160305s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:44:00.512312182 +0000 UTC m=+1702.404036000" lastFinishedPulling="2026-02-17 15:44:17.110770987 +0000 UTC m=+1719.002494805" observedRunningTime="2026-02-17 15:44:20.461148253 +0000 UTC m=+1722.352872071" watchObservedRunningTime="2026-02-17 15:44:20.476160305 +0000 UTC m=+1722.367884133" Feb 17 15:44:20.528493 master-0 kubenswrapper[26425]: I0217 15:44:20.524285 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6" podStartSLOduration=6.352663675 podStartE2EDuration="23.524262583s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:43:59.938547134 +0000 UTC m=+1701.830270952" 
lastFinishedPulling="2026-02-17 15:44:17.110146022 +0000 UTC m=+1719.001869860" observedRunningTime="2026-02-17 15:44:20.497062128 +0000 UTC m=+1722.388785966" watchObservedRunningTime="2026-02-17 15:44:20.524262583 +0000 UTC m=+1722.415986401" Feb 17 15:44:20.534041 master-0 kubenswrapper[26425]: I0217 15:44:20.528754 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd" podStartSLOduration=5.399423668 podStartE2EDuration="23.528739191s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:43:58.981418753 +0000 UTC m=+1700.873142571" lastFinishedPulling="2026-02-17 15:44:17.110734286 +0000 UTC m=+1719.002458094" observedRunningTime="2026-02-17 15:44:20.525886782 +0000 UTC m=+1722.417610610" watchObservedRunningTime="2026-02-17 15:44:20.528739191 +0000 UTC m=+1722.420463009" Feb 17 15:44:20.576476 master-0 kubenswrapper[26425]: I0217 15:44:20.576393 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg" podStartSLOduration=5.65114886 podStartE2EDuration="23.576374888s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:44:00.54507502 +0000 UTC m=+1702.436798838" lastFinishedPulling="2026-02-17 15:44:18.470301038 +0000 UTC m=+1720.362024866" observedRunningTime="2026-02-17 15:44:20.555477515 +0000 UTC m=+1722.447201343" watchObservedRunningTime="2026-02-17 15:44:20.576374888 +0000 UTC m=+1722.468098706" Feb 17 15:44:20.618503 master-0 kubenswrapper[26425]: I0217 15:44:20.616768 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr" podStartSLOduration=7.500789464 podStartE2EDuration="23.61674474s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:43:59.953674438 
+0000 UTC m=+1701.845398256" lastFinishedPulling="2026-02-17 15:44:16.069629714 +0000 UTC m=+1717.961353532" observedRunningTime="2026-02-17 15:44:20.578824957 +0000 UTC m=+1722.470548805" watchObservedRunningTime="2026-02-17 15:44:20.61674474 +0000 UTC m=+1722.508468558" Feb 17 15:44:20.635634 master-0 kubenswrapper[26425]: I0217 15:44:20.634791 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hqlr5" podStartSLOduration=4.751024926 podStartE2EDuration="22.634772224s" podCreationTimestamp="2026-02-17 15:43:58 +0000 UTC" firstStartedPulling="2026-02-17 15:44:00.512480666 +0000 UTC m=+1702.404204484" lastFinishedPulling="2026-02-17 15:44:18.396227924 +0000 UTC m=+1720.287951782" observedRunningTime="2026-02-17 15:44:20.606836531 +0000 UTC m=+1722.498560359" watchObservedRunningTime="2026-02-17 15:44:20.634772224 +0000 UTC m=+1722.526496042" Feb 17 15:44:20.652212 master-0 kubenswrapper[26425]: I0217 15:44:20.652135 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-2vx66" podStartSLOduration=4.687375373 podStartE2EDuration="22.652114292s" podCreationTimestamp="2026-02-17 15:43:58 +0000 UTC" firstStartedPulling="2026-02-17 15:44:00.513779967 +0000 UTC m=+1702.405503785" lastFinishedPulling="2026-02-17 15:44:18.478518886 +0000 UTC m=+1720.370242704" observedRunningTime="2026-02-17 15:44:20.636823614 +0000 UTC m=+1722.528547442" watchObservedRunningTime="2026-02-17 15:44:20.652114292 +0000 UTC m=+1722.543838110" Feb 17 15:44:20.677505 master-0 kubenswrapper[26425]: I0217 15:44:20.675457 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2td54" podStartSLOduration=6.50115438 podStartE2EDuration="23.675434823s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 
15:43:59.937018317 +0000 UTC m=+1701.828742135" lastFinishedPulling="2026-02-17 15:44:17.11129876 +0000 UTC m=+1719.003022578" observedRunningTime="2026-02-17 15:44:20.668002605 +0000 UTC m=+1722.559726433" watchObservedRunningTime="2026-02-17 15:44:20.675434823 +0000 UTC m=+1722.567158641" Feb 17 15:44:23.445703 master-0 kubenswrapper[26425]: I0217 15:44:23.445646 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww" event={"ID":"d6435656-9d1f-4de0-bec9-62942d041759","Type":"ContainerStarted","Data":"532bf6dffcf1a43e16bebd7492fbbaaf927c3ae4aa84ba498ad0c0794591c630"} Feb 17 15:44:23.447250 master-0 kubenswrapper[26425]: I0217 15:44:23.447223 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww" Feb 17 15:44:23.468506 master-0 kubenswrapper[26425]: I0217 15:44:23.466766 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww" podStartSLOduration=22.459266214 podStartE2EDuration="26.466747986s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:44:18.386325335 +0000 UTC m=+1720.278049153" lastFinishedPulling="2026-02-17 15:44:22.393807107 +0000 UTC m=+1724.285530925" observedRunningTime="2026-02-17 15:44:23.461148981 +0000 UTC m=+1725.352872829" watchObservedRunningTime="2026-02-17 15:44:23.466747986 +0000 UTC m=+1725.358471804" Feb 17 15:44:27.937655 master-0 kubenswrapper[26425]: I0217 15:44:27.937601 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8" Feb 17 15:44:27.952604 master-0 kubenswrapper[26425]: I0217 15:44:27.952519 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd" Feb 17 15:44:28.014614 master-0 kubenswrapper[26425]: I0217 15:44:28.014546 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f" Feb 17 15:44:28.053385 master-0 kubenswrapper[26425]: I0217 15:44:28.053331 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn" Feb 17 15:44:28.127064 master-0 kubenswrapper[26425]: I0217 15:44:28.127013 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp" Feb 17 15:44:28.146810 master-0 kubenswrapper[26425]: I0217 15:44:28.146757 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p" Feb 17 15:44:28.226471 master-0 kubenswrapper[26425]: I0217 15:44:28.226326 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9" Feb 17 15:44:28.282101 master-0 kubenswrapper[26425]: I0217 15:44:28.282042 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn" Feb 17 15:44:28.362312 master-0 kubenswrapper[26425]: I0217 15:44:28.362245 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9" Feb 17 15:44:28.380808 master-0 kubenswrapper[26425]: I0217 15:44:28.380718 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr" Feb 17 15:44:28.388040 master-0 kubenswrapper[26425]: I0217 15:44:28.387966 26425 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2td54" Feb 17 15:44:28.418718 master-0 kubenswrapper[26425]: I0217 15:44:28.418659 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6" Feb 17 15:44:28.737986 master-0 kubenswrapper[26425]: I0217 15:44:28.737855 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67" Feb 17 15:44:28.795119 master-0 kubenswrapper[26425]: I0217 15:44:28.795040 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x" Feb 17 15:44:28.804743 master-0 kubenswrapper[26425]: I0217 15:44:28.804668 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg" Feb 17 15:44:28.824057 master-0 kubenswrapper[26425]: I0217 15:44:28.823969 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b" Feb 17 15:44:28.854964 master-0 kubenswrapper[26425]: I0217 15:44:28.854896 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-zdksg" Feb 17 15:44:28.902261 master-0 kubenswrapper[26425]: I0217 15:44:28.902189 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-2vx66" Feb 17 15:44:28.922910 master-0 kubenswrapper[26425]: I0217 15:44:28.922862 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27" Feb 17 15:44:30.374883 
master-0 kubenswrapper[26425]: I0217 15:44:30.374811 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn\" (UID: \"ae75ffb2-1631-4a5d-af03-4421c2d000a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" Feb 17 15:44:30.380248 master-0 kubenswrapper[26425]: I0217 15:44:30.380194 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae75ffb2-1631-4a5d-af03-4421c2d000a1-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn\" (UID: \"ae75ffb2-1631-4a5d-af03-4421c2d000a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" Feb 17 15:44:30.558616 master-0 kubenswrapper[26425]: I0217 15:44:30.558536 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" Feb 17 15:44:30.700192 master-0 kubenswrapper[26425]: I0217 15:44:30.700120 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:44:30.700402 master-0 kubenswrapper[26425]: I0217 15:44:30.700302 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:44:30.703564 master-0 kubenswrapper[26425]: I0217 15:44:30.703521 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:44:30.705051 master-0 kubenswrapper[26425]: I0217 15:44:30.704937 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eb510143-0788-4676-91db-626e861a0b5c-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-98qgl\" (UID: \"eb510143-0788-4676-91db-626e861a0b5c\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:44:30.796667 master-0 kubenswrapper[26425]: I0217 
15:44:30.796194 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:44:31.072637 master-0 kubenswrapper[26425]: I0217 15:44:31.072562 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn"] Feb 17 15:44:31.079687 master-0 kubenswrapper[26425]: W0217 15:44:31.079621 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae75ffb2_1631_4a5d_af03_4421c2d000a1.slice/crio-0a94f12ad548bafa20646d5fbad47f195943e4b792c4de8f9364d9026a026632 WatchSource:0}: Error finding container 0a94f12ad548bafa20646d5fbad47f195943e4b792c4de8f9364d9026a026632: Status 404 returned error can't find the container with id 0a94f12ad548bafa20646d5fbad47f195943e4b792c4de8f9364d9026a026632 Feb 17 15:44:31.271727 master-0 kubenswrapper[26425]: I0217 15:44:31.271637 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl"] Feb 17 15:44:31.273052 master-0 kubenswrapper[26425]: W0217 15:44:31.272995 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb510143_0788_4676_91db_626e861a0b5c.slice/crio-975baea21df2713409391416c60a308cb38ec054f432eb2cb22d6b1cee77a808 WatchSource:0}: Error finding container 975baea21df2713409391416c60a308cb38ec054f432eb2cb22d6b1cee77a808: Status 404 returned error can't find the container with id 975baea21df2713409391416c60a308cb38ec054f432eb2cb22d6b1cee77a808 Feb 17 15:44:31.550421 master-0 kubenswrapper[26425]: I0217 15:44:31.550274 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" 
event={"ID":"ae75ffb2-1631-4a5d-af03-4421c2d000a1","Type":"ContainerStarted","Data":"0a94f12ad548bafa20646d5fbad47f195943e4b792c4de8f9364d9026a026632"} Feb 17 15:44:31.552348 master-0 kubenswrapper[26425]: I0217 15:44:31.552305 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" event={"ID":"eb510143-0788-4676-91db-626e861a0b5c","Type":"ContainerStarted","Data":"70c7bac8bb3a06155a619fd2878ee62de772981215c1743dbc79ac798221e2df"} Feb 17 15:44:31.552348 master-0 kubenswrapper[26425]: I0217 15:44:31.552339 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" event={"ID":"eb510143-0788-4676-91db-626e861a0b5c","Type":"ContainerStarted","Data":"975baea21df2713409391416c60a308cb38ec054f432eb2cb22d6b1cee77a808"} Feb 17 15:44:31.553697 master-0 kubenswrapper[26425]: I0217 15:44:31.553655 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:44:31.588536 master-0 kubenswrapper[26425]: I0217 15:44:31.588424 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" podStartSLOduration=33.588397767000004 podStartE2EDuration="33.588397767s" podCreationTimestamp="2026-02-17 15:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:44:31.580077907 +0000 UTC m=+1733.471801755" watchObservedRunningTime="2026-02-17 15:44:31.588397767 +0000 UTC m=+1733.480121595" Feb 17 15:44:33.577993 master-0 kubenswrapper[26425]: I0217 15:44:33.577810 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" 
event={"ID":"ae75ffb2-1631-4a5d-af03-4421c2d000a1","Type":"ContainerStarted","Data":"19117c33d2be4a7fa1947f8961d552a7adfa3bed958d30d15c9c6b964bf6b2ac"} Feb 17 15:44:33.649706 master-0 kubenswrapper[26425]: I0217 15:44:33.649596 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" podStartSLOduration=34.60470849 podStartE2EDuration="36.649565036s" podCreationTimestamp="2026-02-17 15:43:57 +0000 UTC" firstStartedPulling="2026-02-17 15:44:31.083872817 +0000 UTC m=+1732.975596655" lastFinishedPulling="2026-02-17 15:44:33.128729383 +0000 UTC m=+1735.020453201" observedRunningTime="2026-02-17 15:44:33.631025129 +0000 UTC m=+1735.522748987" watchObservedRunningTime="2026-02-17 15:44:33.649565036 +0000 UTC m=+1735.541288894" Feb 17 15:44:33.943187 master-0 kubenswrapper[26425]: I0217 15:44:33.943078 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww" Feb 17 15:44:34.586532 master-0 kubenswrapper[26425]: I0217 15:44:34.586427 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" Feb 17 15:44:40.565983 master-0 kubenswrapper[26425]: I0217 15:44:40.565891 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn" Feb 17 15:44:40.809324 master-0 kubenswrapper[26425]: I0217 15:44:40.807875 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl" Feb 17 15:45:00.194215 master-0 kubenswrapper[26425]: I0217 15:45:00.194134 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt"] Feb 17 
15:45:00.198521 master-0 kubenswrapper[26425]: I0217 15:45:00.195529 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" Feb 17 15:45:00.198521 master-0 kubenswrapper[26425]: I0217 15:45:00.197856 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-fqc4f" Feb 17 15:45:00.198521 master-0 kubenswrapper[26425]: I0217 15:45:00.198337 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 15:45:00.217342 master-0 kubenswrapper[26425]: I0217 15:45:00.217264 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88898\" (UniqueName: \"kubernetes.io/projected/5a33ad9c-d8db-4b65-9d49-819d66da70e6-kube-api-access-88898\") pod \"collect-profiles-29522385-7rwjt\" (UID: \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" Feb 17 15:45:00.217577 master-0 kubenswrapper[26425]: I0217 15:45:00.217530 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a33ad9c-d8db-4b65-9d49-819d66da70e6-config-volume\") pod \"collect-profiles-29522385-7rwjt\" (UID: \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" Feb 17 15:45:00.217792 master-0 kubenswrapper[26425]: I0217 15:45:00.217658 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a33ad9c-d8db-4b65-9d49-819d66da70e6-secret-volume\") pod \"collect-profiles-29522385-7rwjt\" (UID: \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" Feb 17 15:45:00.231679 master-0 kubenswrapper[26425]: I0217 15:45:00.231612 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt"] Feb 17 15:45:00.319315 master-0 kubenswrapper[26425]: I0217 15:45:00.319241 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88898\" (UniqueName: \"kubernetes.io/projected/5a33ad9c-d8db-4b65-9d49-819d66da70e6-kube-api-access-88898\") pod \"collect-profiles-29522385-7rwjt\" (UID: \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" Feb 17 15:45:00.319531 master-0 kubenswrapper[26425]: I0217 15:45:00.319511 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a33ad9c-d8db-4b65-9d49-819d66da70e6-config-volume\") pod \"collect-profiles-29522385-7rwjt\" (UID: \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" Feb 17 15:45:00.319632 master-0 kubenswrapper[26425]: I0217 15:45:00.319540 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a33ad9c-d8db-4b65-9d49-819d66da70e6-secret-volume\") pod \"collect-profiles-29522385-7rwjt\" (UID: \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" Feb 17 15:45:00.320708 master-0 kubenswrapper[26425]: I0217 15:45:00.320569 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a33ad9c-d8db-4b65-9d49-819d66da70e6-config-volume\") pod \"collect-profiles-29522385-7rwjt\" (UID: \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" Feb 17 15:45:00.323329 master-0 kubenswrapper[26425]: I0217 15:45:00.323297 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a33ad9c-d8db-4b65-9d49-819d66da70e6-secret-volume\") pod \"collect-profiles-29522385-7rwjt\" (UID: \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" Feb 17 15:45:00.338277 master-0 kubenswrapper[26425]: I0217 15:45:00.338184 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88898\" (UniqueName: \"kubernetes.io/projected/5a33ad9c-d8db-4b65-9d49-819d66da70e6-kube-api-access-88898\") pod \"collect-profiles-29522385-7rwjt\" (UID: \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" Feb 17 15:45:00.527078 master-0 kubenswrapper[26425]: I0217 15:45:00.526900 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" Feb 17 15:45:01.155380 master-0 kubenswrapper[26425]: I0217 15:45:01.155257 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt"] Feb 17 15:45:01.939515 master-0 kubenswrapper[26425]: I0217 15:45:01.939444 26425 generic.go:334] "Generic (PLEG): container finished" podID="5a33ad9c-d8db-4b65-9d49-819d66da70e6" containerID="231e5bbef5707abb0a1ee2c5584ae72d4f99e70cbee97620d0a34cb93f3d86ff" exitCode=0 Feb 17 15:45:01.939515 master-0 kubenswrapper[26425]: I0217 15:45:01.939509 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" event={"ID":"5a33ad9c-d8db-4b65-9d49-819d66da70e6","Type":"ContainerDied","Data":"231e5bbef5707abb0a1ee2c5584ae72d4f99e70cbee97620d0a34cb93f3d86ff"} Feb 17 15:45:01.940296 master-0 kubenswrapper[26425]: I0217 15:45:01.939534 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" event={"ID":"5a33ad9c-d8db-4b65-9d49-819d66da70e6","Type":"ContainerStarted","Data":"fcc30db5f3ab090b0847a0afc9bf98e27375cbe77364a7248f735f009bdaeddf"} Feb 17 15:45:03.332199 master-0 kubenswrapper[26425]: I0217 15:45:03.332133 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" Feb 17 15:45:03.424725 master-0 kubenswrapper[26425]: I0217 15:45:03.424656 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a33ad9c-d8db-4b65-9d49-819d66da70e6-secret-volume\") pod \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\" (UID: \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\") " Feb 17 15:45:03.424725 master-0 kubenswrapper[26425]: I0217 15:45:03.424737 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a33ad9c-d8db-4b65-9d49-819d66da70e6-config-volume\") pod \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\" (UID: \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\") " Feb 17 15:45:03.425039 master-0 kubenswrapper[26425]: I0217 15:45:03.424939 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88898\" (UniqueName: \"kubernetes.io/projected/5a33ad9c-d8db-4b65-9d49-819d66da70e6-kube-api-access-88898\") pod \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\" (UID: \"5a33ad9c-d8db-4b65-9d49-819d66da70e6\") " Feb 17 15:45:03.425283 master-0 kubenswrapper[26425]: I0217 15:45:03.425244 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a33ad9c-d8db-4b65-9d49-819d66da70e6-config-volume" (OuterVolumeSpecName: "config-volume") pod "5a33ad9c-d8db-4b65-9d49-819d66da70e6" (UID: "5a33ad9c-d8db-4b65-9d49-819d66da70e6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:45:03.425819 master-0 kubenswrapper[26425]: I0217 15:45:03.425787 26425 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a33ad9c-d8db-4b65-9d49-819d66da70e6-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 17 15:45:03.427875 master-0 kubenswrapper[26425]: I0217 15:45:03.427824 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a33ad9c-d8db-4b65-9d49-819d66da70e6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5a33ad9c-d8db-4b65-9d49-819d66da70e6" (UID: "5a33ad9c-d8db-4b65-9d49-819d66da70e6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:45:03.428841 master-0 kubenswrapper[26425]: I0217 15:45:03.428807 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a33ad9c-d8db-4b65-9d49-819d66da70e6-kube-api-access-88898" (OuterVolumeSpecName: "kube-api-access-88898") pod "5a33ad9c-d8db-4b65-9d49-819d66da70e6" (UID: "5a33ad9c-d8db-4b65-9d49-819d66da70e6"). InnerVolumeSpecName "kube-api-access-88898". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:45:03.528058 master-0 kubenswrapper[26425]: I0217 15:45:03.528000 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88898\" (UniqueName: \"kubernetes.io/projected/5a33ad9c-d8db-4b65-9d49-819d66da70e6-kube-api-access-88898\") on node \"master-0\" DevicePath \"\"" Feb 17 15:45:03.528058 master-0 kubenswrapper[26425]: I0217 15:45:03.528044 26425 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a33ad9c-d8db-4b65-9d49-819d66da70e6-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 17 15:45:03.964755 master-0 kubenswrapper[26425]: I0217 15:45:03.964643 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" event={"ID":"5a33ad9c-d8db-4b65-9d49-819d66da70e6","Type":"ContainerDied","Data":"fcc30db5f3ab090b0847a0afc9bf98e27375cbe77364a7248f735f009bdaeddf"} Feb 17 15:45:03.964755 master-0 kubenswrapper[26425]: I0217 15:45:03.964694 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcc30db5f3ab090b0847a0afc9bf98e27375cbe77364a7248f735f009bdaeddf" Feb 17 15:45:03.965388 master-0 kubenswrapper[26425]: I0217 15:45:03.964767 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt" Feb 17 15:45:04.801717 master-0 kubenswrapper[26425]: I0217 15:45:04.801575 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h"] Feb 17 15:45:04.812364 master-0 kubenswrapper[26425]: I0217 15:45:04.812170 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h"] Feb 17 15:45:06.419683 master-0 kubenswrapper[26425]: I0217 15:45:06.419613 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a162205-f111-49b4-9f46-0b40b6184336" path="/var/lib/kubelet/pods/2a162205-f111-49b4-9f46-0b40b6184336/volumes" Feb 17 15:45:20.282474 master-0 kubenswrapper[26425]: I0217 15:45:20.282186 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-tpv9d"] Feb 17 15:45:20.283066 master-0 kubenswrapper[26425]: E0217 15:45:20.282691 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a33ad9c-d8db-4b65-9d49-819d66da70e6" containerName="collect-profiles" Feb 17 15:45:20.283066 master-0 kubenswrapper[26425]: I0217 15:45:20.282707 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a33ad9c-d8db-4b65-9d49-819d66da70e6" containerName="collect-profiles" Feb 17 15:45:20.283066 master-0 kubenswrapper[26425]: I0217 15:45:20.282882 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a33ad9c-d8db-4b65-9d49-819d66da70e6" containerName="collect-profiles" Feb 17 15:45:20.290777 master-0 kubenswrapper[26425]: I0217 15:45:20.284606 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-tpv9d" Feb 17 15:45:20.291569 master-0 kubenswrapper[26425]: I0217 15:45:20.291440 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 17 15:45:20.295473 master-0 kubenswrapper[26425]: I0217 15:45:20.291447 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 17 15:45:20.295473 master-0 kubenswrapper[26425]: I0217 15:45:20.293617 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1304b6d0-6de1-4f39-a55d-bf89c4b41d08-config\") pod \"dnsmasq-dns-5c7b6fb887-tpv9d\" (UID: \"1304b6d0-6de1-4f39-a55d-bf89c4b41d08\") " pod="openstack/dnsmasq-dns-5c7b6fb887-tpv9d" Feb 17 15:45:20.295473 master-0 kubenswrapper[26425]: I0217 15:45:20.293690 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zc52\" (UniqueName: \"kubernetes.io/projected/1304b6d0-6de1-4f39-a55d-bf89c4b41d08-kube-api-access-8zc52\") pod \"dnsmasq-dns-5c7b6fb887-tpv9d\" (UID: \"1304b6d0-6de1-4f39-a55d-bf89c4b41d08\") " pod="openstack/dnsmasq-dns-5c7b6fb887-tpv9d" Feb 17 15:45:20.302139 master-0 kubenswrapper[26425]: I0217 15:45:20.300710 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 17 15:45:20.338543 master-0 kubenswrapper[26425]: I0217 15:45:20.334374 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-tpv9d"] Feb 17 15:45:20.388619 master-0 kubenswrapper[26425]: I0217 15:45:20.388552 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d78499c-p9rp4"] Feb 17 15:45:20.390335 master-0 kubenswrapper[26425]: I0217 15:45:20.390306 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-p9rp4" Feb 17 15:45:20.392636 master-0 kubenswrapper[26425]: I0217 15:45:20.392592 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 17 15:45:20.404692 master-0 kubenswrapper[26425]: I0217 15:45:20.404439 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1304b6d0-6de1-4f39-a55d-bf89c4b41d08-config\") pod \"dnsmasq-dns-5c7b6fb887-tpv9d\" (UID: \"1304b6d0-6de1-4f39-a55d-bf89c4b41d08\") " pod="openstack/dnsmasq-dns-5c7b6fb887-tpv9d" Feb 17 15:45:20.404874 master-0 kubenswrapper[26425]: I0217 15:45:20.404700 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zc52\" (UniqueName: \"kubernetes.io/projected/1304b6d0-6de1-4f39-a55d-bf89c4b41d08-kube-api-access-8zc52\") pod \"dnsmasq-dns-5c7b6fb887-tpv9d\" (UID: \"1304b6d0-6de1-4f39-a55d-bf89c4b41d08\") " pod="openstack/dnsmasq-dns-5c7b6fb887-tpv9d" Feb 17 15:45:20.404874 master-0 kubenswrapper[26425]: I0217 15:45:20.404796 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb161df9-6094-4ed3-8a36-06a828cd1674-config\") pod \"dnsmasq-dns-7d78499c-p9rp4\" (UID: \"cb161df9-6094-4ed3-8a36-06a828cd1674\") " pod="openstack/dnsmasq-dns-7d78499c-p9rp4" Feb 17 15:45:20.404874 master-0 kubenswrapper[26425]: I0217 15:45:20.404820 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shjkj\" (UniqueName: \"kubernetes.io/projected/cb161df9-6094-4ed3-8a36-06a828cd1674-kube-api-access-shjkj\") pod \"dnsmasq-dns-7d78499c-p9rp4\" (UID: \"cb161df9-6094-4ed3-8a36-06a828cd1674\") " pod="openstack/dnsmasq-dns-7d78499c-p9rp4" Feb 17 15:45:20.405047 master-0 kubenswrapper[26425]: I0217 15:45:20.404899 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb161df9-6094-4ed3-8a36-06a828cd1674-dns-svc\") pod \"dnsmasq-dns-7d78499c-p9rp4\" (UID: \"cb161df9-6094-4ed3-8a36-06a828cd1674\") " pod="openstack/dnsmasq-dns-7d78499c-p9rp4" Feb 17 15:45:20.405799 master-0 kubenswrapper[26425]: I0217 15:45:20.405771 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1304b6d0-6de1-4f39-a55d-bf89c4b41d08-config\") pod \"dnsmasq-dns-5c7b6fb887-tpv9d\" (UID: \"1304b6d0-6de1-4f39-a55d-bf89c4b41d08\") " pod="openstack/dnsmasq-dns-5c7b6fb887-tpv9d" Feb 17 15:45:20.424558 master-0 kubenswrapper[26425]: I0217 15:45:20.424147 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-p9rp4"] Feb 17 15:45:20.434349 master-0 kubenswrapper[26425]: I0217 15:45:20.434303 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zc52\" (UniqueName: \"kubernetes.io/projected/1304b6d0-6de1-4f39-a55d-bf89c4b41d08-kube-api-access-8zc52\") pod \"dnsmasq-dns-5c7b6fb887-tpv9d\" (UID: \"1304b6d0-6de1-4f39-a55d-bf89c4b41d08\") " pod="openstack/dnsmasq-dns-5c7b6fb887-tpv9d" Feb 17 15:45:20.505859 master-0 kubenswrapper[26425]: I0217 15:45:20.505780 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb161df9-6094-4ed3-8a36-06a828cd1674-config\") pod \"dnsmasq-dns-7d78499c-p9rp4\" (UID: \"cb161df9-6094-4ed3-8a36-06a828cd1674\") " pod="openstack/dnsmasq-dns-7d78499c-p9rp4" Feb 17 15:45:20.505859 master-0 kubenswrapper[26425]: I0217 15:45:20.505852 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shjkj\" (UniqueName: \"kubernetes.io/projected/cb161df9-6094-4ed3-8a36-06a828cd1674-kube-api-access-shjkj\") pod \"dnsmasq-dns-7d78499c-p9rp4\" (UID: 
\"cb161df9-6094-4ed3-8a36-06a828cd1674\") " pod="openstack/dnsmasq-dns-7d78499c-p9rp4" Feb 17 15:45:20.505859 master-0 kubenswrapper[26425]: I0217 15:45:20.505871 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb161df9-6094-4ed3-8a36-06a828cd1674-dns-svc\") pod \"dnsmasq-dns-7d78499c-p9rp4\" (UID: \"cb161df9-6094-4ed3-8a36-06a828cd1674\") " pod="openstack/dnsmasq-dns-7d78499c-p9rp4" Feb 17 15:45:20.507491 master-0 kubenswrapper[26425]: I0217 15:45:20.507435 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb161df9-6094-4ed3-8a36-06a828cd1674-config\") pod \"dnsmasq-dns-7d78499c-p9rp4\" (UID: \"cb161df9-6094-4ed3-8a36-06a828cd1674\") " pod="openstack/dnsmasq-dns-7d78499c-p9rp4" Feb 17 15:45:20.508129 master-0 kubenswrapper[26425]: I0217 15:45:20.508072 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb161df9-6094-4ed3-8a36-06a828cd1674-dns-svc\") pod \"dnsmasq-dns-7d78499c-p9rp4\" (UID: \"cb161df9-6094-4ed3-8a36-06a828cd1674\") " pod="openstack/dnsmasq-dns-7d78499c-p9rp4" Feb 17 15:45:20.528357 master-0 kubenswrapper[26425]: I0217 15:45:20.528295 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shjkj\" (UniqueName: \"kubernetes.io/projected/cb161df9-6094-4ed3-8a36-06a828cd1674-kube-api-access-shjkj\") pod \"dnsmasq-dns-7d78499c-p9rp4\" (UID: \"cb161df9-6094-4ed3-8a36-06a828cd1674\") " pod="openstack/dnsmasq-dns-7d78499c-p9rp4" Feb 17 15:45:20.626150 master-0 kubenswrapper[26425]: I0217 15:45:20.626012 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-tpv9d" Feb 17 15:45:20.714649 master-0 kubenswrapper[26425]: I0217 15:45:20.712882 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-p9rp4" Feb 17 15:45:21.066545 master-0 kubenswrapper[26425]: I0217 15:45:21.066487 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-tpv9d"] Feb 17 15:45:21.074105 master-0 kubenswrapper[26425]: W0217 15:45:21.074034 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1304b6d0_6de1_4f39_a55d_bf89c4b41d08.slice/crio-c9e084988983ff3787b30566e9c15707f3d640ffde96662c74e2e45cedeb67f1 WatchSource:0}: Error finding container c9e084988983ff3787b30566e9c15707f3d640ffde96662c74e2e45cedeb67f1: Status 404 returned error can't find the container with id c9e084988983ff3787b30566e9c15707f3d640ffde96662c74e2e45cedeb67f1 Feb 17 15:45:21.174561 master-0 kubenswrapper[26425]: I0217 15:45:21.173666 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-tpv9d" event={"ID":"1304b6d0-6de1-4f39-a55d-bf89c4b41d08","Type":"ContainerStarted","Data":"c9e084988983ff3787b30566e9c15707f3d640ffde96662c74e2e45cedeb67f1"} Feb 17 15:45:21.243615 master-0 kubenswrapper[26425]: I0217 15:45:21.243557 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-p9rp4"] Feb 17 15:45:21.245307 master-0 kubenswrapper[26425]: W0217 15:45:21.245257 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb161df9_6094_4ed3_8a36_06a828cd1674.slice/crio-3658b8f77ad875aa95dcfa7ef732b93073a01cb0c003f02a862d0fa6f5e17832 WatchSource:0}: Error finding container 3658b8f77ad875aa95dcfa7ef732b93073a01cb0c003f02a862d0fa6f5e17832: Status 404 returned error can't find the container with id 3658b8f77ad875aa95dcfa7ef732b93073a01cb0c003f02a862d0fa6f5e17832 Feb 17 15:45:21.569627 master-0 kubenswrapper[26425]: I0217 15:45:21.569426 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5c7b6fb887-tpv9d"] Feb 17 15:45:21.601477 master-0 kubenswrapper[26425]: I0217 15:45:21.601396 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75b66f9649-znfnp"] Feb 17 15:45:21.605360 master-0 kubenswrapper[26425]: I0217 15:45:21.605312 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b66f9649-znfnp" Feb 17 15:45:21.614121 master-0 kubenswrapper[26425]: I0217 15:45:21.614065 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b66f9649-znfnp"] Feb 17 15:45:21.663774 master-0 kubenswrapper[26425]: I0217 15:45:21.662698 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7825929-3b0c-402f-9c91-3f6a0e438ea3-config\") pod \"dnsmasq-dns-75b66f9649-znfnp\" (UID: \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\") " pod="openstack/dnsmasq-dns-75b66f9649-znfnp" Feb 17 15:45:21.663774 master-0 kubenswrapper[26425]: I0217 15:45:21.662812 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7825929-3b0c-402f-9c91-3f6a0e438ea3-dns-svc\") pod \"dnsmasq-dns-75b66f9649-znfnp\" (UID: \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\") " pod="openstack/dnsmasq-dns-75b66f9649-znfnp" Feb 17 15:45:21.663774 master-0 kubenswrapper[26425]: I0217 15:45:21.662869 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7qhm\" (UniqueName: \"kubernetes.io/projected/f7825929-3b0c-402f-9c91-3f6a0e438ea3-kube-api-access-n7qhm\") pod \"dnsmasq-dns-75b66f9649-znfnp\" (UID: \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\") " pod="openstack/dnsmasq-dns-75b66f9649-znfnp" Feb 17 15:45:21.768515 master-0 kubenswrapper[26425]: I0217 15:45:21.764936 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-n7qhm\" (UniqueName: \"kubernetes.io/projected/f7825929-3b0c-402f-9c91-3f6a0e438ea3-kube-api-access-n7qhm\") pod \"dnsmasq-dns-75b66f9649-znfnp\" (UID: \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\") " pod="openstack/dnsmasq-dns-75b66f9649-znfnp" Feb 17 15:45:21.768515 master-0 kubenswrapper[26425]: I0217 15:45:21.765046 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7825929-3b0c-402f-9c91-3f6a0e438ea3-config\") pod \"dnsmasq-dns-75b66f9649-znfnp\" (UID: \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\") " pod="openstack/dnsmasq-dns-75b66f9649-znfnp" Feb 17 15:45:21.768515 master-0 kubenswrapper[26425]: I0217 15:45:21.765131 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7825929-3b0c-402f-9c91-3f6a0e438ea3-dns-svc\") pod \"dnsmasq-dns-75b66f9649-znfnp\" (UID: \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\") " pod="openstack/dnsmasq-dns-75b66f9649-znfnp" Feb 17 15:45:21.768515 master-0 kubenswrapper[26425]: I0217 15:45:21.766498 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7825929-3b0c-402f-9c91-3f6a0e438ea3-dns-svc\") pod \"dnsmasq-dns-75b66f9649-znfnp\" (UID: \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\") " pod="openstack/dnsmasq-dns-75b66f9649-znfnp" Feb 17 15:45:21.768515 master-0 kubenswrapper[26425]: I0217 15:45:21.768112 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7825929-3b0c-402f-9c91-3f6a0e438ea3-config\") pod \"dnsmasq-dns-75b66f9649-znfnp\" (UID: \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\") " pod="openstack/dnsmasq-dns-75b66f9649-znfnp" Feb 17 15:45:21.796398 master-0 kubenswrapper[26425]: I0217 15:45:21.796311 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7qhm\" (UniqueName: 
\"kubernetes.io/projected/f7825929-3b0c-402f-9c91-3f6a0e438ea3-kube-api-access-n7qhm\") pod \"dnsmasq-dns-75b66f9649-znfnp\" (UID: \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\") " pod="openstack/dnsmasq-dns-75b66f9649-znfnp" Feb 17 15:45:21.946953 master-0 kubenswrapper[26425]: I0217 15:45:21.946268 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b66f9649-znfnp" Feb 17 15:45:22.192516 master-0 kubenswrapper[26425]: I0217 15:45:22.192372 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-p9rp4" event={"ID":"cb161df9-6094-4ed3-8a36-06a828cd1674","Type":"ContainerStarted","Data":"3658b8f77ad875aa95dcfa7ef732b93073a01cb0c003f02a862d0fa6f5e17832"} Feb 17 15:45:22.475272 master-0 kubenswrapper[26425]: I0217 15:45:22.474682 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b66f9649-znfnp"] Feb 17 15:45:22.498380 master-0 kubenswrapper[26425]: W0217 15:45:22.497717 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7825929_3b0c_402f_9c91_3f6a0e438ea3.slice/crio-832370e9734bf6636854f1e3a08dc66cca152b5982369bd1e35861ce6231079e WatchSource:0}: Error finding container 832370e9734bf6636854f1e3a08dc66cca152b5982369bd1e35861ce6231079e: Status 404 returned error can't find the container with id 832370e9734bf6636854f1e3a08dc66cca152b5982369bd1e35861ce6231079e Feb 17 15:45:22.506497 master-0 kubenswrapper[26425]: I0217 15:45:22.505932 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-p9rp4"] Feb 17 15:45:22.516335 master-0 kubenswrapper[26425]: I0217 15:45:22.516261 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-hdh27"] Feb 17 15:45:22.518380 master-0 kubenswrapper[26425]: I0217 15:45:22.518354 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:45:22.527478 master-0 kubenswrapper[26425]: I0217 15:45:22.524934 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-hdh27"] Feb 17 15:45:22.593525 master-0 kubenswrapper[26425]: I0217 15:45:22.589936 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxpkv\" (UniqueName: \"kubernetes.io/projected/a2122296-6151-4ec0-b71c-fd6ad516ffb4-kube-api-access-kxpkv\") pod \"dnsmasq-dns-6b98d7b55c-hdh27\" (UID: \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:45:22.593525 master-0 kubenswrapper[26425]: I0217 15:45:22.590198 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2122296-6151-4ec0-b71c-fd6ad516ffb4-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-hdh27\" (UID: \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:45:22.593525 master-0 kubenswrapper[26425]: I0217 15:45:22.590410 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2122296-6151-4ec0-b71c-fd6ad516ffb4-config\") pod \"dnsmasq-dns-6b98d7b55c-hdh27\" (UID: \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:45:22.697609 master-0 kubenswrapper[26425]: I0217 15:45:22.692595 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxpkv\" (UniqueName: \"kubernetes.io/projected/a2122296-6151-4ec0-b71c-fd6ad516ffb4-kube-api-access-kxpkv\") pod \"dnsmasq-dns-6b98d7b55c-hdh27\" (UID: \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:45:22.697609 master-0 kubenswrapper[26425]: I0217 15:45:22.692723 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2122296-6151-4ec0-b71c-fd6ad516ffb4-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-hdh27\" (UID: \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:45:22.697609 master-0 kubenswrapper[26425]: I0217 15:45:22.692800 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2122296-6151-4ec0-b71c-fd6ad516ffb4-config\") pod \"dnsmasq-dns-6b98d7b55c-hdh27\" (UID: \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:45:22.697609 master-0 kubenswrapper[26425]: I0217 15:45:22.694271 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2122296-6151-4ec0-b71c-fd6ad516ffb4-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-hdh27\" (UID: \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:45:22.697609 master-0 kubenswrapper[26425]: I0217 15:45:22.694393 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2122296-6151-4ec0-b71c-fd6ad516ffb4-config\") pod \"dnsmasq-dns-6b98d7b55c-hdh27\" (UID: \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:45:22.726608 master-0 kubenswrapper[26425]: I0217 15:45:22.726496 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxpkv\" (UniqueName: \"kubernetes.io/projected/a2122296-6151-4ec0-b71c-fd6ad516ffb4-kube-api-access-kxpkv\") pod \"dnsmasq-dns-6b98d7b55c-hdh27\" (UID: \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:45:22.848475 master-0 kubenswrapper[26425]: I0217 15:45:22.847888 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:45:23.233436 master-0 kubenswrapper[26425]: I0217 15:45:23.232764 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b66f9649-znfnp" event={"ID":"f7825929-3b0c-402f-9c91-3f6a0e438ea3","Type":"ContainerStarted","Data":"832370e9734bf6636854f1e3a08dc66cca152b5982369bd1e35861ce6231079e"} Feb 17 15:45:23.502391 master-0 kubenswrapper[26425]: I0217 15:45:23.502326 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-hdh27"] Feb 17 15:45:24.266647 master-0 kubenswrapper[26425]: I0217 15:45:24.266375 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" event={"ID":"a2122296-6151-4ec0-b71c-fd6ad516ffb4","Type":"ContainerStarted","Data":"790787ab3234f90798b9baceb04f169d2319af35ae6d632582202e48dc4b42d1"} Feb 17 15:45:25.779601 master-0 kubenswrapper[26425]: I0217 15:45:25.773723 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 15:45:25.779601 master-0 kubenswrapper[26425]: I0217 15:45:25.775222 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 15:45:25.782654 master-0 kubenswrapper[26425]: I0217 15:45:25.782582 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 15:45:25.782843 master-0 kubenswrapper[26425]: I0217 15:45:25.782805 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 15:45:25.783022 master-0 kubenswrapper[26425]: I0217 15:45:25.782996 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 15:45:25.783127 master-0 kubenswrapper[26425]: I0217 15:45:25.783104 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 15:45:25.783355 master-0 kubenswrapper[26425]: I0217 15:45:25.783310 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 15:45:25.783572 master-0 kubenswrapper[26425]: I0217 15:45:25.783536 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 15:45:25.793638 master-0 kubenswrapper[26425]: I0217 15:45:25.793593 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 15:45:25.917536 master-0 kubenswrapper[26425]: I0217 15:45:25.915335 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3dc68acf-40ce-41a7-8633-6f19a9382a89-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 15:45:25.917536 master-0 kubenswrapper[26425]: I0217 15:45:25.915613 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/3dc68acf-40ce-41a7-8633-6f19a9382a89-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 15:45:25.917536 master-0 kubenswrapper[26425]: I0217 15:45:25.915695 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e758de4e-c517-4fee-b541-38ade33945a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4466c186-cecf-490b-be6a-aa7a9df1b304\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 15:45:25.917536 master-0 kubenswrapper[26425]: I0217 15:45:25.915721 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3dc68acf-40ce-41a7-8633-6f19a9382a89-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 15:45:25.917536 master-0 kubenswrapper[26425]: I0217 15:45:25.915841 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3dc68acf-40ce-41a7-8633-6f19a9382a89-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 15:45:25.917536 master-0 kubenswrapper[26425]: I0217 15:45:25.916073 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3dc68acf-40ce-41a7-8633-6f19a9382a89-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 15:45:25.917536 master-0 kubenswrapper[26425]: I0217 15:45:25.916126 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3dc68acf-40ce-41a7-8633-6f19a9382a89-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:25.917536 master-0 kubenswrapper[26425]: I0217 15:45:25.916418 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3dc68acf-40ce-41a7-8633-6f19a9382a89-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:25.917536 master-0 kubenswrapper[26425]: I0217 15:45:25.916484 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3dc68acf-40ce-41a7-8633-6f19a9382a89-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:25.917536 master-0 kubenswrapper[26425]: I0217 15:45:25.916517 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3dc68acf-40ce-41a7-8633-6f19a9382a89-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:25.917536 master-0 kubenswrapper[26425]: I0217 15:45:25.916559 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-622xq\" (UniqueName: \"kubernetes.io/projected/3dc68acf-40ce-41a7-8633-6f19a9382a89-kube-api-access-622xq\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.018636 master-0 kubenswrapper[26425]: I0217 15:45:26.018549 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3dc68acf-40ce-41a7-8633-6f19a9382a89-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.018636 master-0 kubenswrapper[26425]: I0217 15:45:26.018630 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e758de4e-c517-4fee-b541-38ade33945a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4466c186-cecf-490b-be6a-aa7a9df1b304\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.018636 master-0 kubenswrapper[26425]: I0217 15:45:26.018647 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3dc68acf-40ce-41a7-8633-6f19a9382a89-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.019299 master-0 kubenswrapper[26425]: I0217 15:45:26.018673 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3dc68acf-40ce-41a7-8633-6f19a9382a89-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.019299 master-0 kubenswrapper[26425]: I0217 15:45:26.018717 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3dc68acf-40ce-41a7-8633-6f19a9382a89-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.019299 master-0 kubenswrapper[26425]: I0217 15:45:26.018733 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3dc68acf-40ce-41a7-8633-6f19a9382a89-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.019581 master-0 kubenswrapper[26425]: I0217 15:45:26.019538 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3dc68acf-40ce-41a7-8633-6f19a9382a89-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.019685 master-0 kubenswrapper[26425]: I0217 15:45:26.019553 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3dc68acf-40ce-41a7-8633-6f19a9382a89-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.019806 master-0 kubenswrapper[26425]: I0217 15:45:26.019778 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3dc68acf-40ce-41a7-8633-6f19a9382a89-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.019868 master-0 kubenswrapper[26425]: I0217 15:45:26.019816 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3dc68acf-40ce-41a7-8633-6f19a9382a89-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.019868 master-0 kubenswrapper[26425]: I0217 15:45:26.019851 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-622xq\" (UniqueName: \"kubernetes.io/projected/3dc68acf-40ce-41a7-8633-6f19a9382a89-kube-api-access-622xq\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.020035 master-0 kubenswrapper[26425]: I0217 15:45:26.019898 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3dc68acf-40ce-41a7-8633-6f19a9382a89-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.020152 master-0 kubenswrapper[26425]: I0217 15:45:26.020108 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3dc68acf-40ce-41a7-8633-6f19a9382a89-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.022902 master-0 kubenswrapper[26425]: I0217 15:45:26.022817 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3dc68acf-40ce-41a7-8633-6f19a9382a89-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.023090 master-0 kubenswrapper[26425]: I0217 15:45:26.023069 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3dc68acf-40ce-41a7-8633-6f19a9382a89-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.023756 master-0 kubenswrapper[26425]: I0217 15:45:26.023691 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3dc68acf-40ce-41a7-8633-6f19a9382a89-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.025706 master-0 kubenswrapper[26425]: I0217 15:45:26.025681 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3dc68acf-40ce-41a7-8633-6f19a9382a89-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.026022 master-0 kubenswrapper[26425]: I0217 15:45:26.026005 26425 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 15:45:26.026080 master-0 kubenswrapper[26425]: I0217 15:45:26.026029 26425 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e758de4e-c517-4fee-b541-38ade33945a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4466c186-cecf-490b-be6a-aa7a9df1b304\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/f7f6f638ca89912a050bb8333a35b60356f4736d2f11b4cdfc966974694d9683/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.039065 master-0 kubenswrapper[26425]: I0217 15:45:26.038441 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3dc68acf-40ce-41a7-8633-6f19a9382a89-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.039065 master-0 kubenswrapper[26425]: I0217 15:45:26.038768 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3dc68acf-40ce-41a7-8633-6f19a9382a89-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.039065 master-0 kubenswrapper[26425]: I0217 15:45:26.038804 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3dc68acf-40ce-41a7-8633-6f19a9382a89-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.043593 master-0 kubenswrapper[26425]: I0217 15:45:26.043564 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-622xq\" (UniqueName: \"kubernetes.io/projected/3dc68acf-40ce-41a7-8633-6f19a9382a89-kube-api-access-622xq\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:26.500394 master-0 kubenswrapper[26425]: I0217 15:45:26.463432 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 17 15:45:26.500394 master-0 kubenswrapper[26425]: I0217 15:45:26.477584 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 17 15:45:26.500394 master-0 kubenswrapper[26425]: I0217 15:45:26.482854 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 17 15:45:26.519481 master-0 kubenswrapper[26425]: I0217 15:45:26.517070 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 17 15:45:26.544483 master-0 kubenswrapper[26425]: I0217 15:45:26.540721 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 17 15:45:26.549492 master-0 kubenswrapper[26425]: I0217 15:45:26.549432 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 17 15:45:26.642724 master-0 kubenswrapper[26425]: I0217 15:45:26.641969 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 17 15:45:26.645655 master-0 kubenswrapper[26425]: I0217 15:45:26.645608 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.645854 master-0 kubenswrapper[26425]: I0217 15:45:26.645702 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0b111ae-7c7d-499a-a124-c0e76e2603a6-config-data\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.645854 master-0 kubenswrapper[26425]: I0217 15:45:26.645766 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0b111ae-7c7d-499a-a124-c0e76e2603a6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.645854 master-0 kubenswrapper[26425]: I0217 15:45:26.645798 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0b111ae-7c7d-499a-a124-c0e76e2603a6-kolla-config\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.646008 master-0 kubenswrapper[26425]: I0217 15:45:26.645962 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b111ae-7c7d-499a-a124-c0e76e2603a6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.646162 master-0 kubenswrapper[26425]: I0217 15:45:26.646129 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkp5w\" (UniqueName: \"kubernetes.io/projected/e0b111ae-7c7d-499a-a124-c0e76e2603a6-kube-api-access-xkp5w\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.649231 master-0 kubenswrapper[26425]: I0217 15:45:26.649177 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 17 15:45:26.649231 master-0 kubenswrapper[26425]: I0217 15:45:26.649225 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 17 15:45:26.649490 master-0 kubenswrapper[26425]: I0217 15:45:26.649475 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 17 15:45:26.649702 master-0 kubenswrapper[26425]: I0217 15:45:26.649615 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 17 15:45:26.649822 master-0 kubenswrapper[26425]: I0217 15:45:26.649735 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 17 15:45:26.649990 master-0 kubenswrapper[26425]: I0217 15:45:26.649930 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 17 15:45:26.681353 master-0 kubenswrapper[26425]: I0217 15:45:26.680324 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 17 15:45:26.749481 master-0 kubenswrapper[26425]: I0217 15:45:26.749394 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.749481 master-0 kubenswrapper[26425]: I0217 15:45:26.749480 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-47ff1353-8a7c-4230-885c-ac774bd86eb6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^74e3af4c-4088-41ae-85f4-fcdcf3c4720d\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.750359 master-0 kubenswrapper[26425]: I0217 15:45:26.749556 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6shxk\" (UniqueName: \"kubernetes.io/projected/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-kube-api-access-6shxk\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.750359 master-0 kubenswrapper[26425]: I0217 15:45:26.749628 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0b111ae-7c7d-499a-a124-c0e76e2603a6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.750359 master-0 kubenswrapper[26425]: I0217 15:45:26.749665 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0b111ae-7c7d-499a-a124-c0e76e2603a6-kolla-config\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.750359 master-0 kubenswrapper[26425]: I0217 15:45:26.749732 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.750359 master-0 kubenswrapper[26425]: I0217 15:45:26.749925 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b111ae-7c7d-499a-a124-c0e76e2603a6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.750359 master-0 kubenswrapper[26425]: I0217 15:45:26.749977 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-config-data\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.750359 master-0 kubenswrapper[26425]: I0217 15:45:26.750010 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.750359 master-0 kubenswrapper[26425]: I0217 15:45:26.750051 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.750359 master-0 kubenswrapper[26425]: I0217 15:45:26.750099 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.750359 master-0 kubenswrapper[26425]: I0217 15:45:26.750153 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkp5w\" (UniqueName: \"kubernetes.io/projected/e0b111ae-7c7d-499a-a124-c0e76e2603a6-kube-api-access-xkp5w\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.750877 master-0 kubenswrapper[26425]: I0217 15:45:26.750630 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0b111ae-7c7d-499a-a124-c0e76e2603a6-kolla-config\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.750877 master-0 kubenswrapper[26425]: I0217 15:45:26.750740 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.750877 master-0 kubenswrapper[26425]: I0217 15:45:26.750831 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.751050 master-0 kubenswrapper[26425]: I0217 15:45:26.750941 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.751050 master-0 kubenswrapper[26425]: I0217 15:45:26.750978 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0b111ae-7c7d-499a-a124-c0e76e2603a6-config-data\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.752288 master-0 kubenswrapper[26425]: I0217 15:45:26.751919 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0b111ae-7c7d-499a-a124-c0e76e2603a6-config-data\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.755641 master-0 kubenswrapper[26425]: I0217 15:45:26.755571 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0b111ae-7c7d-499a-a124-c0e76e2603a6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.757317 master-0 kubenswrapper[26425]: I0217 15:45:26.757276 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b111ae-7c7d-499a-a124-c0e76e2603a6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.766419 master-0 kubenswrapper[26425]: I0217 15:45:26.766352 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkp5w\" (UniqueName: \"kubernetes.io/projected/e0b111ae-7c7d-499a-a124-c0e76e2603a6-kube-api-access-xkp5w\") pod \"memcached-0\" (UID: \"e0b111ae-7c7d-499a-a124-c0e76e2603a6\") " pod="openstack/memcached-0"
Feb 17 15:45:26.852543 master-0 kubenswrapper[26425]: I0217 15:45:26.852436 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-config-data\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.852543 master-0 kubenswrapper[26425]: I0217 15:45:26.852528 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.852543 master-0 kubenswrapper[26425]: I0217 15:45:26.852556 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.853232 master-0 kubenswrapper[26425]: I0217 15:45:26.852618 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.853232 master-0 kubenswrapper[26425]: I0217 15:45:26.852681 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.853232 master-0 kubenswrapper[26425]: I0217 15:45:26.852709 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.853232 master-0 kubenswrapper[26425]: I0217 15:45:26.852748 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.853232 master-0 kubenswrapper[26425]: I0217 15:45:26.852775 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.853232 master-0 kubenswrapper[26425]: I0217 15:45:26.852794 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-47ff1353-8a7c-4230-885c-ac774bd86eb6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^74e3af4c-4088-41ae-85f4-fcdcf3c4720d\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.853232 master-0 kubenswrapper[26425]: I0217 15:45:26.852820 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6shxk\" (UniqueName: \"kubernetes.io/projected/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-kube-api-access-6shxk\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.853232 master-0 kubenswrapper[26425]: I0217 15:45:26.852844 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.854854 master-0 kubenswrapper[26425]: I0217 15:45:26.854819 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.855771 master-0 kubenswrapper[26425]: I0217 15:45:26.855707 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-config-data\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.856336 master-0 kubenswrapper[26425]: I0217 15:45:26.856307 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 17 15:45:26.856412 master-0 kubenswrapper[26425]: I0217 15:45:26.856371 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.856478 master-0 kubenswrapper[26425]: I0217 15:45:26.856432 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.858213 master-0 kubenswrapper[26425]: I0217 15:45:26.858048 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.859172 master-0 kubenswrapper[26425]: I0217 15:45:26.859119 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.859981 master-0 kubenswrapper[26425]: I0217 15:45:26.859847 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.861356 master-0 kubenswrapper[26425]: I0217 15:45:26.861316 26425 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 15:45:26.861425 master-0 kubenswrapper[26425]: I0217 15:45:26.861372 26425 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-47ff1353-8a7c-4230-885c-ac774bd86eb6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^74e3af4c-4088-41ae-85f4-fcdcf3c4720d\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/80ce81cb226471582f9136813e3e0fb04aa5a64c762aa72b383c471219b69c76/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.865810 master-0 kubenswrapper[26425]: I0217 15:45:26.865774 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.878780 master-0 kubenswrapper[26425]: I0217 15:45:26.878625 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6shxk\" (UniqueName: \"kubernetes.io/projected/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-kube-api-access-6shxk\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:26.886202 master-0 kubenswrapper[26425]: I0217 15:45:26.886145 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1f67d3cf-a7f4-4ead-9b78-4a247036b3d5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0"
Feb 17 15:45:27.650920 master-0 kubenswrapper[26425]: I0217 15:45:27.650859 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e758de4e-c517-4fee-b541-38ade33945a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4466c186-cecf-490b-be6a-aa7a9df1b304\") pod \"rabbitmq-cell1-server-0\" (UID: \"3dc68acf-40ce-41a7-8633-6f19a9382a89\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:27.906595 master-0 kubenswrapper[26425]: I0217 15:45:27.906529 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 17 15:45:27.908951 master-0 kubenswrapper[26425]: I0217 15:45:27.908841 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 17 15:45:27.918030 master-0 kubenswrapper[26425]: I0217 15:45:27.917950 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:45:27.919399 master-0 kubenswrapper[26425]: I0217 15:45:27.919365 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 17 15:45:27.919528 master-0 kubenswrapper[26425]: I0217 15:45:27.919481 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 17 15:45:27.919596 master-0 kubenswrapper[26425]: I0217 15:45:27.919535 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 17 15:45:27.932296 master-0 kubenswrapper[26425]: I0217 15:45:27.929156 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 17 15:45:27.978557 master-0 kubenswrapper[26425]: I0217 15:45:27.976580 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:27.978557 master-0 kubenswrapper[26425]: I0217 15:45:27.976668 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-config-data-default\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:27.978557 master-0 kubenswrapper[26425]: I0217 15:45:27.976705 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-94a0bc6e-ff15-42b7-ae6a-11223236c92d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ebaa31cd-130c-4158-8fa8-fc11b366a1e5\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:27.978557 master-0 kubenswrapper[26425]: I0217 15:45:27.976817 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:27.978557 master-0 kubenswrapper[26425]: I0217 15:45:27.977031 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:27.978557 master-0 kubenswrapper[26425]: I0217 15:45:27.977146 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rp46\" (UniqueName: \"kubernetes.io/projected/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-kube-api-access-6rp46\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:27.978557 master-0 kubenswrapper[26425]: I0217 15:45:27.977213 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:27.978557 master-0 kubenswrapper[26425]: I0217 15:45:27.977321 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-kolla-config\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:28.078653 master-0 kubenswrapper[26425]: I0217 15:45:28.078567 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:28.078897 master-0 kubenswrapper[26425]: I0217 15:45:28.078668 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rp46\" (UniqueName: \"kubernetes.io/projected/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-kube-api-access-6rp46\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:28.078897 master-0 kubenswrapper[26425]: I0217 15:45:28.078703 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:28.078897 master-0 kubenswrapper[26425]: I0217 15:45:28.078822 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-kolla-config\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:28.079127 master-0 kubenswrapper[26425]: I0217 15:45:28.078911 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:28.079127 master-0 kubenswrapper[26425]: I0217 15:45:28.078955 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-config-data-default\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:28.079127 master-0 kubenswrapper[26425]: I0217 15:45:28.079000 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-94a0bc6e-ff15-42b7-ae6a-11223236c92d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ebaa31cd-130c-4158-8fa8-fc11b366a1e5\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:28.079127 master-0 kubenswrapper[26425]: I0217 15:45:28.079026 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:28.079520 master-0 kubenswrapper[26425]: I0217 15:45:28.079252 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:28.079967 master-0 kubenswrapper[26425]: I0217 15:45:28.079931 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-kolla-config\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0"
Feb 17 15:45:28.081939
master-0 kubenswrapper[26425]: I0217 15:45:28.081375 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0" Feb 17 15:45:28.082669 master-0 kubenswrapper[26425]: I0217 15:45:28.082417 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-config-data-default\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0" Feb 17 15:45:28.083578 master-0 kubenswrapper[26425]: I0217 15:45:28.083545 26425 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 15:45:28.083634 master-0 kubenswrapper[26425]: I0217 15:45:28.083578 26425 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-94a0bc6e-ff15-42b7-ae6a-11223236c92d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ebaa31cd-130c-4158-8fa8-fc11b366a1e5\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/45a7bc633150d4b34e5246d2256bc68a9c1c436f4717df777651d5e04bc8f3aa/globalmount\"" pod="openstack/openstack-galera-0" Feb 17 15:45:28.084223 master-0 kubenswrapper[26425]: I0217 15:45:28.084182 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0" Feb 17 15:45:28.084947 master-0 kubenswrapper[26425]: I0217 15:45:28.084910 26425 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0" Feb 17 15:45:28.106888 master-0 kubenswrapper[26425]: I0217 15:45:28.106835 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rp46\" (UniqueName: \"kubernetes.io/projected/ac242660-f8e4-4dcd-a723-5dcfd0d861fb-kube-api-access-6rp46\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0" Feb 17 15:45:28.326006 master-0 kubenswrapper[26425]: I0217 15:45:28.325822 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 15:45:28.328219 master-0 kubenswrapper[26425]: I0217 15:45:28.328187 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.335569 master-0 kubenswrapper[26425]: I0217 15:45:28.335512 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 17 15:45:28.336964 master-0 kubenswrapper[26425]: I0217 15:45:28.335766 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 17 15:45:28.338179 master-0 kubenswrapper[26425]: I0217 15:45:28.338119 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 17 15:45:28.378574 master-0 kubenswrapper[26425]: I0217 15:45:28.378515 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 15:45:28.590802 master-0 kubenswrapper[26425]: I0217 15:45:28.589135 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/046f897b-506a-4978-b9cd-07283f1e3057-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.590802 master-0 kubenswrapper[26425]: I0217 15:45:28.589190 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfpkz\" (UniqueName: \"kubernetes.io/projected/046f897b-506a-4978-b9cd-07283f1e3057-kube-api-access-qfpkz\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.590802 master-0 kubenswrapper[26425]: I0217 15:45:28.589251 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/046f897b-506a-4978-b9cd-07283f1e3057-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.590802 master-0 kubenswrapper[26425]: I0217 15:45:28.589278 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/046f897b-506a-4978-b9cd-07283f1e3057-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.590802 master-0 kubenswrapper[26425]: I0217 15:45:28.589306 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-de643740-318b-440f-840a-7220194fa0e3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^150487da-1bfd-439a-b7f5-b1fe99892998\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.591596 master-0 kubenswrapper[26425]: I0217 15:45:28.591522 26425 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/046f897b-506a-4978-b9cd-07283f1e3057-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.591749 master-0 kubenswrapper[26425]: I0217 15:45:28.591696 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046f897b-506a-4978-b9cd-07283f1e3057-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.591814 master-0 kubenswrapper[26425]: I0217 15:45:28.591789 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/046f897b-506a-4978-b9cd-07283f1e3057-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.694421 master-0 kubenswrapper[26425]: I0217 15:45:28.693183 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/046f897b-506a-4978-b9cd-07283f1e3057-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.694421 master-0 kubenswrapper[26425]: I0217 15:45:28.693232 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfpkz\" (UniqueName: \"kubernetes.io/projected/046f897b-506a-4978-b9cd-07283f1e3057-kube-api-access-qfpkz\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.694421 
master-0 kubenswrapper[26425]: I0217 15:45:28.693273 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/046f897b-506a-4978-b9cd-07283f1e3057-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.694421 master-0 kubenswrapper[26425]: I0217 15:45:28.693305 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/046f897b-506a-4978-b9cd-07283f1e3057-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.694421 master-0 kubenswrapper[26425]: I0217 15:45:28.693324 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-de643740-318b-440f-840a-7220194fa0e3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^150487da-1bfd-439a-b7f5-b1fe99892998\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.694421 master-0 kubenswrapper[26425]: I0217 15:45:28.693702 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/046f897b-506a-4978-b9cd-07283f1e3057-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.694421 master-0 kubenswrapper[26425]: I0217 15:45:28.693774 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046f897b-506a-4978-b9cd-07283f1e3057-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 
15:45:28.694421 master-0 kubenswrapper[26425]: I0217 15:45:28.693806 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/046f897b-506a-4978-b9cd-07283f1e3057-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.694421 master-0 kubenswrapper[26425]: I0217 15:45:28.693973 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/046f897b-506a-4978-b9cd-07283f1e3057-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.696444 master-0 kubenswrapper[26425]: I0217 15:45:28.694843 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/046f897b-506a-4978-b9cd-07283f1e3057-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.696444 master-0 kubenswrapper[26425]: I0217 15:45:28.695031 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/046f897b-506a-4978-b9cd-07283f1e3057-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.698862 master-0 kubenswrapper[26425]: I0217 15:45:28.698830 26425 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 15:45:28.699102 master-0 kubenswrapper[26425]: I0217 15:45:28.698874 26425 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-de643740-318b-440f-840a-7220194fa0e3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^150487da-1bfd-439a-b7f5-b1fe99892998\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/530797b597402f45b5b01ae4daa015b2cb777cad8cf9c23c0d253b679549ab4a/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.701482 master-0 kubenswrapper[26425]: I0217 15:45:28.701398 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/046f897b-506a-4978-b9cd-07283f1e3057-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.701931 master-0 kubenswrapper[26425]: I0217 15:45:28.701893 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046f897b-506a-4978-b9cd-07283f1e3057-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.702761 master-0 kubenswrapper[26425]: I0217 15:45:28.702731 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/046f897b-506a-4978-b9cd-07283f1e3057-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:28.715624 master-0 kubenswrapper[26425]: I0217 15:45:28.715511 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfpkz\" (UniqueName: 
\"kubernetes.io/projected/046f897b-506a-4978-b9cd-07283f1e3057-kube-api-access-qfpkz\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:29.033225 master-0 kubenswrapper[26425]: I0217 15:45:29.033178 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-47ff1353-8a7c-4230-885c-ac774bd86eb6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^74e3af4c-4088-41ae-85f4-fcdcf3c4720d\") pod \"rabbitmq-server-0\" (UID: \"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5\") " pod="openstack/rabbitmq-server-0" Feb 17 15:45:29.129929 master-0 kubenswrapper[26425]: I0217 15:45:29.129877 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 15:45:30.052623 master-0 kubenswrapper[26425]: I0217 15:45:30.052350 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-94a0bc6e-ff15-42b7-ae6a-11223236c92d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ebaa31cd-130c-4158-8fa8-fc11b366a1e5\") pod \"openstack-galera-0\" (UID: \"ac242660-f8e4-4dcd-a723-5dcfd0d861fb\") " pod="openstack/openstack-galera-0" Feb 17 15:45:30.352152 master-0 kubenswrapper[26425]: I0217 15:45:30.351973 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 15:45:31.173237 master-0 kubenswrapper[26425]: I0217 15:45:31.172784 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-de643740-318b-440f-840a-7220194fa0e3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^150487da-1bfd-439a-b7f5-b1fe99892998\") pod \"openstack-cell1-galera-0\" (UID: \"046f897b-506a-4978-b9cd-07283f1e3057\") " pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:31.387047 master-0 kubenswrapper[26425]: I0217 15:45:31.386953 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 15:45:31.756990 master-0 kubenswrapper[26425]: I0217 15:45:31.756850 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hdbmn"] Feb 17 15:45:31.758251 master-0 kubenswrapper[26425]: I0217 15:45:31.758220 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hdbmn" Feb 17 15:45:31.762985 master-0 kubenswrapper[26425]: I0217 15:45:31.760670 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 17 15:45:31.762985 master-0 kubenswrapper[26425]: I0217 15:45:31.760722 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 17 15:45:31.806700 master-0 kubenswrapper[26425]: I0217 15:45:31.800583 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hdbmn"] Feb 17 15:45:31.846330 master-0 kubenswrapper[26425]: I0217 15:45:31.846260 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-fxgqd"] Feb 17 15:45:31.849564 master-0 kubenswrapper[26425]: I0217 15:45:31.849526 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-fxgqd" Feb 17 15:45:31.860187 master-0 kubenswrapper[26425]: I0217 15:45:31.858994 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-fxgqd"] Feb 17 15:45:31.910630 master-0 kubenswrapper[26425]: I0217 15:45:31.910092 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b83eed22-dd59-4e1d-91c1-fed8bead5b05-var-log-ovn\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn" Feb 17 15:45:31.910630 master-0 kubenswrapper[26425]: I0217 15:45:31.910151 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b83eed22-dd59-4e1d-91c1-fed8bead5b05-scripts\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn" Feb 17 15:45:31.910630 master-0 kubenswrapper[26425]: I0217 15:45:31.910184 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83eed22-dd59-4e1d-91c1-fed8bead5b05-combined-ca-bundle\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn" Feb 17 15:45:31.910630 master-0 kubenswrapper[26425]: I0217 15:45:31.910268 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b83eed22-dd59-4e1d-91c1-fed8bead5b05-var-run-ovn\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn" Feb 17 15:45:31.910630 master-0 kubenswrapper[26425]: I0217 15:45:31.910338 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cps9\" (UniqueName: \"kubernetes.io/projected/b83eed22-dd59-4e1d-91c1-fed8bead5b05-kube-api-access-9cps9\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn" Feb 17 15:45:31.910630 master-0 kubenswrapper[26425]: I0217 15:45:31.910383 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b83eed22-dd59-4e1d-91c1-fed8bead5b05-var-run\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn" Feb 17 15:45:31.910630 master-0 kubenswrapper[26425]: I0217 15:45:31.910493 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b83eed22-dd59-4e1d-91c1-fed8bead5b05-ovn-controller-tls-certs\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn" Feb 17 15:45:32.012076 master-0 kubenswrapper[26425]: I0217 15:45:32.011943 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b83eed22-dd59-4e1d-91c1-fed8bead5b05-var-run-ovn\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn" Feb 17 15:45:32.012076 master-0 kubenswrapper[26425]: I0217 15:45:32.012030 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a673e0a2-e190-4228-8263-de2cdc13293c-scripts\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd" Feb 17 15:45:32.012076 master-0 kubenswrapper[26425]: I0217 15:45:32.012074 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cps9\" (UniqueName: \"kubernetes.io/projected/b83eed22-dd59-4e1d-91c1-fed8bead5b05-kube-api-access-9cps9\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn" Feb 17 15:45:32.012381 master-0 kubenswrapper[26425]: I0217 15:45:32.012114 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x9bm\" (UniqueName: \"kubernetes.io/projected/a673e0a2-e190-4228-8263-de2cdc13293c-kube-api-access-9x9bm\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd" Feb 17 15:45:32.012381 master-0 kubenswrapper[26425]: I0217 15:45:32.012141 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b83eed22-dd59-4e1d-91c1-fed8bead5b05-var-run\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn" Feb 17 15:45:32.012381 master-0 kubenswrapper[26425]: I0217 15:45:32.012191 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a673e0a2-e190-4228-8263-de2cdc13293c-etc-ovs\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd" Feb 17 15:45:32.012381 master-0 kubenswrapper[26425]: I0217 15:45:32.012216 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a673e0a2-e190-4228-8263-de2cdc13293c-var-run\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd" Feb 17 15:45:32.012381 master-0 kubenswrapper[26425]: I0217 15:45:32.012235 
26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a673e0a2-e190-4228-8263-de2cdc13293c-var-log\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd" Feb 17 15:45:32.012381 master-0 kubenswrapper[26425]: I0217 15:45:32.012260 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a673e0a2-e190-4228-8263-de2cdc13293c-var-lib\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd" Feb 17 15:45:32.012381 master-0 kubenswrapper[26425]: I0217 15:45:32.012286 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b83eed22-dd59-4e1d-91c1-fed8bead5b05-ovn-controller-tls-certs\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn" Feb 17 15:45:32.012381 master-0 kubenswrapper[26425]: I0217 15:45:32.012360 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b83eed22-dd59-4e1d-91c1-fed8bead5b05-var-log-ovn\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn" Feb 17 15:45:32.012707 master-0 kubenswrapper[26425]: I0217 15:45:32.012388 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b83eed22-dd59-4e1d-91c1-fed8bead5b05-scripts\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn" Feb 17 15:45:32.012707 master-0 kubenswrapper[26425]: I0217 15:45:32.012418 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83eed22-dd59-4e1d-91c1-fed8bead5b05-combined-ca-bundle\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn"
Feb 17 15:45:32.013114 master-0 kubenswrapper[26425]: I0217 15:45:32.012684 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b83eed22-dd59-4e1d-91c1-fed8bead5b05-var-run-ovn\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn"
Feb 17 15:45:32.013369 master-0 kubenswrapper[26425]: I0217 15:45:32.013315 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b83eed22-dd59-4e1d-91c1-fed8bead5b05-var-run\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn"
Feb 17 15:45:32.013619 master-0 kubenswrapper[26425]: I0217 15:45:32.013585 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b83eed22-dd59-4e1d-91c1-fed8bead5b05-var-log-ovn\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn"
Feb 17 15:45:32.027480 master-0 kubenswrapper[26425]: I0217 15:45:32.018201 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83eed22-dd59-4e1d-91c1-fed8bead5b05-combined-ca-bundle\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn"
Feb 17 15:45:32.027480 master-0 kubenswrapper[26425]: I0217 15:45:32.019082 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b83eed22-dd59-4e1d-91c1-fed8bead5b05-ovn-controller-tls-certs\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn"
Feb 17 15:45:32.027480 master-0 kubenswrapper[26425]: I0217 15:45:32.019973 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b83eed22-dd59-4e1d-91c1-fed8bead5b05-scripts\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn"
Feb 17 15:45:32.041552 master-0 kubenswrapper[26425]: I0217 15:45:32.037515 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cps9\" (UniqueName: \"kubernetes.io/projected/b83eed22-dd59-4e1d-91c1-fed8bead5b05-kube-api-access-9cps9\") pod \"ovn-controller-hdbmn\" (UID: \"b83eed22-dd59-4e1d-91c1-fed8bead5b05\") " pod="openstack/ovn-controller-hdbmn"
Feb 17 15:45:32.114036 master-0 kubenswrapper[26425]: I0217 15:45:32.113973 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a673e0a2-e190-4228-8263-de2cdc13293c-etc-ovs\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd"
Feb 17 15:45:32.114036 master-0 kubenswrapper[26425]: I0217 15:45:32.114032 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a673e0a2-e190-4228-8263-de2cdc13293c-var-run\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd"
Feb 17 15:45:32.114363 master-0 kubenswrapper[26425]: I0217 15:45:32.114167 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a673e0a2-e190-4228-8263-de2cdc13293c-var-log\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd"
Feb 17 15:45:32.114363 master-0 kubenswrapper[26425]: I0217 15:45:32.114224 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a673e0a2-e190-4228-8263-de2cdc13293c-var-run\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd"
Feb 17 15:45:32.114363 master-0 kubenswrapper[26425]: I0217 15:45:32.114243 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a673e0a2-e190-4228-8263-de2cdc13293c-var-lib\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd"
Feb 17 15:45:32.114363 master-0 kubenswrapper[26425]: I0217 15:45:32.114344 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a673e0a2-e190-4228-8263-de2cdc13293c-etc-ovs\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd"
Feb 17 15:45:32.114567 master-0 kubenswrapper[26425]: I0217 15:45:32.114449 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a673e0a2-e190-4228-8263-de2cdc13293c-var-log\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd"
Feb 17 15:45:32.114613 master-0 kubenswrapper[26425]: I0217 15:45:32.114573 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a673e0a2-e190-4228-8263-de2cdc13293c-var-lib\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd"
Feb 17 15:45:32.114910 master-0 kubenswrapper[26425]: I0217 15:45:32.114871 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a673e0a2-e190-4228-8263-de2cdc13293c-scripts\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd"
Feb 17 15:45:32.116347 master-0 kubenswrapper[26425]: I0217 15:45:32.115010 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x9bm\" (UniqueName: \"kubernetes.io/projected/a673e0a2-e190-4228-8263-de2cdc13293c-kube-api-access-9x9bm\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd"
Feb 17 15:45:32.117271 master-0 kubenswrapper[26425]: I0217 15:45:32.117223 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a673e0a2-e190-4228-8263-de2cdc13293c-scripts\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd"
Feb 17 15:45:32.125050 master-0 kubenswrapper[26425]: I0217 15:45:32.124972 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hdbmn"
Feb 17 15:45:32.137498 master-0 kubenswrapper[26425]: I0217 15:45:32.133330 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x9bm\" (UniqueName: \"kubernetes.io/projected/a673e0a2-e190-4228-8263-de2cdc13293c-kube-api-access-9x9bm\") pod \"ovn-controller-ovs-fxgqd\" (UID: \"a673e0a2-e190-4228-8263-de2cdc13293c\") " pod="openstack/ovn-controller-ovs-fxgqd"
Feb 17 15:45:32.165045 master-0 kubenswrapper[26425]: I0217 15:45:32.164788 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-fxgqd"
Feb 17 15:45:33.696105 master-0 kubenswrapper[26425]: I0217 15:45:33.693649 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 17 15:45:33.696105 master-0 kubenswrapper[26425]: I0217 15:45:33.695698 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.701061 master-0 kubenswrapper[26425]: I0217 15:45:33.701000 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 17 15:45:33.701396 master-0 kubenswrapper[26425]: I0217 15:45:33.701332 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 17 15:45:33.701486 master-0 kubenswrapper[26425]: I0217 15:45:33.701398 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 17 15:45:33.701891 master-0 kubenswrapper[26425]: I0217 15:45:33.701854 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 17 15:45:33.714809 master-0 kubenswrapper[26425]: I0217 15:45:33.714744 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 17 15:45:33.870937 master-0 kubenswrapper[26425]: I0217 15:45:33.870860 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/892a05fd-9a7a-44db-8b41-98e748414a9c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.870937 master-0 kubenswrapper[26425]: I0217 15:45:33.870940 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzpm7\" (UniqueName: \"kubernetes.io/projected/892a05fd-9a7a-44db-8b41-98e748414a9c-kube-api-access-hzpm7\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.871216 master-0 kubenswrapper[26425]: I0217 15:45:33.871010 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/892a05fd-9a7a-44db-8b41-98e748414a9c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.871216 master-0 kubenswrapper[26425]: I0217 15:45:33.871050 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/892a05fd-9a7a-44db-8b41-98e748414a9c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.871216 master-0 kubenswrapper[26425]: I0217 15:45:33.871111 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/892a05fd-9a7a-44db-8b41-98e748414a9c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.871216 master-0 kubenswrapper[26425]: I0217 15:45:33.871137 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892a05fd-9a7a-44db-8b41-98e748414a9c-config\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.871216 master-0 kubenswrapper[26425]: I0217 15:45:33.871158 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892a05fd-9a7a-44db-8b41-98e748414a9c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.871216 master-0 kubenswrapper[26425]: I0217 15:45:33.871192 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6f89f539-a4f5-4f3d-b3f7-a3e8da3a6bf8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^82a72316-4792-40fb-9153-9e27d704eadd\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.973391 master-0 kubenswrapper[26425]: I0217 15:45:33.973270 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/892a05fd-9a7a-44db-8b41-98e748414a9c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.973790 master-0 kubenswrapper[26425]: I0217 15:45:33.973773 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892a05fd-9a7a-44db-8b41-98e748414a9c-config\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.973902 master-0 kubenswrapper[26425]: I0217 15:45:33.973885 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892a05fd-9a7a-44db-8b41-98e748414a9c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.974008 master-0 kubenswrapper[26425]: I0217 15:45:33.973995 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6f89f539-a4f5-4f3d-b3f7-a3e8da3a6bf8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^82a72316-4792-40fb-9153-9e27d704eadd\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.974141 master-0 kubenswrapper[26425]: I0217 15:45:33.974128 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/892a05fd-9a7a-44db-8b41-98e748414a9c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.974935 master-0 kubenswrapper[26425]: I0217 15:45:33.974882 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzpm7\" (UniqueName: \"kubernetes.io/projected/892a05fd-9a7a-44db-8b41-98e748414a9c-kube-api-access-hzpm7\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.975055 master-0 kubenswrapper[26425]: I0217 15:45:33.975043 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/892a05fd-9a7a-44db-8b41-98e748414a9c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.975207 master-0 kubenswrapper[26425]: I0217 15:45:33.975186 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/892a05fd-9a7a-44db-8b41-98e748414a9c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.975446 master-0 kubenswrapper[26425]: I0217 15:45:33.975409 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892a05fd-9a7a-44db-8b41-98e748414a9c-config\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.975962 master-0 kubenswrapper[26425]: I0217 15:45:33.975933 26425 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 15:45:33.976017 master-0 kubenswrapper[26425]: I0217 15:45:33.975972 26425 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6f89f539-a4f5-4f3d-b3f7-a3e8da3a6bf8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^82a72316-4792-40fb-9153-9e27d704eadd\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/4f097623552a425b84c5d35beba30f4e400860108258dc8b5b2b920a17a052cc/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.979747 master-0 kubenswrapper[26425]: I0217 15:45:33.977124 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/892a05fd-9a7a-44db-8b41-98e748414a9c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.979747 master-0 kubenswrapper[26425]: I0217 15:45:33.979559 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/892a05fd-9a7a-44db-8b41-98e748414a9c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.980049 master-0 kubenswrapper[26425]: I0217 15:45:33.979988 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892a05fd-9a7a-44db-8b41-98e748414a9c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.982784 master-0 kubenswrapper[26425]: I0217 15:45:33.982763 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/892a05fd-9a7a-44db-8b41-98e748414a9c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.990224 master-0 kubenswrapper[26425]: I0217 15:45:33.990156 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/892a05fd-9a7a-44db-8b41-98e748414a9c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:33.999694 master-0 kubenswrapper[26425]: I0217 15:45:33.999651 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzpm7\" (UniqueName: \"kubernetes.io/projected/892a05fd-9a7a-44db-8b41-98e748414a9c-kube-api-access-hzpm7\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:35.593955 master-0 kubenswrapper[26425]: I0217 15:45:35.593907 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6f89f539-a4f5-4f3d-b3f7-a3e8da3a6bf8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^82a72316-4792-40fb-9153-9e27d704eadd\") pod \"ovsdbserver-nb-0\" (UID: \"892a05fd-9a7a-44db-8b41-98e748414a9c\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:35.827434 master-0 kubenswrapper[26425]: I0217 15:45:35.827310 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 17 15:45:38.608560 master-0 kubenswrapper[26425]: I0217 15:45:38.607775 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 17 15:45:38.611062 master-0 kubenswrapper[26425]: I0217 15:45:38.610244 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:38.612292 master-0 kubenswrapper[26425]: I0217 15:45:38.612185 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 17 15:45:38.612974 master-0 kubenswrapper[26425]: I0217 15:45:38.612923 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 17 15:45:38.614160 master-0 kubenswrapper[26425]: I0217 15:45:38.614061 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 17 15:45:40.190055 master-0 kubenswrapper[26425]: I0217 15:45:40.189962 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 17 15:45:41.951822 master-0 kubenswrapper[26425]: I0217 15:45:41.951603 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd9ck\" (UniqueName: \"kubernetes.io/projected/6001360b-0db0-4c81-8226-352e7f623535-kube-api-access-gd9ck\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:41.953263 master-0 kubenswrapper[26425]: I0217 15:45:41.951904 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6001360b-0db0-4c81-8226-352e7f623535-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:41.953263 master-0 kubenswrapper[26425]: I0217 15:45:41.952030 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-48760907-599c-4e44-af12-39c3c5bafb5d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b7c37b5c-4a68-41fa-8273-4dbabf6a6bb2\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:41.953263 master-0 kubenswrapper[26425]: I0217 15:45:41.952076 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6001360b-0db0-4c81-8226-352e7f623535-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:41.953263 master-0 kubenswrapper[26425]: I0217 15:45:41.952296 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6001360b-0db0-4c81-8226-352e7f623535-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:41.953263 master-0 kubenswrapper[26425]: I0217 15:45:41.952345 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6001360b-0db0-4c81-8226-352e7f623535-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:41.953263 master-0 kubenswrapper[26425]: I0217 15:45:41.952549 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6001360b-0db0-4c81-8226-352e7f623535-config\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:41.953263 master-0 kubenswrapper[26425]: I0217 15:45:41.952710 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6001360b-0db0-4c81-8226-352e7f623535-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:42.055519 master-0 kubenswrapper[26425]: I0217 15:45:42.054707 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6001360b-0db0-4c81-8226-352e7f623535-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:42.055519 master-0 kubenswrapper[26425]: I0217 15:45:42.054807 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6001360b-0db0-4c81-8226-352e7f623535-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:42.055519 master-0 kubenswrapper[26425]: I0217 15:45:42.055207 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6001360b-0db0-4c81-8226-352e7f623535-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:42.055519 master-0 kubenswrapper[26425]: I0217 15:45:42.055251 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6001360b-0db0-4c81-8226-352e7f623535-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:42.055519 master-0 kubenswrapper[26425]: I0217 15:45:42.055301 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6001360b-0db0-4c81-8226-352e7f623535-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:42.055519 master-0 kubenswrapper[26425]: I0217 15:45:42.055388 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6001360b-0db0-4c81-8226-352e7f623535-config\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:42.055519 master-0 kubenswrapper[26425]: I0217 15:45:42.055513 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6001360b-0db0-4c81-8226-352e7f623535-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:42.056022 master-0 kubenswrapper[26425]: I0217 15:45:42.055638 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd9ck\" (UniqueName: \"kubernetes.io/projected/6001360b-0db0-4c81-8226-352e7f623535-kube-api-access-gd9ck\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:42.058119 master-0 kubenswrapper[26425]: I0217 15:45:42.056370 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6001360b-0db0-4c81-8226-352e7f623535-config\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:42.058119 master-0 kubenswrapper[26425]: I0217 15:45:42.057003 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6001360b-0db0-4c81-8226-352e7f623535-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:42.063058 master-0 kubenswrapper[26425]: I0217 15:45:42.062750 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6001360b-0db0-4c81-8226-352e7f623535-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:42.064357 master-0 kubenswrapper[26425]: I0217 15:45:42.064308 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6001360b-0db0-4c81-8226-352e7f623535-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:42.064623 master-0 kubenswrapper[26425]: I0217 15:45:42.064571 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6001360b-0db0-4c81-8226-352e7f623535-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:45.862776 master-0 kubenswrapper[26425]: I0217 15:45:45.862724 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-48760907-599c-4e44-af12-39c3c5bafb5d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b7c37b5c-4a68-41fa-8273-4dbabf6a6bb2\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:45.865140 master-0 kubenswrapper[26425]: I0217 15:45:45.865084 26425 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 15:45:45.865211 master-0 kubenswrapper[26425]: I0217 15:45:45.865175 26425 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-48760907-599c-4e44-af12-39c3c5bafb5d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b7c37b5c-4a68-41fa-8273-4dbabf6a6bb2\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/21cc5bf30ce81c2547be6cbe33fb2c45d0089b7242e3b7a186bddb7f9e96ad43/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:46.161977 master-0 kubenswrapper[26425]: I0217 15:45:46.161917 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 17 15:45:46.170180 master-0 kubenswrapper[26425]: I0217 15:45:46.170110 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hdbmn"]
Feb 17 15:45:46.171741 master-0 kubenswrapper[26425]: I0217 15:45:46.171708 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd9ck\" (UniqueName: \"kubernetes.io/projected/6001360b-0db0-4c81-8226-352e7f623535-kube-api-access-gd9ck\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:46.178373 master-0 kubenswrapper[26425]: I0217 15:45:46.178308 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 15:45:46.195478 master-0 kubenswrapper[26425]: I0217 15:45:46.194008 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 17 15:45:46.214882 master-0 kubenswrapper[26425]: I0217 15:45:46.214819 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 17 15:45:46.226632 master-0 kubenswrapper[26425]: I0217 15:45:46.225943 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 17 15:45:47.069952 master-0 kubenswrapper[26425]: I0217 15:45:47.069824 26425 scope.go:117] "RemoveContainer" containerID="1e7b4529083cffeef5003957eb03a7afcc09cde5e715114a3708977a54e19b17"
Feb 17 15:45:47.564739 master-0 kubenswrapper[26425]: I0217 15:45:47.564666 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-48760907-599c-4e44-af12-39c3c5bafb5d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b7c37b5c-4a68-41fa-8273-4dbabf6a6bb2\") pod \"ovsdbserver-sb-0\" (UID: \"6001360b-0db0-4c81-8226-352e7f623535\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:50.262497 master-0 kubenswrapper[26425]: W0217 15:45:50.262404 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f67d3cf_a7f4_4ead_9b78_4a247036b3d5.slice/crio-ff2e130a890ff46e0f66faad87c296be2e53a35123e2a42419af69a7d88e8207 WatchSource:0}: Error finding container ff2e130a890ff46e0f66faad87c296be2e53a35123e2a42419af69a7d88e8207: Status 404 returned error can't find the container with id ff2e130a890ff46e0f66faad87c296be2e53a35123e2a42419af69a7d88e8207
Feb 17 15:45:50.265157 master-0 kubenswrapper[26425]: W0217 15:45:50.265084 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod046f897b_506a_4978_b9cd_07283f1e3057.slice/crio-fa46f8d7016eacc3593e1eea2c66b7715a31265568035fcd9fbe5ca3c9bd8bc2 WatchSource:0}: Error finding container fa46f8d7016eacc3593e1eea2c66b7715a31265568035fcd9fbe5ca3c9bd8bc2: Status 404 returned error can't find the container with id fa46f8d7016eacc3593e1eea2c66b7715a31265568035fcd9fbe5ca3c9bd8bc2
Feb 17 15:45:50.270914 master-0 kubenswrapper[26425]: W0217 15:45:50.270695 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dc68acf_40ce_41a7_8633_6f19a9382a89.slice/crio-d7515ec2416b638c10a129fe1ec9ca712c37c4ec75d4c87d1dca4ac37a74d39e WatchSource:0}: Error finding container d7515ec2416b638c10a129fe1ec9ca712c37c4ec75d4c87d1dca4ac37a74d39e: Status 404 returned error can't find the container with id d7515ec2416b638c10a129fe1ec9ca712c37c4ec75d4c87d1dca4ac37a74d39e
Feb 17 15:45:50.272367 master-0 kubenswrapper[26425]: W0217 15:45:50.272334 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb83eed22_dd59_4e1d_91c1_fed8bead5b05.slice/crio-cd7e76c3dae80957346f32f5516cf7b6b6721658ce6dc53f4e84159887145600 WatchSource:0}: Error finding container cd7e76c3dae80957346f32f5516cf7b6b6721658ce6dc53f4e84159887145600: Status 404 returned error can't find the container with id cd7e76c3dae80957346f32f5516cf7b6b6721658ce6dc53f4e84159887145600
Feb 17 15:45:50.280781 master-0 kubenswrapper[26425]: W0217 15:45:50.280715 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac242660_f8e4_4dcd_a723_5dcfd0d861fb.slice/crio-cee85788dfe0c95ed1c97fb530e39cee9b490d8804773199c71df723e04189ce WatchSource:0}: Error finding container cee85788dfe0c95ed1c97fb530e39cee9b490d8804773199c71df723e04189ce: Status 404 returned error can't find the container with id cee85788dfe0c95ed1c97fb530e39cee9b490d8804773199c71df723e04189ce
Feb 17 15:45:50.286889 master-0 kubenswrapper[26425]: W0217 15:45:50.286826 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0b111ae_7c7d_499a_a124_c0e76e2603a6.slice/crio-0070026e3809cc5357a86c2e9e84806ea1cc451b700025dc7bcd2fc8bea039cd WatchSource:0}: Error finding container 0070026e3809cc5357a86c2e9e84806ea1cc451b700025dc7bcd2fc8bea039cd: Status 404 returned error can't find the container with id 0070026e3809cc5357a86c2e9e84806ea1cc451b700025dc7bcd2fc8bea039cd
Feb 17 15:45:50.599964 master-0 kubenswrapper[26425]: I0217 15:45:50.599820 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"046f897b-506a-4978-b9cd-07283f1e3057","Type":"ContainerStarted","Data":"fa46f8d7016eacc3593e1eea2c66b7715a31265568035fcd9fbe5ca3c9bd8bc2"}
Feb 17 15:45:50.602148 master-0 kubenswrapper[26425]: I0217 15:45:50.602100 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hdbmn" event={"ID":"b83eed22-dd59-4e1d-91c1-fed8bead5b05","Type":"ContainerStarted","Data":"cd7e76c3dae80957346f32f5516cf7b6b6721658ce6dc53f4e84159887145600"}
Feb 17 15:45:50.603318 master-0 kubenswrapper[26425]: I0217 15:45:50.603281 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e0b111ae-7c7d-499a-a124-c0e76e2603a6","Type":"ContainerStarted","Data":"0070026e3809cc5357a86c2e9e84806ea1cc451b700025dc7bcd2fc8bea039cd"}
Feb 17 15:45:50.604275 master-0 kubenswrapper[26425]: I0217 15:45:50.604233 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ac242660-f8e4-4dcd-a723-5dcfd0d861fb","Type":"ContainerStarted","Data":"cee85788dfe0c95ed1c97fb530e39cee9b490d8804773199c71df723e04189ce"}
Feb 17 15:45:50.605639 master-0 kubenswrapper[26425]: I0217 15:45:50.605607 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5","Type":"ContainerStarted","Data":"ff2e130a890ff46e0f66faad87c296be2e53a35123e2a42419af69a7d88e8207"}
Feb 17 15:45:50.607907 master-0 kubenswrapper[26425]: I0217 15:45:50.607870 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3dc68acf-40ce-41a7-8633-6f19a9382a89","Type":"ContainerStarted","Data":"d7515ec2416b638c10a129fe1ec9ca712c37c4ec75d4c87d1dca4ac37a74d39e"}
Feb 17 15:45:50.629800 master-0 kubenswrapper[26425]: I0217 15:45:50.629735 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 17 15:45:51.628656 master-0 kubenswrapper[26425]: I0217 15:45:51.628577 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 17 15:45:51.795476 master-0 kubenswrapper[26425]: I0217 15:45:51.795376 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-fxgqd"]
Feb 17 15:45:51.814727 master-0 kubenswrapper[26425]: W0217 15:45:51.814672 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda673e0a2_e190_4228_8263_de2cdc13293c.slice/crio-d33ac15e1bb544457c214d5f03be18d09da45bb11e829464c1c942862d8aba1a WatchSource:0}: Error finding container d33ac15e1bb544457c214d5f03be18d09da45bb11e829464c1c942862d8aba1a: Status 404 returned error can't find the container with id d33ac15e1bb544457c214d5f03be18d09da45bb11e829464c1c942862d8aba1a
Feb 17 15:45:52.238556 master-0 kubenswrapper[26425]: I0217 15:45:52.238499 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 17 15:45:52.241105 master-0 kubenswrapper[26425]: W0217 15:45:52.240820 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6001360b_0db0_4c81_8226_352e7f623535.slice/crio-479f727c35842bbf4dc0c1affca96063e33c2c82493cdf24bed3f1393c473a62 WatchSource:0}: Error finding container 479f727c35842bbf4dc0c1affca96063e33c2c82493cdf24bed3f1393c473a62: Status 404 returned error can't find the container with id 479f727c35842bbf4dc0c1affca96063e33c2c82493cdf24bed3f1393c473a62
Feb 17 15:45:52.641303 master-0 kubenswrapper[26425]: I0217 15:45:52.640994 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"892a05fd-9a7a-44db-8b41-98e748414a9c","Type":"ContainerStarted","Data":"8e247caf0270caed9d0c6e8eeca29b35f81f1ef0e49ee030333e21635800970a"}
Feb 17 15:45:52.642479 master-0 kubenswrapper[26425]: I0217 15:45:52.642349 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fxgqd" event={"ID":"a673e0a2-e190-4228-8263-de2cdc13293c","Type":"ContainerStarted","Data":"d33ac15e1bb544457c214d5f03be18d09da45bb11e829464c1c942862d8aba1a"}
Feb 17 15:45:52.646014 master-0 kubenswrapper[26425]: I0217 15:45:52.645949 26425 generic.go:334] "Generic (PLEG): container finished" podID="1304b6d0-6de1-4f39-a55d-bf89c4b41d08" containerID="22f2fcf63a9adda3eadcfa1385a35bbe2c1e23da4de1ac152af62fb4c93d00da" exitCode=0
Feb 17 15:45:52.646014 master-0 kubenswrapper[26425]: I0217 15:45:52.645997 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-tpv9d" event={"ID":"1304b6d0-6de1-4f39-a55d-bf89c4b41d08","Type":"ContainerDied","Data":"22f2fcf63a9adda3eadcfa1385a35bbe2c1e23da4de1ac152af62fb4c93d00da"}
Feb 17 15:45:52.651626 master-0 kubenswrapper[26425]: I0217 15:45:52.651583 26425 generic.go:334] "Generic (PLEG): container finished" podID="a2122296-6151-4ec0-b71c-fd6ad516ffb4" containerID="d8d0159b815fde84844174c09c64d94ce8a2c3698f6f648047642cb1875c9cf6" exitCode=0
Feb 17 15:45:52.651878 master-0 kubenswrapper[26425]: I0217 15:45:52.651856 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" event={"ID":"a2122296-6151-4ec0-b71c-fd6ad516ffb4","Type":"ContainerDied","Data":"d8d0159b815fde84844174c09c64d94ce8a2c3698f6f648047642cb1875c9cf6"}
Feb 17 15:45:52.656184 master-0 kubenswrapper[26425]: I0217 15:45:52.656145 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6001360b-0db0-4c81-8226-352e7f623535","Type":"ContainerStarted","Data":"479f727c35842bbf4dc0c1affca96063e33c2c82493cdf24bed3f1393c473a62"}
Feb 17 15:45:52.658310 master-0 kubenswrapper[26425]: I0217 15:45:52.658285 26425 generic.go:334] "Generic (PLEG): container finished"
podID="cb161df9-6094-4ed3-8a36-06a828cd1674" containerID="b7eb29e9a37a840b6e9864e570a4e4c375af6f660033f93bc488d5c364dd7d23" exitCode=0 Feb 17 15:45:52.658418 master-0 kubenswrapper[26425]: I0217 15:45:52.658329 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-p9rp4" event={"ID":"cb161df9-6094-4ed3-8a36-06a828cd1674","Type":"ContainerDied","Data":"b7eb29e9a37a840b6e9864e570a4e4c375af6f660033f93bc488d5c364dd7d23"} Feb 17 15:45:52.694715 master-0 kubenswrapper[26425]: I0217 15:45:52.694664 26425 generic.go:334] "Generic (PLEG): container finished" podID="f7825929-3b0c-402f-9c91-3f6a0e438ea3" containerID="1f3de48b1fe17f8ecc39b85267aa37cc13f911d9de549a3740e81bfc8e6ec593" exitCode=0 Feb 17 15:45:52.694827 master-0 kubenswrapper[26425]: I0217 15:45:52.694722 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b66f9649-znfnp" event={"ID":"f7825929-3b0c-402f-9c91-3f6a0e438ea3","Type":"ContainerDied","Data":"1f3de48b1fe17f8ecc39b85267aa37cc13f911d9de549a3740e81bfc8e6ec593"} Feb 17 15:45:52.910703 master-0 kubenswrapper[26425]: E0217 15:45:52.910631 26425 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 17 15:45:52.910703 master-0 kubenswrapper[26425]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/f7825929-3b0c-402f-9c91-3f6a0e438ea3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 17 15:45:52.910703 master-0 kubenswrapper[26425]: > podSandboxID="832370e9734bf6636854f1e3a08dc66cca152b5982369bd1e35861ce6231079e" Feb 17 15:45:52.910918 master-0 kubenswrapper[26425]: E0217 15:45:52.910800 26425 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 17 15:45:52.910918 master-0 kubenswrapper[26425]: container 
&Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d7h64dhb8hb8h587h59ch664h5c7h56dh67ch657h657h5fbh5chd8h9hcfh645h594h59ch565h669h648h5d5h8ch597h58bhd5h6fh67dh589hd4q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7qhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000800000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-75b66f9649-znfnp_openstack(f7825929-3b0c-402f-9c91-3f6a0e438ea3): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/f7825929-3b0c-402f-9c91-3f6a0e438ea3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 17 15:45:52.910918 master-0 kubenswrapper[26425]: > logger="UnhandledError" Feb 17 15:45:52.911994 master-0 kubenswrapper[26425]: E0217 15:45:52.911944 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/f7825929-3b0c-402f-9c91-3f6a0e438ea3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-75b66f9649-znfnp" podUID="f7825929-3b0c-402f-9c91-3f6a0e438ea3" Feb 17 15:45:53.713486 master-0 kubenswrapper[26425]: I0217 15:45:53.713163 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" 
event={"ID":"a2122296-6151-4ec0-b71c-fd6ad516ffb4","Type":"ContainerStarted","Data":"1864e8a47379b369d8a66077175769f37b5a488774750f04959a1eeab4ee3e75"} Feb 17 15:45:53.787687 master-0 kubenswrapper[26425]: I0217 15:45:53.787526 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" podStartSLOduration=3.615305358 podStartE2EDuration="31.78750579s" podCreationTimestamp="2026-02-17 15:45:22 +0000 UTC" firstStartedPulling="2026-02-17 15:45:23.515152433 +0000 UTC m=+1785.406876251" lastFinishedPulling="2026-02-17 15:45:51.687352865 +0000 UTC m=+1813.579076683" observedRunningTime="2026-02-17 15:45:53.766521786 +0000 UTC m=+1815.658245634" watchObservedRunningTime="2026-02-17 15:45:53.78750579 +0000 UTC m=+1815.679229608" Feb 17 15:45:54.388113 master-0 kubenswrapper[26425]: I0217 15:45:54.388053 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-tpv9d" Feb 17 15:45:54.504124 master-0 kubenswrapper[26425]: I0217 15:45:54.504068 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zc52\" (UniqueName: \"kubernetes.io/projected/1304b6d0-6de1-4f39-a55d-bf89c4b41d08-kube-api-access-8zc52\") pod \"1304b6d0-6de1-4f39-a55d-bf89c4b41d08\" (UID: \"1304b6d0-6de1-4f39-a55d-bf89c4b41d08\") " Feb 17 15:45:54.504360 master-0 kubenswrapper[26425]: I0217 15:45:54.504214 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1304b6d0-6de1-4f39-a55d-bf89c4b41d08-config\") pod \"1304b6d0-6de1-4f39-a55d-bf89c4b41d08\" (UID: \"1304b6d0-6de1-4f39-a55d-bf89c4b41d08\") " Feb 17 15:45:54.511363 master-0 kubenswrapper[26425]: I0217 15:45:54.511308 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1304b6d0-6de1-4f39-a55d-bf89c4b41d08-kube-api-access-8zc52" (OuterVolumeSpecName: 
"kube-api-access-8zc52") pod "1304b6d0-6de1-4f39-a55d-bf89c4b41d08" (UID: "1304b6d0-6de1-4f39-a55d-bf89c4b41d08"). InnerVolumeSpecName "kube-api-access-8zc52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:45:54.535952 master-0 kubenswrapper[26425]: I0217 15:45:54.535813 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1304b6d0-6de1-4f39-a55d-bf89c4b41d08-config" (OuterVolumeSpecName: "config") pod "1304b6d0-6de1-4f39-a55d-bf89c4b41d08" (UID: "1304b6d0-6de1-4f39-a55d-bf89c4b41d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:45:54.607354 master-0 kubenswrapper[26425]: I0217 15:45:54.607294 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1304b6d0-6de1-4f39-a55d-bf89c4b41d08-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:45:54.607354 master-0 kubenswrapper[26425]: I0217 15:45:54.607333 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zc52\" (UniqueName: \"kubernetes.io/projected/1304b6d0-6de1-4f39-a55d-bf89c4b41d08-kube-api-access-8zc52\") on node \"master-0\" DevicePath \"\"" Feb 17 15:45:54.725745 master-0 kubenswrapper[26425]: I0217 15:45:54.725691 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-tpv9d" event={"ID":"1304b6d0-6de1-4f39-a55d-bf89c4b41d08","Type":"ContainerDied","Data":"c9e084988983ff3787b30566e9c15707f3d640ffde96662c74e2e45cedeb67f1"} Feb 17 15:45:54.725745 master-0 kubenswrapper[26425]: I0217 15:45:54.725723 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-tpv9d" Feb 17 15:45:54.725745 master-0 kubenswrapper[26425]: I0217 15:45:54.725745 26425 scope.go:117] "RemoveContainer" containerID="22f2fcf63a9adda3eadcfa1385a35bbe2c1e23da4de1ac152af62fb4c93d00da" Feb 17 15:45:54.726599 master-0 kubenswrapper[26425]: I0217 15:45:54.725850 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:45:54.862652 master-0 kubenswrapper[26425]: I0217 15:45:54.862579 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-tpv9d"] Feb 17 15:45:54.874620 master-0 kubenswrapper[26425]: I0217 15:45:54.874552 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-tpv9d"] Feb 17 15:45:55.672228 master-0 kubenswrapper[26425]: I0217 15:45:55.672179 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-p9rp4" Feb 17 15:45:55.742667 master-0 kubenswrapper[26425]: I0217 15:45:55.742609 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shjkj\" (UniqueName: \"kubernetes.io/projected/cb161df9-6094-4ed3-8a36-06a828cd1674-kube-api-access-shjkj\") pod \"cb161df9-6094-4ed3-8a36-06a828cd1674\" (UID: \"cb161df9-6094-4ed3-8a36-06a828cd1674\") " Feb 17 15:45:55.743316 master-0 kubenswrapper[26425]: I0217 15:45:55.742761 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb161df9-6094-4ed3-8a36-06a828cd1674-dns-svc\") pod \"cb161df9-6094-4ed3-8a36-06a828cd1674\" (UID: \"cb161df9-6094-4ed3-8a36-06a828cd1674\") " Feb 17 15:45:55.743316 master-0 kubenswrapper[26425]: I0217 15:45:55.742825 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cb161df9-6094-4ed3-8a36-06a828cd1674-config\") pod \"cb161df9-6094-4ed3-8a36-06a828cd1674\" (UID: \"cb161df9-6094-4ed3-8a36-06a828cd1674\") " Feb 17 15:45:55.748590 master-0 kubenswrapper[26425]: I0217 15:45:55.748420 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb161df9-6094-4ed3-8a36-06a828cd1674-kube-api-access-shjkj" (OuterVolumeSpecName: "kube-api-access-shjkj") pod "cb161df9-6094-4ed3-8a36-06a828cd1674" (UID: "cb161df9-6094-4ed3-8a36-06a828cd1674"). InnerVolumeSpecName "kube-api-access-shjkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:45:55.761708 master-0 kubenswrapper[26425]: I0217 15:45:55.760833 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-p9rp4" Feb 17 15:45:55.761708 master-0 kubenswrapper[26425]: I0217 15:45:55.761024 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-p9rp4" event={"ID":"cb161df9-6094-4ed3-8a36-06a828cd1674","Type":"ContainerDied","Data":"3658b8f77ad875aa95dcfa7ef732b93073a01cb0c003f02a862d0fa6f5e17832"} Feb 17 15:45:55.781307 master-0 kubenswrapper[26425]: I0217 15:45:55.781191 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb161df9-6094-4ed3-8a36-06a828cd1674-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb161df9-6094-4ed3-8a36-06a828cd1674" (UID: "cb161df9-6094-4ed3-8a36-06a828cd1674"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:45:55.786960 master-0 kubenswrapper[26425]: I0217 15:45:55.786919 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb161df9-6094-4ed3-8a36-06a828cd1674-config" (OuterVolumeSpecName: "config") pod "cb161df9-6094-4ed3-8a36-06a828cd1674" (UID: "cb161df9-6094-4ed3-8a36-06a828cd1674"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:45:55.849061 master-0 kubenswrapper[26425]: I0217 15:45:55.846016 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shjkj\" (UniqueName: \"kubernetes.io/projected/cb161df9-6094-4ed3-8a36-06a828cd1674-kube-api-access-shjkj\") on node \"master-0\" DevicePath \"\"" Feb 17 15:45:55.849061 master-0 kubenswrapper[26425]: I0217 15:45:55.846065 26425 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb161df9-6094-4ed3-8a36-06a828cd1674-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 17 15:45:55.849061 master-0 kubenswrapper[26425]: I0217 15:45:55.846075 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb161df9-6094-4ed3-8a36-06a828cd1674-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:45:56.159377 master-0 kubenswrapper[26425]: I0217 15:45:56.157007 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-p9rp4"] Feb 17 15:45:56.173728 master-0 kubenswrapper[26425]: I0217 15:45:56.173632 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-p9rp4"] Feb 17 15:45:56.419497 master-0 kubenswrapper[26425]: I0217 15:45:56.419039 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1304b6d0-6de1-4f39-a55d-bf89c4b41d08" path="/var/lib/kubelet/pods/1304b6d0-6de1-4f39-a55d-bf89c4b41d08/volumes" Feb 17 15:45:56.420013 master-0 kubenswrapper[26425]: I0217 15:45:56.419982 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb161df9-6094-4ed3-8a36-06a828cd1674" path="/var/lib/kubelet/pods/cb161df9-6094-4ed3-8a36-06a828cd1674/volumes" Feb 17 15:45:59.359801 master-0 kubenswrapper[26425]: I0217 15:45:59.359557 26425 scope.go:117] "RemoveContainer" containerID="b7eb29e9a37a840b6e9864e570a4e4c375af6f660033f93bc488d5c364dd7d23" Feb 17 15:46:00.842008 
master-0 kubenswrapper[26425]: I0217 15:46:00.841933 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e0b111ae-7c7d-499a-a124-c0e76e2603a6","Type":"ContainerStarted","Data":"cc4ab98b6a99abbf36eaed870e80bc6a2a54502a944a573aa8b45db24b5a1b61"} Feb 17 15:46:00.842977 master-0 kubenswrapper[26425]: I0217 15:46:00.842727 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 17 15:46:00.855491 master-0 kubenswrapper[26425]: I0217 15:46:00.848805 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ac242660-f8e4-4dcd-a723-5dcfd0d861fb","Type":"ContainerStarted","Data":"09b81fc6a1e2a7965f1268b11616324d61bef649b49190c554d8792fa03dd008"} Feb 17 15:46:00.855491 master-0 kubenswrapper[26425]: I0217 15:46:00.854392 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6001360b-0db0-4c81-8226-352e7f623535","Type":"ContainerStarted","Data":"66d3075e9877c57df1344e60911339ae5fed2dd400cd84d90b19f0f7580a78e2"} Feb 17 15:46:00.859195 master-0 kubenswrapper[26425]: I0217 15:46:00.858369 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b66f9649-znfnp" event={"ID":"f7825929-3b0c-402f-9c91-3f6a0e438ea3","Type":"ContainerStarted","Data":"5b4158816377622c08d10a7e27680ea3c454016b9b964c8c5cb5d09992f29811"} Feb 17 15:46:00.859195 master-0 kubenswrapper[26425]: I0217 15:46:00.858625 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75b66f9649-znfnp" Feb 17 15:46:00.862528 master-0 kubenswrapper[26425]: I0217 15:46:00.861612 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"892a05fd-9a7a-44db-8b41-98e748414a9c","Type":"ContainerStarted","Data":"875e87497399c88d88fa6f31481c0dde70dc40be528a0ad73bef0c9c8a3450dd"} Feb 17 15:46:00.918943 master-0 kubenswrapper[26425]: I0217 
15:46:00.918772 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=25.047760836 podStartE2EDuration="34.918750478s" podCreationTimestamp="2026-02-17 15:45:26 +0000 UTC" firstStartedPulling="2026-02-17 15:45:50.365784921 +0000 UTC m=+1812.257508779" lastFinishedPulling="2026-02-17 15:46:00.236774603 +0000 UTC m=+1822.128498421" observedRunningTime="2026-02-17 15:46:00.901866403 +0000 UTC m=+1822.793590241" watchObservedRunningTime="2026-02-17 15:46:00.918750478 +0000 UTC m=+1822.810474296" Feb 17 15:46:00.955487 master-0 kubenswrapper[26425]: I0217 15:46:00.953624 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75b66f9649-znfnp" podStartSLOduration=10.86939281 podStartE2EDuration="39.953605244s" podCreationTimestamp="2026-02-17 15:45:21 +0000 UTC" firstStartedPulling="2026-02-17 15:45:22.506504603 +0000 UTC m=+1784.398228421" lastFinishedPulling="2026-02-17 15:45:51.590717037 +0000 UTC m=+1813.482440855" observedRunningTime="2026-02-17 15:46:00.943534973 +0000 UTC m=+1822.835258801" watchObservedRunningTime="2026-02-17 15:46:00.953605244 +0000 UTC m=+1822.845329062" Feb 17 15:46:01.873015 master-0 kubenswrapper[26425]: I0217 15:46:01.872966 26425 generic.go:334] "Generic (PLEG): container finished" podID="a673e0a2-e190-4228-8263-de2cdc13293c" containerID="8c1e9329f61f40b2200079fc62f74cec89f1a29dd1d615de4027b2441587d96e" exitCode=0 Feb 17 15:46:01.873546 master-0 kubenswrapper[26425]: I0217 15:46:01.873023 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fxgqd" event={"ID":"a673e0a2-e190-4228-8263-de2cdc13293c","Type":"ContainerDied","Data":"8c1e9329f61f40b2200079fc62f74cec89f1a29dd1d615de4027b2441587d96e"} Feb 17 15:46:01.876664 master-0 kubenswrapper[26425]: I0217 15:46:01.876484 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"046f897b-506a-4978-b9cd-07283f1e3057","Type":"ContainerStarted","Data":"d5332da1044f437c69f3f362d3c67556799d62415e1adc52db2b043a37849989"} Feb 17 15:46:01.879598 master-0 kubenswrapper[26425]: I0217 15:46:01.879132 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hdbmn" event={"ID":"b83eed22-dd59-4e1d-91c1-fed8bead5b05","Type":"ContainerStarted","Data":"6e5d92903e05c1fb94dffa47dba9ef164989f08e62bcd9011216075c4d8aef90"} Feb 17 15:46:01.880724 master-0 kubenswrapper[26425]: I0217 15:46:01.880034 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-hdbmn" Feb 17 15:46:01.943935 master-0 kubenswrapper[26425]: I0217 15:46:01.943719 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hdbmn" podStartSLOduration=20.810387716 podStartE2EDuration="30.943695299s" podCreationTimestamp="2026-02-17 15:45:31 +0000 UTC" firstStartedPulling="2026-02-17 15:45:50.275014035 +0000 UTC m=+1812.166737853" lastFinishedPulling="2026-02-17 15:46:00.408321618 +0000 UTC m=+1822.300045436" observedRunningTime="2026-02-17 15:46:01.927697174 +0000 UTC m=+1823.819420992" watchObservedRunningTime="2026-02-17 15:46:01.943695299 +0000 UTC m=+1823.835419117" Feb 17 15:46:02.851717 master-0 kubenswrapper[26425]: I0217 15:46:02.850157 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:46:02.893619 master-0 kubenswrapper[26425]: I0217 15:46:02.893490 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5","Type":"ContainerStarted","Data":"7cce890fad9e79fd12c6cbd29d474658c922e25adc1283ca569da806108401c5"} Feb 17 15:46:02.899597 master-0 kubenswrapper[26425]: I0217 15:46:02.899553 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"3dc68acf-40ce-41a7-8633-6f19a9382a89","Type":"ContainerStarted","Data":"54ce7ad2c8ab9ee395e10da368db9beb35aa7ff1a0454ba6e043f985f430aef7"} Feb 17 15:46:02.906893 master-0 kubenswrapper[26425]: I0217 15:46:02.906814 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fxgqd" event={"ID":"a673e0a2-e190-4228-8263-de2cdc13293c","Type":"ContainerStarted","Data":"4725cc1507d2d265c3003b9061e6a7b0f1025e2e07ce7aa9dccacccdb1d4b6e5"} Feb 17 15:46:03.002541 master-0 kubenswrapper[26425]: I0217 15:46:03.002406 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b66f9649-znfnp"] Feb 17 15:46:03.002792 master-0 kubenswrapper[26425]: I0217 15:46:03.002638 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75b66f9649-znfnp" podUID="f7825929-3b0c-402f-9c91-3f6a0e438ea3" containerName="dnsmasq-dns" containerID="cri-o://5b4158816377622c08d10a7e27680ea3c454016b9b964c8c5cb5d09992f29811" gracePeriod=10 Feb 17 15:46:03.500160 master-0 kubenswrapper[26425]: I0217 15:46:03.500120 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75b66f9649-znfnp" Feb 17 15:46:03.626874 master-0 kubenswrapper[26425]: I0217 15:46:03.626820 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7825929-3b0c-402f-9c91-3f6a0e438ea3-dns-svc\") pod \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\" (UID: \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\") " Feb 17 15:46:03.626874 master-0 kubenswrapper[26425]: I0217 15:46:03.626876 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7qhm\" (UniqueName: \"kubernetes.io/projected/f7825929-3b0c-402f-9c91-3f6a0e438ea3-kube-api-access-n7qhm\") pod \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\" (UID: \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\") " Feb 17 15:46:03.627119 master-0 kubenswrapper[26425]: I0217 15:46:03.626978 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7825929-3b0c-402f-9c91-3f6a0e438ea3-config\") pod \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\" (UID: \"f7825929-3b0c-402f-9c91-3f6a0e438ea3\") " Feb 17 15:46:03.630022 master-0 kubenswrapper[26425]: I0217 15:46:03.629982 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7825929-3b0c-402f-9c91-3f6a0e438ea3-kube-api-access-n7qhm" (OuterVolumeSpecName: "kube-api-access-n7qhm") pod "f7825929-3b0c-402f-9c91-3f6a0e438ea3" (UID: "f7825929-3b0c-402f-9c91-3f6a0e438ea3"). InnerVolumeSpecName "kube-api-access-n7qhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:46:03.671737 master-0 kubenswrapper[26425]: I0217 15:46:03.671678 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7825929-3b0c-402f-9c91-3f6a0e438ea3-config" (OuterVolumeSpecName: "config") pod "f7825929-3b0c-402f-9c91-3f6a0e438ea3" (UID: "f7825929-3b0c-402f-9c91-3f6a0e438ea3"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:03.674182 master-0 kubenswrapper[26425]: I0217 15:46:03.674140 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7825929-3b0c-402f-9c91-3f6a0e438ea3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f7825929-3b0c-402f-9c91-3f6a0e438ea3" (UID: "f7825929-3b0c-402f-9c91-3f6a0e438ea3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:03.728863 master-0 kubenswrapper[26425]: I0217 15:46:03.728798 26425 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7825929-3b0c-402f-9c91-3f6a0e438ea3-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:03.728863 master-0 kubenswrapper[26425]: I0217 15:46:03.728850 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7qhm\" (UniqueName: \"kubernetes.io/projected/f7825929-3b0c-402f-9c91-3f6a0e438ea3-kube-api-access-n7qhm\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:03.728863 master-0 kubenswrapper[26425]: I0217 15:46:03.728862 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7825929-3b0c-402f-9c91-3f6a0e438ea3-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:03.922448 master-0 kubenswrapper[26425]: I0217 15:46:03.922391 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6001360b-0db0-4c81-8226-352e7f623535","Type":"ContainerStarted","Data":"575775a2fa65ed4bf873201fd3da5a474fcc46b8cd349bf73407b2f8699aef23"} Feb 17 15:46:03.924650 master-0 kubenswrapper[26425]: I0217 15:46:03.924593 26425 generic.go:334] "Generic (PLEG): container finished" podID="f7825929-3b0c-402f-9c91-3f6a0e438ea3" containerID="5b4158816377622c08d10a7e27680ea3c454016b9b964c8c5cb5d09992f29811" exitCode=0 Feb 17 15:46:03.924650 master-0 
kubenswrapper[26425]: I0217 15:46:03.924637 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b66f9649-znfnp" Feb 17 15:46:03.924844 master-0 kubenswrapper[26425]: I0217 15:46:03.924658 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b66f9649-znfnp" event={"ID":"f7825929-3b0c-402f-9c91-3f6a0e438ea3","Type":"ContainerDied","Data":"5b4158816377622c08d10a7e27680ea3c454016b9b964c8c5cb5d09992f29811"} Feb 17 15:46:03.924844 master-0 kubenswrapper[26425]: I0217 15:46:03.924761 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b66f9649-znfnp" event={"ID":"f7825929-3b0c-402f-9c91-3f6a0e438ea3","Type":"ContainerDied","Data":"832370e9734bf6636854f1e3a08dc66cca152b5982369bd1e35861ce6231079e"} Feb 17 15:46:03.924844 master-0 kubenswrapper[26425]: I0217 15:46:03.924797 26425 scope.go:117] "RemoveContainer" containerID="5b4158816377622c08d10a7e27680ea3c454016b9b964c8c5cb5d09992f29811" Feb 17 15:46:03.927855 master-0 kubenswrapper[26425]: I0217 15:46:03.927537 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"892a05fd-9a7a-44db-8b41-98e748414a9c","Type":"ContainerStarted","Data":"843d53381309317d5dbc5b7b1e1fda843b659bb47ea2012b124a7ce94c48b468"} Feb 17 15:46:03.930894 master-0 kubenswrapper[26425]: I0217 15:46:03.930782 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fxgqd" event={"ID":"a673e0a2-e190-4228-8263-de2cdc13293c","Type":"ContainerStarted","Data":"4f4cf4ce28b90c88dce1098998026a7a149daa1a316d15619222d8e78467d78f"} Feb 17 15:46:03.931244 master-0 kubenswrapper[26425]: I0217 15:46:03.931164 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-fxgqd" Feb 17 15:46:03.959729 master-0 kubenswrapper[26425]: I0217 15:46:03.956273 26425 scope.go:117] "RemoveContainer" 
containerID="1f3de48b1fe17f8ecc39b85267aa37cc13f911d9de549a3740e81bfc8e6ec593" Feb 17 15:46:04.005829 master-0 kubenswrapper[26425]: I0217 15:46:04.005782 26425 scope.go:117] "RemoveContainer" containerID="5b4158816377622c08d10a7e27680ea3c454016b9b964c8c5cb5d09992f29811" Feb 17 15:46:04.006292 master-0 kubenswrapper[26425]: E0217 15:46:04.006255 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b4158816377622c08d10a7e27680ea3c454016b9b964c8c5cb5d09992f29811\": container with ID starting with 5b4158816377622c08d10a7e27680ea3c454016b9b964c8c5cb5d09992f29811 not found: ID does not exist" containerID="5b4158816377622c08d10a7e27680ea3c454016b9b964c8c5cb5d09992f29811" Feb 17 15:46:04.006353 master-0 kubenswrapper[26425]: I0217 15:46:04.006295 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b4158816377622c08d10a7e27680ea3c454016b9b964c8c5cb5d09992f29811"} err="failed to get container status \"5b4158816377622c08d10a7e27680ea3c454016b9b964c8c5cb5d09992f29811\": rpc error: code = NotFound desc = could not find container \"5b4158816377622c08d10a7e27680ea3c454016b9b964c8c5cb5d09992f29811\": container with ID starting with 5b4158816377622c08d10a7e27680ea3c454016b9b964c8c5cb5d09992f29811 not found: ID does not exist" Feb 17 15:46:04.006353 master-0 kubenswrapper[26425]: I0217 15:46:04.006316 26425 scope.go:117] "RemoveContainer" containerID="1f3de48b1fe17f8ecc39b85267aa37cc13f911d9de549a3740e81bfc8e6ec593" Feb 17 15:46:04.006659 master-0 kubenswrapper[26425]: E0217 15:46:04.006629 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3de48b1fe17f8ecc39b85267aa37cc13f911d9de549a3740e81bfc8e6ec593\": container with ID starting with 1f3de48b1fe17f8ecc39b85267aa37cc13f911d9de549a3740e81bfc8e6ec593 not found: ID does not exist" 
containerID="1f3de48b1fe17f8ecc39b85267aa37cc13f911d9de549a3740e81bfc8e6ec593" Feb 17 15:46:04.006724 master-0 kubenswrapper[26425]: I0217 15:46:04.006656 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3de48b1fe17f8ecc39b85267aa37cc13f911d9de549a3740e81bfc8e6ec593"} err="failed to get container status \"1f3de48b1fe17f8ecc39b85267aa37cc13f911d9de549a3740e81bfc8e6ec593\": rpc error: code = NotFound desc = could not find container \"1f3de48b1fe17f8ecc39b85267aa37cc13f911d9de549a3740e81bfc8e6ec593\": container with ID starting with 1f3de48b1fe17f8ecc39b85267aa37cc13f911d9de549a3740e81bfc8e6ec593 not found: ID does not exist" Feb 17 15:46:04.064885 master-0 kubenswrapper[26425]: I0217 15:46:04.064078 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=18.637446541 podStartE2EDuration="29.064054308s" podCreationTimestamp="2026-02-17 15:45:35 +0000 UTC" firstStartedPulling="2026-02-17 15:45:52.243907922 +0000 UTC m=+1814.135631740" lastFinishedPulling="2026-02-17 15:46:02.670515689 +0000 UTC m=+1824.562239507" observedRunningTime="2026-02-17 15:46:04.055865601 +0000 UTC m=+1825.947589429" watchObservedRunningTime="2026-02-17 15:46:04.064054308 +0000 UTC m=+1825.955778136" Feb 17 15:46:04.375570 master-0 kubenswrapper[26425]: I0217 15:46:04.375484 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.374358452 podStartE2EDuration="33.375443895s" podCreationTimestamp="2026-02-17 15:45:31 +0000 UTC" firstStartedPulling="2026-02-17 15:45:51.664264542 +0000 UTC m=+1813.555988360" lastFinishedPulling="2026-02-17 15:46:02.665349985 +0000 UTC m=+1824.557073803" observedRunningTime="2026-02-17 15:46:04.362954435 +0000 UTC m=+1826.254678283" watchObservedRunningTime="2026-02-17 15:46:04.375443895 +0000 UTC m=+1826.267167713" Feb 17 15:46:04.555418 master-0 kubenswrapper[26425]: 
I0217 15:46:04.555331 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-fxgqd" podStartSLOduration=24.96584373 podStartE2EDuration="33.555309799s" podCreationTimestamp="2026-02-17 15:45:31 +0000 UTC" firstStartedPulling="2026-02-17 15:45:51.817554857 +0000 UTC m=+1813.709278675" lastFinishedPulling="2026-02-17 15:46:00.407020916 +0000 UTC m=+1822.298744744" observedRunningTime="2026-02-17 15:46:04.542132282 +0000 UTC m=+1826.433856130" watchObservedRunningTime="2026-02-17 15:46:04.555309799 +0000 UTC m=+1826.447033607" Feb 17 15:46:04.942599 master-0 kubenswrapper[26425]: I0217 15:46:04.942531 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-fxgqd" Feb 17 15:46:04.998197 master-0 kubenswrapper[26425]: I0217 15:46:04.998094 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b66f9649-znfnp"] Feb 17 15:46:05.078730 master-0 kubenswrapper[26425]: I0217 15:46:05.078658 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75b66f9649-znfnp"] Feb 17 15:46:05.630793 master-0 kubenswrapper[26425]: I0217 15:46:05.630732 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 17 15:46:05.631117 master-0 kubenswrapper[26425]: I0217 15:46:05.631098 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 17 15:46:05.680951 master-0 kubenswrapper[26425]: I0217 15:46:05.680887 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 17 15:46:05.828130 master-0 kubenswrapper[26425]: I0217 15:46:05.828056 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 17 15:46:05.828373 master-0 kubenswrapper[26425]: I0217 15:46:05.828158 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 17 15:46:05.865757 master-0 kubenswrapper[26425]: I0217 15:46:05.865688 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 17 15:46:06.017567 master-0 kubenswrapper[26425]: I0217 15:46:06.017492 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 17 15:46:06.040695 master-0 kubenswrapper[26425]: I0217 15:46:06.040598 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 17 15:46:06.415851 master-0 kubenswrapper[26425]: I0217 15:46:06.415741 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7825929-3b0c-402f-9c91-3f6a0e438ea3" path="/var/lib/kubelet/pods/f7825929-3b0c-402f-9c91-3f6a0e438ea3/volumes" Feb 17 15:46:06.858702 master-0 kubenswrapper[26425]: I0217 15:46:06.858526 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 17 15:46:06.977171 master-0 kubenswrapper[26425]: I0217 15:46:06.976556 26425 generic.go:334] "Generic (PLEG): container finished" podID="046f897b-506a-4978-b9cd-07283f1e3057" containerID="d5332da1044f437c69f3f362d3c67556799d62415e1adc52db2b043a37849989" exitCode=0 Feb 17 15:46:06.977171 master-0 kubenswrapper[26425]: I0217 15:46:06.976737 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"046f897b-506a-4978-b9cd-07283f1e3057","Type":"ContainerDied","Data":"d5332da1044f437c69f3f362d3c67556799d62415e1adc52db2b043a37849989"} Feb 17 15:46:06.982942 master-0 kubenswrapper[26425]: I0217 15:46:06.982893 26425 generic.go:334] "Generic (PLEG): container finished" podID="ac242660-f8e4-4dcd-a723-5dcfd0d861fb" containerID="09b81fc6a1e2a7965f1268b11616324d61bef649b49190c554d8792fa03dd008" exitCode=0 Feb 17 15:46:06.983207 master-0 kubenswrapper[26425]: I0217 15:46:06.983158 26425 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ac242660-f8e4-4dcd-a723-5dcfd0d861fb","Type":"ContainerDied","Data":"09b81fc6a1e2a7965f1268b11616324d61bef649b49190c554d8792fa03dd008"} Feb 17 15:46:07.993914 master-0 kubenswrapper[26425]: I0217 15:46:07.993840 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"046f897b-506a-4978-b9cd-07283f1e3057","Type":"ContainerStarted","Data":"cec801e850f3508a81c318ae1780fff14db793b865633b891f0a27a8ac708cb0"} Feb 17 15:46:07.996775 master-0 kubenswrapper[26425]: I0217 15:46:07.996736 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ac242660-f8e4-4dcd-a723-5dcfd0d861fb","Type":"ContainerStarted","Data":"0b09f167f7d36ae566c4389adcce330b0c1ed84dc36fed97c26af7d0361463fc"} Feb 17 15:46:09.021507 master-0 kubenswrapper[26425]: I0217 15:46:09.018392 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=33.838725087 podStartE2EDuration="44.01835822s" podCreationTimestamp="2026-02-17 15:45:25 +0000 UTC" firstStartedPulling="2026-02-17 15:45:50.269442322 +0000 UTC m=+1812.161166140" lastFinishedPulling="2026-02-17 15:46:00.449075455 +0000 UTC m=+1822.340799273" observedRunningTime="2026-02-17 15:46:09.011747952 +0000 UTC m=+1830.903471800" watchObservedRunningTime="2026-02-17 15:46:09.01835822 +0000 UTC m=+1830.910082028" Feb 17 15:46:09.933112 master-0 kubenswrapper[26425]: I0217 15:46:09.933011 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=36.809571419 podStartE2EDuration="46.932994945s" podCreationTimestamp="2026-02-17 15:45:23 +0000 UTC" firstStartedPulling="2026-02-17 15:45:50.283694873 +0000 UTC m=+1812.175418691" lastFinishedPulling="2026-02-17 15:46:00.407118399 +0000 UTC m=+1822.298842217" 
observedRunningTime="2026-02-17 15:46:09.917261046 +0000 UTC m=+1831.808984954" watchObservedRunningTime="2026-02-17 15:46:09.932994945 +0000 UTC m=+1831.824718763" Feb 17 15:46:10.353328 master-0 kubenswrapper[26425]: I0217 15:46:10.353177 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 17 15:46:10.353328 master-0 kubenswrapper[26425]: I0217 15:46:10.353263 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 17 15:46:10.876336 master-0 kubenswrapper[26425]: I0217 15:46:10.876262 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fd854f54c-g52n4"] Feb 17 15:46:10.877018 master-0 kubenswrapper[26425]: E0217 15:46:10.876995 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7825929-3b0c-402f-9c91-3f6a0e438ea3" containerName="init" Feb 17 15:46:10.877018 master-0 kubenswrapper[26425]: I0217 15:46:10.877016 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7825929-3b0c-402f-9c91-3f6a0e438ea3" containerName="init" Feb 17 15:46:10.877116 master-0 kubenswrapper[26425]: E0217 15:46:10.877048 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1304b6d0-6de1-4f39-a55d-bf89c4b41d08" containerName="init" Feb 17 15:46:10.877116 master-0 kubenswrapper[26425]: I0217 15:46:10.877054 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="1304b6d0-6de1-4f39-a55d-bf89c4b41d08" containerName="init" Feb 17 15:46:10.877116 master-0 kubenswrapper[26425]: E0217 15:46:10.877061 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7825929-3b0c-402f-9c91-3f6a0e438ea3" containerName="dnsmasq-dns" Feb 17 15:46:10.877116 master-0 kubenswrapper[26425]: I0217 15:46:10.877068 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7825929-3b0c-402f-9c91-3f6a0e438ea3" containerName="dnsmasq-dns" Feb 17 15:46:10.877266 master-0 kubenswrapper[26425]: E0217 
15:46:10.877127 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb161df9-6094-4ed3-8a36-06a828cd1674" containerName="init" Feb 17 15:46:10.877266 master-0 kubenswrapper[26425]: I0217 15:46:10.877136 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb161df9-6094-4ed3-8a36-06a828cd1674" containerName="init" Feb 17 15:46:10.877475 master-0 kubenswrapper[26425]: I0217 15:46:10.877333 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7825929-3b0c-402f-9c91-3f6a0e438ea3" containerName="dnsmasq-dns" Feb 17 15:46:10.877475 master-0 kubenswrapper[26425]: I0217 15:46:10.877355 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb161df9-6094-4ed3-8a36-06a828cd1674" containerName="init" Feb 17 15:46:10.877475 master-0 kubenswrapper[26425]: I0217 15:46:10.877374 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="1304b6d0-6de1-4f39-a55d-bf89c4b41d08" containerName="init" Feb 17 15:46:10.878380 master-0 kubenswrapper[26425]: I0217 15:46:10.878361 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:10.880865 master-0 kubenswrapper[26425]: I0217 15:46:10.880814 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 17 15:46:11.012092 master-0 kubenswrapper[26425]: I0217 15:46:11.003698 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdkdb\" (UniqueName: \"kubernetes.io/projected/44357786-5699-463c-95e7-f84257834fa0-kube-api-access-sdkdb\") pod \"dnsmasq-dns-6fd854f54c-g52n4\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:11.012092 master-0 kubenswrapper[26425]: I0217 15:46:11.003763 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd854f54c-g52n4\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:11.012092 master-0 kubenswrapper[26425]: I0217 15:46:11.003883 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-config\") pod \"dnsmasq-dns-6fd854f54c-g52n4\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:11.012092 master-0 kubenswrapper[26425]: I0217 15:46:11.003912 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-dns-svc\") pod \"dnsmasq-dns-6fd854f54c-g52n4\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:11.114481 master-0 kubenswrapper[26425]: I0217 15:46:11.108006 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-config\") pod \"dnsmasq-dns-6fd854f54c-g52n4\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:11.114481 master-0 kubenswrapper[26425]: I0217 15:46:11.108102 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-dns-svc\") pod \"dnsmasq-dns-6fd854f54c-g52n4\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:11.114481 master-0 kubenswrapper[26425]: I0217 15:46:11.108197 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdkdb\" (UniqueName: \"kubernetes.io/projected/44357786-5699-463c-95e7-f84257834fa0-kube-api-access-sdkdb\") pod \"dnsmasq-dns-6fd854f54c-g52n4\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:11.114481 master-0 kubenswrapper[26425]: I0217 15:46:11.108227 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd854f54c-g52n4\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:11.114481 master-0 kubenswrapper[26425]: I0217 15:46:11.109335 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-dns-svc\") pod \"dnsmasq-dns-6fd854f54c-g52n4\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:11.114481 master-0 kubenswrapper[26425]: I0217 15:46:11.109419 26425 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd854f54c-g52n4\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:11.114481 master-0 kubenswrapper[26425]: I0217 15:46:11.109898 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-config\") pod \"dnsmasq-dns-6fd854f54c-g52n4\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:11.114481 master-0 kubenswrapper[26425]: I0217 15:46:11.111106 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd854f54c-g52n4"] Feb 17 15:46:11.189254 master-0 kubenswrapper[26425]: I0217 15:46:11.188547 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdkdb\" (UniqueName: \"kubernetes.io/projected/44357786-5699-463c-95e7-f84257834fa0-kube-api-access-sdkdb\") pod \"dnsmasq-dns-6fd854f54c-g52n4\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:11.200069 master-0 kubenswrapper[26425]: I0217 15:46:11.200006 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wwqh5"] Feb 17 15:46:11.201023 master-0 kubenswrapper[26425]: I0217 15:46:11.200964 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:11.202265 master-0 kubenswrapper[26425]: I0217 15:46:11.202068 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.207546 master-0 kubenswrapper[26425]: I0217 15:46:11.207498 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 17 15:46:11.217969 master-0 kubenswrapper[26425]: I0217 15:46:11.217921 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wwqh5"] Feb 17 15:46:11.319081 master-0 kubenswrapper[26425]: I0217 15:46:11.319027 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bab4403-adca-4543-9679-0f1c19029d03-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.319495 master-0 kubenswrapper[26425]: I0217 15:46:11.319474 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0bab4403-adca-4543-9679-0f1c19029d03-ovn-rundir\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.319599 master-0 kubenswrapper[26425]: I0217 15:46:11.319582 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bab4403-adca-4543-9679-0f1c19029d03-combined-ca-bundle\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.319723 master-0 kubenswrapper[26425]: I0217 15:46:11.319709 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bab4403-adca-4543-9679-0f1c19029d03-config\") pod 
\"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.319835 master-0 kubenswrapper[26425]: I0217 15:46:11.319818 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwh7r\" (UniqueName: \"kubernetes.io/projected/0bab4403-adca-4543-9679-0f1c19029d03-kube-api-access-xwh7r\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.319934 master-0 kubenswrapper[26425]: I0217 15:46:11.319919 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0bab4403-adca-4543-9679-0f1c19029d03-ovs-rundir\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.388078 master-0 kubenswrapper[26425]: I0217 15:46:11.388011 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 17 15:46:11.388078 master-0 kubenswrapper[26425]: I0217 15:46:11.388070 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 17 15:46:11.422761 master-0 kubenswrapper[26425]: I0217 15:46:11.422693 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bab4403-adca-4543-9679-0f1c19029d03-config\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.422993 master-0 kubenswrapper[26425]: I0217 15:46:11.422775 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwh7r\" (UniqueName: 
\"kubernetes.io/projected/0bab4403-adca-4543-9679-0f1c19029d03-kube-api-access-xwh7r\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.423063 master-0 kubenswrapper[26425]: I0217 15:46:11.423013 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0bab4403-adca-4543-9679-0f1c19029d03-ovs-rundir\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.423411 master-0 kubenswrapper[26425]: I0217 15:46:11.423370 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bab4403-adca-4543-9679-0f1c19029d03-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.423493 master-0 kubenswrapper[26425]: I0217 15:46:11.423434 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0bab4403-adca-4543-9679-0f1c19029d03-ovs-rundir\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.423586 master-0 kubenswrapper[26425]: I0217 15:46:11.423547 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0bab4403-adca-4543-9679-0f1c19029d03-ovn-rundir\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.423586 master-0 kubenswrapper[26425]: I0217 15:46:11.423578 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0bab4403-adca-4543-9679-0f1c19029d03-combined-ca-bundle\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.423695 master-0 kubenswrapper[26425]: I0217 15:46:11.423657 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bab4403-adca-4543-9679-0f1c19029d03-config\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.423774 master-0 kubenswrapper[26425]: I0217 15:46:11.423750 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0bab4403-adca-4543-9679-0f1c19029d03-ovn-rundir\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.426765 master-0 kubenswrapper[26425]: I0217 15:46:11.426674 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bab4403-adca-4543-9679-0f1c19029d03-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.432676 master-0 kubenswrapper[26425]: I0217 15:46:11.432628 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bab4403-adca-4543-9679-0f1c19029d03-combined-ca-bundle\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.723477 master-0 kubenswrapper[26425]: I0217 15:46:11.723405 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwh7r\" (UniqueName: 
\"kubernetes.io/projected/0bab4403-adca-4543-9679-0f1c19029d03-kube-api-access-xwh7r\") pod \"ovn-controller-metrics-wwqh5\" (UID: \"0bab4403-adca-4543-9679-0f1c19029d03\") " pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:11.907809 master-0 kubenswrapper[26425]: I0217 15:46:11.907670 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd854f54c-g52n4"] Feb 17 15:46:11.925179 master-0 kubenswrapper[26425]: I0217 15:46:11.925131 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wwqh5" Feb 17 15:46:12.056482 master-0 kubenswrapper[26425]: I0217 15:46:12.056258 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 17 15:46:12.063884 master-0 kubenswrapper[26425]: I0217 15:46:12.059497 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 15:46:12.063884 master-0 kubenswrapper[26425]: I0217 15:46:12.062273 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 17 15:46:12.063884 master-0 kubenswrapper[26425]: I0217 15:46:12.062430 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 17 15:46:12.063884 master-0 kubenswrapper[26425]: I0217 15:46:12.062568 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 17 15:46:12.113377 master-0 kubenswrapper[26425]: I0217 15:46:12.110119 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 15:46:12.125794 master-0 kubenswrapper[26425]: I0217 15:46:12.119082 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" event={"ID":"44357786-5699-463c-95e7-f84257834fa0","Type":"ContainerStarted","Data":"314c810e6cadd8fbc7403f9be0eb1bdbd9ea43bd612bb15692384c3556e6b32c"} Feb 17 15:46:12.167561 master-0 
kubenswrapper[26425]: I0217 15:46:12.164550 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd854f54c-g52n4"] Feb 17 15:46:12.167561 master-0 kubenswrapper[26425]: I0217 15:46:12.165087 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753fb81c-8966-4b71-b31c-95deeed46228-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.167561 master-0 kubenswrapper[26425]: I0217 15:46:12.165135 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/753fb81c-8966-4b71-b31c-95deeed46228-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.167561 master-0 kubenswrapper[26425]: I0217 15:46:12.165163 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/753fb81c-8966-4b71-b31c-95deeed46228-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.170593 master-0 kubenswrapper[26425]: I0217 15:46:12.170235 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/753fb81c-8966-4b71-b31c-95deeed46228-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.170593 master-0 kubenswrapper[26425]: I0217 15:46:12.170314 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8trtd\" (UniqueName: 
\"kubernetes.io/projected/753fb81c-8966-4b71-b31c-95deeed46228-kube-api-access-8trtd\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.170593 master-0 kubenswrapper[26425]: I0217 15:46:12.170409 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/753fb81c-8966-4b71-b31c-95deeed46228-scripts\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.170593 master-0 kubenswrapper[26425]: I0217 15:46:12.170485 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753fb81c-8966-4b71-b31c-95deeed46228-config\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.210129 master-0 kubenswrapper[26425]: I0217 15:46:12.210036 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-55jsp"] Feb 17 15:46:12.212336 master-0 kubenswrapper[26425]: I0217 15:46:12.212269 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.215304 master-0 kubenswrapper[26425]: I0217 15:46:12.215248 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 17 15:46:12.255672 master-0 kubenswrapper[26425]: I0217 15:46:12.253237 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-55jsp"] Feb 17 15:46:12.278500 master-0 kubenswrapper[26425]: I0217 15:46:12.278449 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/753fb81c-8966-4b71-b31c-95deeed46228-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.279191 master-0 kubenswrapper[26425]: I0217 15:46:12.279154 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8zr4\" (UniqueName: \"kubernetes.io/projected/3519978a-5c7a-4466-9ad6-5750be0683e2-kube-api-access-d8zr4\") pod \"dnsmasq-dns-6fd49994df-55jsp\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.279266 master-0 kubenswrapper[26425]: I0217 15:46:12.279242 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/753fb81c-8966-4b71-b31c-95deeed46228-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.279266 master-0 kubenswrapper[26425]: I0217 15:46:12.279263 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8trtd\" (UniqueName: \"kubernetes.io/projected/753fb81c-8966-4b71-b31c-95deeed46228-kube-api-access-8trtd\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " 
pod="openstack/ovn-northd-0" Feb 17 15:46:12.279359 master-0 kubenswrapper[26425]: I0217 15:46:12.279296 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-dns-svc\") pod \"dnsmasq-dns-6fd49994df-55jsp\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.279359 master-0 kubenswrapper[26425]: I0217 15:46:12.279328 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/753fb81c-8966-4b71-b31c-95deeed46228-scripts\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.279359 master-0 kubenswrapper[26425]: I0217 15:46:12.279356 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753fb81c-8966-4b71-b31c-95deeed46228-config\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.279643 master-0 kubenswrapper[26425]: I0217 15:46:12.279382 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-55jsp\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.279643 master-0 kubenswrapper[26425]: I0217 15:46:12.279424 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-config\") pod \"dnsmasq-dns-6fd49994df-55jsp\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.279643 
master-0 kubenswrapper[26425]: I0217 15:46:12.279519 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-55jsp\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.279643 master-0 kubenswrapper[26425]: I0217 15:46:12.279549 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753fb81c-8966-4b71-b31c-95deeed46228-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.279643 master-0 kubenswrapper[26425]: I0217 15:46:12.279567 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/753fb81c-8966-4b71-b31c-95deeed46228-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.284835 master-0 kubenswrapper[26425]: I0217 15:46:12.280685 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/753fb81c-8966-4b71-b31c-95deeed46228-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.284835 master-0 kubenswrapper[26425]: I0217 15:46:12.281547 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/753fb81c-8966-4b71-b31c-95deeed46228-scripts\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.284835 master-0 kubenswrapper[26425]: I0217 15:46:12.282315 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/753fb81c-8966-4b71-b31c-95deeed46228-config\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.284835 master-0 kubenswrapper[26425]: I0217 15:46:12.283755 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/753fb81c-8966-4b71-b31c-95deeed46228-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.292668 master-0 kubenswrapper[26425]: I0217 15:46:12.290481 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/753fb81c-8966-4b71-b31c-95deeed46228-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.299622 master-0 kubenswrapper[26425]: I0217 15:46:12.299553 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753fb81c-8966-4b71-b31c-95deeed46228-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.308736 master-0 kubenswrapper[26425]: I0217 15:46:12.308399 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8trtd\" (UniqueName: \"kubernetes.io/projected/753fb81c-8966-4b71-b31c-95deeed46228-kube-api-access-8trtd\") pod \"ovn-northd-0\" (UID: \"753fb81c-8966-4b71-b31c-95deeed46228\") " pod="openstack/ovn-northd-0" Feb 17 15:46:12.388512 master-0 kubenswrapper[26425]: I0217 15:46:12.385752 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-55jsp\" 
(UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.388512 master-0 kubenswrapper[26425]: I0217 15:46:12.386047 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-config\") pod \"dnsmasq-dns-6fd49994df-55jsp\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.388512 master-0 kubenswrapper[26425]: I0217 15:46:12.386117 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-55jsp\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.388512 master-0 kubenswrapper[26425]: I0217 15:46:12.386177 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8zr4\" (UniqueName: \"kubernetes.io/projected/3519978a-5c7a-4466-9ad6-5750be0683e2-kube-api-access-d8zr4\") pod \"dnsmasq-dns-6fd49994df-55jsp\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.388512 master-0 kubenswrapper[26425]: I0217 15:46:12.386233 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-dns-svc\") pod \"dnsmasq-dns-6fd49994df-55jsp\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.388512 master-0 kubenswrapper[26425]: I0217 15:46:12.386866 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-55jsp\" (UID: 
\"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.388512 master-0 kubenswrapper[26425]: I0217 15:46:12.387078 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-dns-svc\") pod \"dnsmasq-dns-6fd49994df-55jsp\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.388512 master-0 kubenswrapper[26425]: I0217 15:46:12.387631 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-55jsp\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.388512 master-0 kubenswrapper[26425]: I0217 15:46:12.387904 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-config\") pod \"dnsmasq-dns-6fd49994df-55jsp\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.406294 master-0 kubenswrapper[26425]: I0217 15:46:12.406238 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8zr4\" (UniqueName: \"kubernetes.io/projected/3519978a-5c7a-4466-9ad6-5750be0683e2-kube-api-access-d8zr4\") pod \"dnsmasq-dns-6fd49994df-55jsp\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") " pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.490918 master-0 kubenswrapper[26425]: I0217 15:46:12.490682 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 15:46:12.590384 master-0 kubenswrapper[26425]: I0217 15:46:12.587962 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:12.599752 master-0 kubenswrapper[26425]: I0217 15:46:12.599682 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wwqh5"] Feb 17 15:46:12.614339 master-0 kubenswrapper[26425]: W0217 15:46:12.614254 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bab4403_adca_4543_9679_0f1c19029d03.slice/crio-9549dc075673ddc93a19870177e06ffc097be3948236861ad92e53fcdd9d4e51 WatchSource:0}: Error finding container 9549dc075673ddc93a19870177e06ffc097be3948236861ad92e53fcdd9d4e51: Status 404 returned error can't find the container with id 9549dc075673ddc93a19870177e06ffc097be3948236861ad92e53fcdd9d4e51 Feb 17 15:46:12.925617 master-0 kubenswrapper[26425]: E0217 15:46:12.923871 26425 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:52364->192.168.32.10:46723: write tcp 192.168.32.10:52364->192.168.32.10:46723: write: broken pipe Feb 17 15:46:13.018410 master-0 kubenswrapper[26425]: I0217 15:46:13.018356 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 15:46:13.125291 master-0 kubenswrapper[26425]: I0217 15:46:13.125191 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wwqh5" event={"ID":"0bab4403-adca-4543-9679-0f1c19029d03","Type":"ContainerStarted","Data":"7e576566a865e271e17eee035b07791c7d83ffc4a2f30fc1869dc6aeefe43418"} Feb 17 15:46:13.125291 master-0 kubenswrapper[26425]: I0217 15:46:13.125284 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wwqh5" event={"ID":"0bab4403-adca-4543-9679-0f1c19029d03","Type":"ContainerStarted","Data":"9549dc075673ddc93a19870177e06ffc097be3948236861ad92e53fcdd9d4e51"} Feb 17 15:46:13.127352 master-0 kubenswrapper[26425]: I0217 15:46:13.127311 26425 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"753fb81c-8966-4b71-b31c-95deeed46228","Type":"ContainerStarted","Data":"07f35671e3fbabc38e82b28e33a9642d07b0e64968928a23e11b3ed9a1bf6746"} Feb 17 15:46:13.131389 master-0 kubenswrapper[26425]: I0217 15:46:13.128798 26425 generic.go:334] "Generic (PLEG): container finished" podID="44357786-5699-463c-95e7-f84257834fa0" containerID="7eae4e6c052a830604f3d18569c2b2035d71f2903f6ac4ff635ee69182d1a889" exitCode=0 Feb 17 15:46:13.131389 master-0 kubenswrapper[26425]: I0217 15:46:13.128836 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" event={"ID":"44357786-5699-463c-95e7-f84257834fa0","Type":"ContainerDied","Data":"7eae4e6c052a830604f3d18569c2b2035d71f2903f6ac4ff635ee69182d1a889"} Feb 17 15:46:13.169109 master-0 kubenswrapper[26425]: I0217 15:46:13.168984 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wwqh5" podStartSLOduration=3.168955898 podStartE2EDuration="3.168955898s" podCreationTimestamp="2026-02-17 15:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:46:13.153025755 +0000 UTC m=+1835.044749603" watchObservedRunningTime="2026-02-17 15:46:13.168955898 +0000 UTC m=+1835.060679736" Feb 17 15:46:13.235185 master-0 kubenswrapper[26425]: I0217 15:46:13.235110 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-55jsp"] Feb 17 15:46:13.607006 master-0 kubenswrapper[26425]: I0217 15:46:13.606956 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:13.728766 master-0 kubenswrapper[26425]: I0217 15:46:13.728611 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-ovsdbserver-sb\") pod \"44357786-5699-463c-95e7-f84257834fa0\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " Feb 17 15:46:13.728947 master-0 kubenswrapper[26425]: I0217 15:46:13.728865 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdkdb\" (UniqueName: \"kubernetes.io/projected/44357786-5699-463c-95e7-f84257834fa0-kube-api-access-sdkdb\") pod \"44357786-5699-463c-95e7-f84257834fa0\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " Feb 17 15:46:13.728947 master-0 kubenswrapper[26425]: I0217 15:46:13.728905 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-config\") pod \"44357786-5699-463c-95e7-f84257834fa0\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " Feb 17 15:46:13.729051 master-0 kubenswrapper[26425]: I0217 15:46:13.728966 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-dns-svc\") pod \"44357786-5699-463c-95e7-f84257834fa0\" (UID: \"44357786-5699-463c-95e7-f84257834fa0\") " Feb 17 15:46:13.739824 master-0 kubenswrapper[26425]: I0217 15:46:13.739742 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44357786-5699-463c-95e7-f84257834fa0-kube-api-access-sdkdb" (OuterVolumeSpecName: "kube-api-access-sdkdb") pod "44357786-5699-463c-95e7-f84257834fa0" (UID: "44357786-5699-463c-95e7-f84257834fa0"). InnerVolumeSpecName "kube-api-access-sdkdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:46:13.756776 master-0 kubenswrapper[26425]: I0217 15:46:13.756695 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44357786-5699-463c-95e7-f84257834fa0" (UID: "44357786-5699-463c-95e7-f84257834fa0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:13.763895 master-0 kubenswrapper[26425]: I0217 15:46:13.763817 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44357786-5699-463c-95e7-f84257834fa0" (UID: "44357786-5699-463c-95e7-f84257834fa0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:13.770019 master-0 kubenswrapper[26425]: I0217 15:46:13.769931 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-config" (OuterVolumeSpecName: "config") pod "44357786-5699-463c-95e7-f84257834fa0" (UID: "44357786-5699-463c-95e7-f84257834fa0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:13.831280 master-0 kubenswrapper[26425]: I0217 15:46:13.831219 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdkdb\" (UniqueName: \"kubernetes.io/projected/44357786-5699-463c-95e7-f84257834fa0-kube-api-access-sdkdb\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:13.831280 master-0 kubenswrapper[26425]: I0217 15:46:13.831268 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:13.831280 master-0 kubenswrapper[26425]: I0217 15:46:13.831278 26425 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:13.831280 master-0 kubenswrapper[26425]: I0217 15:46:13.831287 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44357786-5699-463c-95e7-f84257834fa0-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:14.051410 master-0 kubenswrapper[26425]: I0217 15:46:14.050984 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 17 15:46:14.051649 master-0 kubenswrapper[26425]: E0217 15:46:14.051439 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44357786-5699-463c-95e7-f84257834fa0" containerName="init" Feb 17 15:46:14.051649 master-0 kubenswrapper[26425]: I0217 15:46:14.051453 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="44357786-5699-463c-95e7-f84257834fa0" containerName="init" Feb 17 15:46:14.051745 master-0 kubenswrapper[26425]: I0217 15:46:14.051711 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="44357786-5699-463c-95e7-f84257834fa0" containerName="init" Feb 17 15:46:14.058130 master-0 
kubenswrapper[26425]: I0217 15:46:14.058071 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 15:46:14.060196 master-0 kubenswrapper[26425]: I0217 15:46:14.060140 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 17 15:46:14.060904 master-0 kubenswrapper[26425]: I0217 15:46:14.060866 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 17 15:46:14.061170 master-0 kubenswrapper[26425]: I0217 15:46:14.061148 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 17 15:46:14.075055 master-0 kubenswrapper[26425]: I0217 15:46:14.074927 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 15:46:14.141900 master-0 kubenswrapper[26425]: I0217 15:46:14.141832 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvpts\" (UniqueName: \"kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-kube-api-access-lvpts\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.142100 master-0 kubenswrapper[26425]: I0217 15:46:14.141915 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0a9aa702-781f-4cf7-88c9-3ff414265810-cache\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.142100 master-0 kubenswrapper[26425]: I0217 15:46:14.142081 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9aa702-781f-4cf7-88c9-3ff414265810-combined-ca-bundle\") pod \"swift-storage-0\" (UID: 
\"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.142202 master-0 kubenswrapper[26425]: I0217 15:46:14.142119 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-58883985-d49a-4529-bbce-ec8f3e112255\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0f5ab871-0d5e-4f70-8b0e-971c039dae4d\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.142202 master-0 kubenswrapper[26425]: I0217 15:46:14.142146 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0a9aa702-781f-4cf7-88c9-3ff414265810-lock\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.142202 master-0 kubenswrapper[26425]: I0217 15:46:14.142167 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.143109 master-0 kubenswrapper[26425]: I0217 15:46:14.143065 26425 generic.go:334] "Generic (PLEG): container finished" podID="3519978a-5c7a-4466-9ad6-5750be0683e2" containerID="089ed7ad769beba95ba699f2afa1f85857193ed8e315d81ed338197ff5300062" exitCode=0 Feb 17 15:46:14.143188 master-0 kubenswrapper[26425]: I0217 15:46:14.143130 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-55jsp" event={"ID":"3519978a-5c7a-4466-9ad6-5750be0683e2","Type":"ContainerDied","Data":"089ed7ad769beba95ba699f2afa1f85857193ed8e315d81ed338197ff5300062"} Feb 17 15:46:14.143188 master-0 kubenswrapper[26425]: I0217 15:46:14.143161 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6fd49994df-55jsp" event={"ID":"3519978a-5c7a-4466-9ad6-5750be0683e2","Type":"ContainerStarted","Data":"3e2a990f071946ced6ab02f916d710024beb43ad13c5ddcdcf6b66a10e5ed526"} Feb 17 15:46:14.146368 master-0 kubenswrapper[26425]: I0217 15:46:14.146322 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" Feb 17 15:46:14.150032 master-0 kubenswrapper[26425]: I0217 15:46:14.147833 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd854f54c-g52n4" event={"ID":"44357786-5699-463c-95e7-f84257834fa0","Type":"ContainerDied","Data":"314c810e6cadd8fbc7403f9be0eb1bdbd9ea43bd612bb15692384c3556e6b32c"} Feb 17 15:46:14.150032 master-0 kubenswrapper[26425]: I0217 15:46:14.147920 26425 scope.go:117] "RemoveContainer" containerID="7eae4e6c052a830604f3d18569c2b2035d71f2903f6ac4ff635ee69182d1a889" Feb 17 15:46:14.248569 master-0 kubenswrapper[26425]: I0217 15:46:14.246377 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9aa702-781f-4cf7-88c9-3ff414265810-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.248569 master-0 kubenswrapper[26425]: I0217 15:46:14.246489 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-58883985-d49a-4529-bbce-ec8f3e112255\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0f5ab871-0d5e-4f70-8b0e-971c039dae4d\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.248569 master-0 kubenswrapper[26425]: I0217 15:46:14.246594 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0a9aa702-781f-4cf7-88c9-3ff414265810-lock\") pod \"swift-storage-0\" (UID: 
\"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.248569 master-0 kubenswrapper[26425]: I0217 15:46:14.246624 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.248569 master-0 kubenswrapper[26425]: I0217 15:46:14.246713 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvpts\" (UniqueName: \"kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-kube-api-access-lvpts\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.248569 master-0 kubenswrapper[26425]: I0217 15:46:14.246778 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0a9aa702-781f-4cf7-88c9-3ff414265810-cache\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.251598 master-0 kubenswrapper[26425]: I0217 15:46:14.251292 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0a9aa702-781f-4cf7-88c9-3ff414265810-cache\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.251598 master-0 kubenswrapper[26425]: I0217 15:46:14.251413 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0a9aa702-781f-4cf7-88c9-3ff414265810-lock\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.251598 master-0 kubenswrapper[26425]: E0217 15:46:14.249988 26425 projected.go:288] 
Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 15:46:14.251598 master-0 kubenswrapper[26425]: E0217 15:46:14.251534 26425 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 15:46:14.251598 master-0 kubenswrapper[26425]: E0217 15:46:14.251583 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift podName:0a9aa702-781f-4cf7-88c9-3ff414265810 nodeName:}" failed. No retries permitted until 2026-02-17 15:46:14.75156659 +0000 UTC m=+1836.643290398 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift") pod "swift-storage-0" (UID: "0a9aa702-781f-4cf7-88c9-3ff414265810") : configmap "swift-ring-files" not found Feb 17 15:46:14.252144 master-0 kubenswrapper[26425]: I0217 15:46:14.252092 26425 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 15:46:14.252207 master-0 kubenswrapper[26425]: I0217 15:46:14.252146 26425 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-58883985-d49a-4529-bbce-ec8f3e112255\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0f5ab871-0d5e-4f70-8b0e-971c039dae4d\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a9930a29eb7f28eae4751001c70b2404e9c2637b3a7e458958fca0928fad1f90/globalmount\"" pod="openstack/swift-storage-0" Feb 17 15:46:14.255134 master-0 kubenswrapper[26425]: I0217 15:46:14.255088 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9aa702-781f-4cf7-88c9-3ff414265810-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.274655 master-0 kubenswrapper[26425]: I0217 15:46:14.272229 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvpts\" (UniqueName: \"kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-kube-api-access-lvpts\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:14.444742 master-0 kubenswrapper[26425]: I0217 15:46:14.444013 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd854f54c-g52n4"] Feb 17 15:46:14.455574 master-0 kubenswrapper[26425]: I0217 15:46:14.455172 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fd854f54c-g52n4"] Feb 17 15:46:14.771122 master-0 kubenswrapper[26425]: I0217 15:46:14.771018 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " 
pod="openstack/swift-storage-0" Feb 17 15:46:14.771736 master-0 kubenswrapper[26425]: E0217 15:46:14.771394 26425 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 15:46:14.771736 master-0 kubenswrapper[26425]: E0217 15:46:14.771430 26425 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 15:46:14.771836 master-0 kubenswrapper[26425]: E0217 15:46:14.771737 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift podName:0a9aa702-781f-4cf7-88c9-3ff414265810 nodeName:}" failed. No retries permitted until 2026-02-17 15:46:15.771711894 +0000 UTC m=+1837.663435772 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift") pod "swift-storage-0" (UID: "0a9aa702-781f-4cf7-88c9-3ff414265810") : configmap "swift-ring-files" not found Feb 17 15:46:14.906599 master-0 kubenswrapper[26425]: I0217 15:46:14.906536 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4xb95"] Feb 17 15:46:14.908427 master-0 kubenswrapper[26425]: I0217 15:46:14.908399 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:14.910839 master-0 kubenswrapper[26425]: I0217 15:46:14.910401 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 15:46:14.910839 master-0 kubenswrapper[26425]: I0217 15:46:14.910614 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 17 15:46:14.910839 master-0 kubenswrapper[26425]: I0217 15:46:14.910720 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 17 15:46:14.920237 master-0 kubenswrapper[26425]: I0217 15:46:14.919990 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4xb95"] Feb 17 15:46:14.975237 master-0 kubenswrapper[26425]: I0217 15:46:14.975177 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-ring-data-devices\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:14.975237 master-0 kubenswrapper[26425]: I0217 15:46:14.975238 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc7dx\" (UniqueName: \"kubernetes.io/projected/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-kube-api-access-zc7dx\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:14.975503 master-0 kubenswrapper[26425]: I0217 15:46:14.975382 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-swiftconf\") pod \"swift-ring-rebalance-4xb95\" (UID: 
\"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:14.975503 master-0 kubenswrapper[26425]: I0217 15:46:14.975439 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-dispersionconf\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:14.975599 master-0 kubenswrapper[26425]: I0217 15:46:14.975571 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-scripts\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:14.975642 master-0 kubenswrapper[26425]: I0217 15:46:14.975598 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-etc-swift\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:14.975642 master-0 kubenswrapper[26425]: I0217 15:46:14.975626 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-combined-ca-bundle\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:14.978235 master-0 kubenswrapper[26425]: I0217 15:46:14.978185 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 17 15:46:15.054558 master-0 kubenswrapper[26425]: I0217 15:46:15.052284 
26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 17 15:46:15.083598 master-0 kubenswrapper[26425]: I0217 15:46:15.082500 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-ring-data-devices\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.083598 master-0 kubenswrapper[26425]: I0217 15:46:15.081355 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-ring-data-devices\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.083598 master-0 kubenswrapper[26425]: I0217 15:46:15.082620 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc7dx\" (UniqueName: \"kubernetes.io/projected/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-kube-api-access-zc7dx\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.083972 master-0 kubenswrapper[26425]: I0217 15:46:15.083808 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-dispersionconf\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.084365 master-0 kubenswrapper[26425]: I0217 15:46:15.083840 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-swiftconf\") pod 
\"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.084560 master-0 kubenswrapper[26425]: I0217 15:46:15.084511 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-scripts\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.084560 master-0 kubenswrapper[26425]: I0217 15:46:15.084552 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-etc-swift\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.086003 master-0 kubenswrapper[26425]: I0217 15:46:15.084606 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-combined-ca-bundle\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.086003 master-0 kubenswrapper[26425]: I0217 15:46:15.085139 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-etc-swift\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.086003 master-0 kubenswrapper[26425]: I0217 15:46:15.085485 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-scripts\") pod \"swift-ring-rebalance-4xb95\" (UID: 
\"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.087411 master-0 kubenswrapper[26425]: I0217 15:46:15.087352 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-dispersionconf\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.089272 master-0 kubenswrapper[26425]: I0217 15:46:15.088701 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-swiftconf\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.090341 master-0 kubenswrapper[26425]: I0217 15:46:15.090299 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-combined-ca-bundle\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.112544 master-0 kubenswrapper[26425]: I0217 15:46:15.112450 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc7dx\" (UniqueName: \"kubernetes.io/projected/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-kube-api-access-zc7dx\") pod \"swift-ring-rebalance-4xb95\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.169157 master-0 kubenswrapper[26425]: I0217 15:46:15.169031 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-55jsp" event={"ID":"3519978a-5c7a-4466-9ad6-5750be0683e2","Type":"ContainerStarted","Data":"92fee4fc7997cb3566a6b5d9545ef497de8da0d9eef826d1638fc018daec1f70"} Feb 17 
15:46:15.169157 master-0 kubenswrapper[26425]: I0217 15:46:15.169112 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:15.172773 master-0 kubenswrapper[26425]: I0217 15:46:15.170865 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"753fb81c-8966-4b71-b31c-95deeed46228","Type":"ContainerStarted","Data":"bf9b5781b4884b3ed4d431f79f3200a2680d4d84872322a8094577827fd6595b"} Feb 17 15:46:15.172773 master-0 kubenswrapper[26425]: I0217 15:46:15.170939 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"753fb81c-8966-4b71-b31c-95deeed46228","Type":"ContainerStarted","Data":"9eec3aa1432fb508f01754e886e707e1a3a6c19a927d81441b8e7bd9ddd35cef"} Feb 17 15:46:15.196908 master-0 kubenswrapper[26425]: I0217 15:46:15.196828 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fd49994df-55jsp" podStartSLOduration=3.196808499 podStartE2EDuration="3.196808499s" podCreationTimestamp="2026-02-17 15:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:46:15.187769172 +0000 UTC m=+1837.079493010" watchObservedRunningTime="2026-02-17 15:46:15.196808499 +0000 UTC m=+1837.088532307" Feb 17 15:46:15.217669 master-0 kubenswrapper[26425]: I0217 15:46:15.217547 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.1587208430000002 podStartE2EDuration="4.217519125s" podCreationTimestamp="2026-02-17 15:46:11 +0000 UTC" firstStartedPulling="2026-02-17 15:46:13.026371358 +0000 UTC m=+1834.918095176" lastFinishedPulling="2026-02-17 15:46:14.08516964 +0000 UTC m=+1835.976893458" observedRunningTime="2026-02-17 15:46:15.214741099 +0000 UTC m=+1837.106464917" watchObservedRunningTime="2026-02-17 15:46:15.217519125 
+0000 UTC m=+1837.109242973" Feb 17 15:46:15.232013 master-0 kubenswrapper[26425]: I0217 15:46:15.231959 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:15.781159 master-0 kubenswrapper[26425]: I0217 15:46:15.781104 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-58883985-d49a-4529-bbce-ec8f3e112255\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0f5ab871-0d5e-4f70-8b0e-971c039dae4d\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:15.806169 master-0 kubenswrapper[26425]: I0217 15:46:15.805914 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:15.806169 master-0 kubenswrapper[26425]: E0217 15:46:15.806113 26425 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 15:46:15.806169 master-0 kubenswrapper[26425]: E0217 15:46:15.806130 26425 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 15:46:15.806169 master-0 kubenswrapper[26425]: E0217 15:46:15.806183 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift podName:0a9aa702-781f-4cf7-88c9-3ff414265810 nodeName:}" failed. No retries permitted until 2026-02-17 15:46:17.806162992 +0000 UTC m=+1839.697886820 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift") pod "swift-storage-0" (UID: "0a9aa702-781f-4cf7-88c9-3ff414265810") : configmap "swift-ring-files" not found Feb 17 15:46:16.200195 master-0 kubenswrapper[26425]: I0217 15:46:16.200124 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 17 15:46:16.412601 master-0 kubenswrapper[26425]: I0217 15:46:16.412430 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44357786-5699-463c-95e7-f84257834fa0" path="/var/lib/kubelet/pods/44357786-5699-463c-95e7-f84257834fa0/volumes" Feb 17 15:46:16.468128 master-0 kubenswrapper[26425]: I0217 15:46:16.468085 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4xb95"] Feb 17 15:46:17.208976 master-0 kubenswrapper[26425]: I0217 15:46:17.208852 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4xb95" event={"ID":"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6","Type":"ContainerStarted","Data":"48f1ebe9564e81b7e47014396d38fc695f3e07999c41c63672cf2d7cf848192f"} Feb 17 15:46:17.479605 master-0 kubenswrapper[26425]: I0217 15:46:17.479380 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 17 15:46:17.559490 master-0 kubenswrapper[26425]: I0217 15:46:17.559417 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 17 15:46:17.858940 master-0 kubenswrapper[26425]: I0217 15:46:17.858812 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:17.859139 master-0 kubenswrapper[26425]: 
E0217 15:46:17.858943 26425 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 15:46:17.859139 master-0 kubenswrapper[26425]: E0217 15:46:17.858967 26425 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 15:46:17.859139 master-0 kubenswrapper[26425]: E0217 15:46:17.859024 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift podName:0a9aa702-781f-4cf7-88c9-3ff414265810 nodeName:}" failed. No retries permitted until 2026-02-17 15:46:21.859002322 +0000 UTC m=+1843.750726140 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift") pod "swift-storage-0" (UID: "0a9aa702-781f-4cf7-88c9-3ff414265810") : configmap "swift-ring-files" not found Feb 17 15:46:18.594345 master-0 kubenswrapper[26425]: I0217 15:46:18.592669 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-737e-account-create-update-z4wjt"] Feb 17 15:46:18.596236 master-0 kubenswrapper[26425]: I0217 15:46:18.595749 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-737e-account-create-update-z4wjt" Feb 17 15:46:18.599888 master-0 kubenswrapper[26425]: I0217 15:46:18.597701 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 17 15:46:18.677164 master-0 kubenswrapper[26425]: I0217 15:46:18.650253 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-trh26"] Feb 17 15:46:18.677164 master-0 kubenswrapper[26425]: I0217 15:46:18.652293 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-trh26" Feb 17 15:46:18.677164 master-0 kubenswrapper[26425]: I0217 15:46:18.676423 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-737e-account-create-update-z4wjt"] Feb 17 15:46:18.685568 master-0 kubenswrapper[26425]: I0217 15:46:18.684504 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-trh26"] Feb 17 15:46:18.737405 master-0 kubenswrapper[26425]: I0217 15:46:18.737078 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-094f-account-create-update-9dg59"] Feb 17 15:46:18.739740 master-0 kubenswrapper[26425]: I0217 15:46:18.738938 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-094f-account-create-update-9dg59" Feb 17 15:46:18.741075 master-0 kubenswrapper[26425]: I0217 15:46:18.741053 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 17 15:46:18.748328 master-0 kubenswrapper[26425]: I0217 15:46:18.748267 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-094f-account-create-update-9dg59"] Feb 17 15:46:18.770283 master-0 kubenswrapper[26425]: I0217 15:46:18.770241 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-kjk8x"] Feb 17 15:46:18.771845 master-0 kubenswrapper[26425]: I0217 15:46:18.771824 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-kjk8x" Feb 17 15:46:18.786925 master-0 kubenswrapper[26425]: I0217 15:46:18.786568 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kjk8x"] Feb 17 15:46:18.805372 master-0 kubenswrapper[26425]: I0217 15:46:18.803481 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a905daa-8d29-41e8-a6ce-64b0f1b1b249-operator-scripts\") pod \"keystone-737e-account-create-update-z4wjt\" (UID: \"5a905daa-8d29-41e8-a6ce-64b0f1b1b249\") " pod="openstack/keystone-737e-account-create-update-z4wjt" Feb 17 15:46:18.805372 master-0 kubenswrapper[26425]: I0217 15:46:18.803666 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ljxp\" (UniqueName: \"kubernetes.io/projected/5c96a413-ef0a-47d1-86cd-e3f1caec1368-kube-api-access-9ljxp\") pod \"keystone-db-create-trh26\" (UID: \"5c96a413-ef0a-47d1-86cd-e3f1caec1368\") " pod="openstack/keystone-db-create-trh26" Feb 17 15:46:18.805372 master-0 kubenswrapper[26425]: I0217 15:46:18.803731 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd7jv\" (UniqueName: \"kubernetes.io/projected/5a905daa-8d29-41e8-a6ce-64b0f1b1b249-kube-api-access-kd7jv\") pod \"keystone-737e-account-create-update-z4wjt\" (UID: \"5a905daa-8d29-41e8-a6ce-64b0f1b1b249\") " pod="openstack/keystone-737e-account-create-update-z4wjt" Feb 17 15:46:18.805372 master-0 kubenswrapper[26425]: I0217 15:46:18.803875 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c96a413-ef0a-47d1-86cd-e3f1caec1368-operator-scripts\") pod \"keystone-db-create-trh26\" (UID: \"5c96a413-ef0a-47d1-86cd-e3f1caec1368\") " 
pod="openstack/keystone-db-create-trh26" Feb 17 15:46:18.906896 master-0 kubenswrapper[26425]: I0217 15:46:18.906825 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ljxp\" (UniqueName: \"kubernetes.io/projected/5c96a413-ef0a-47d1-86cd-e3f1caec1368-kube-api-access-9ljxp\") pod \"keystone-db-create-trh26\" (UID: \"5c96a413-ef0a-47d1-86cd-e3f1caec1368\") " pod="openstack/keystone-db-create-trh26" Feb 17 15:46:18.906896 master-0 kubenswrapper[26425]: I0217 15:46:18.906897 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd7jv\" (UniqueName: \"kubernetes.io/projected/5a905daa-8d29-41e8-a6ce-64b0f1b1b249-kube-api-access-kd7jv\") pod \"keystone-737e-account-create-update-z4wjt\" (UID: \"5a905daa-8d29-41e8-a6ce-64b0f1b1b249\") " pod="openstack/keystone-737e-account-create-update-z4wjt" Feb 17 15:46:18.907245 master-0 kubenswrapper[26425]: I0217 15:46:18.906938 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5zwm\" (UniqueName: \"kubernetes.io/projected/d08335f3-bb90-4f16-baa9-55622ccb587e-kube-api-access-v5zwm\") pod \"placement-db-create-kjk8x\" (UID: \"d08335f3-bb90-4f16-baa9-55622ccb587e\") " pod="openstack/placement-db-create-kjk8x" Feb 17 15:46:18.907757 master-0 kubenswrapper[26425]: I0217 15:46:18.907348 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c96a413-ef0a-47d1-86cd-e3f1caec1368-operator-scripts\") pod \"keystone-db-create-trh26\" (UID: \"5c96a413-ef0a-47d1-86cd-e3f1caec1368\") " pod="openstack/keystone-db-create-trh26" Feb 17 15:46:18.907757 master-0 kubenswrapper[26425]: I0217 15:46:18.907412 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7thxk\" (UniqueName: 
\"kubernetes.io/projected/386356e6-e395-4f3d-a52e-2228263bdc65-kube-api-access-7thxk\") pod \"placement-094f-account-create-update-9dg59\" (UID: \"386356e6-e395-4f3d-a52e-2228263bdc65\") " pod="openstack/placement-094f-account-create-update-9dg59" Feb 17 15:46:18.907757 master-0 kubenswrapper[26425]: I0217 15:46:18.907545 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/386356e6-e395-4f3d-a52e-2228263bdc65-operator-scripts\") pod \"placement-094f-account-create-update-9dg59\" (UID: \"386356e6-e395-4f3d-a52e-2228263bdc65\") " pod="openstack/placement-094f-account-create-update-9dg59" Feb 17 15:46:18.907757 master-0 kubenswrapper[26425]: I0217 15:46:18.907604 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a905daa-8d29-41e8-a6ce-64b0f1b1b249-operator-scripts\") pod \"keystone-737e-account-create-update-z4wjt\" (UID: \"5a905daa-8d29-41e8-a6ce-64b0f1b1b249\") " pod="openstack/keystone-737e-account-create-update-z4wjt" Feb 17 15:46:18.907757 master-0 kubenswrapper[26425]: I0217 15:46:18.907681 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d08335f3-bb90-4f16-baa9-55622ccb587e-operator-scripts\") pod \"placement-db-create-kjk8x\" (UID: \"d08335f3-bb90-4f16-baa9-55622ccb587e\") " pod="openstack/placement-db-create-kjk8x" Feb 17 15:46:18.908830 master-0 kubenswrapper[26425]: I0217 15:46:18.908809 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a905daa-8d29-41e8-a6ce-64b0f1b1b249-operator-scripts\") pod \"keystone-737e-account-create-update-z4wjt\" (UID: \"5a905daa-8d29-41e8-a6ce-64b0f1b1b249\") " pod="openstack/keystone-737e-account-create-update-z4wjt" Feb 17 
15:46:18.909555 master-0 kubenswrapper[26425]: I0217 15:46:18.909393 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c96a413-ef0a-47d1-86cd-e3f1caec1368-operator-scripts\") pod \"keystone-db-create-trh26\" (UID: \"5c96a413-ef0a-47d1-86cd-e3f1caec1368\") " pod="openstack/keystone-db-create-trh26" Feb 17 15:46:18.922919 master-0 kubenswrapper[26425]: I0217 15:46:18.922887 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd7jv\" (UniqueName: \"kubernetes.io/projected/5a905daa-8d29-41e8-a6ce-64b0f1b1b249-kube-api-access-kd7jv\") pod \"keystone-737e-account-create-update-z4wjt\" (UID: \"5a905daa-8d29-41e8-a6ce-64b0f1b1b249\") " pod="openstack/keystone-737e-account-create-update-z4wjt" Feb 17 15:46:18.923565 master-0 kubenswrapper[26425]: I0217 15:46:18.923524 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ljxp\" (UniqueName: \"kubernetes.io/projected/5c96a413-ef0a-47d1-86cd-e3f1caec1368-kube-api-access-9ljxp\") pod \"keystone-db-create-trh26\" (UID: \"5c96a413-ef0a-47d1-86cd-e3f1caec1368\") " pod="openstack/keystone-db-create-trh26" Feb 17 15:46:18.983497 master-0 kubenswrapper[26425]: I0217 15:46:18.983395 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-737e-account-create-update-z4wjt" Feb 17 15:46:19.011183 master-0 kubenswrapper[26425]: I0217 15:46:19.011132 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d08335f3-bb90-4f16-baa9-55622ccb587e-operator-scripts\") pod \"placement-db-create-kjk8x\" (UID: \"d08335f3-bb90-4f16-baa9-55622ccb587e\") " pod="openstack/placement-db-create-kjk8x" Feb 17 15:46:19.011439 master-0 kubenswrapper[26425]: I0217 15:46:19.011252 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5zwm\" (UniqueName: \"kubernetes.io/projected/d08335f3-bb90-4f16-baa9-55622ccb587e-kube-api-access-v5zwm\") pod \"placement-db-create-kjk8x\" (UID: \"d08335f3-bb90-4f16-baa9-55622ccb587e\") " pod="openstack/placement-db-create-kjk8x" Feb 17 15:46:19.011439 master-0 kubenswrapper[26425]: I0217 15:46:19.011315 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7thxk\" (UniqueName: \"kubernetes.io/projected/386356e6-e395-4f3d-a52e-2228263bdc65-kube-api-access-7thxk\") pod \"placement-094f-account-create-update-9dg59\" (UID: \"386356e6-e395-4f3d-a52e-2228263bdc65\") " pod="openstack/placement-094f-account-create-update-9dg59" Feb 17 15:46:19.011439 master-0 kubenswrapper[26425]: I0217 15:46:19.011431 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/386356e6-e395-4f3d-a52e-2228263bdc65-operator-scripts\") pod \"placement-094f-account-create-update-9dg59\" (UID: \"386356e6-e395-4f3d-a52e-2228263bdc65\") " pod="openstack/placement-094f-account-create-update-9dg59" Feb 17 15:46:19.013190 master-0 kubenswrapper[26425]: I0217 15:46:19.013142 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d08335f3-bb90-4f16-baa9-55622ccb587e-operator-scripts\") pod \"placement-db-create-kjk8x\" (UID: \"d08335f3-bb90-4f16-baa9-55622ccb587e\") " pod="openstack/placement-db-create-kjk8x" Feb 17 15:46:19.013920 master-0 kubenswrapper[26425]: I0217 15:46:19.013880 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/386356e6-e395-4f3d-a52e-2228263bdc65-operator-scripts\") pod \"placement-094f-account-create-update-9dg59\" (UID: \"386356e6-e395-4f3d-a52e-2228263bdc65\") " pod="openstack/placement-094f-account-create-update-9dg59" Feb 17 15:46:19.055363 master-0 kubenswrapper[26425]: I0217 15:46:19.055306 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5zwm\" (UniqueName: \"kubernetes.io/projected/d08335f3-bb90-4f16-baa9-55622ccb587e-kube-api-access-v5zwm\") pod \"placement-db-create-kjk8x\" (UID: \"d08335f3-bb90-4f16-baa9-55622ccb587e\") " pod="openstack/placement-db-create-kjk8x" Feb 17 15:46:19.058600 master-0 kubenswrapper[26425]: I0217 15:46:19.057596 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-trh26" Feb 17 15:46:19.071281 master-0 kubenswrapper[26425]: I0217 15:46:19.071229 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7thxk\" (UniqueName: \"kubernetes.io/projected/386356e6-e395-4f3d-a52e-2228263bdc65-kube-api-access-7thxk\") pod \"placement-094f-account-create-update-9dg59\" (UID: \"386356e6-e395-4f3d-a52e-2228263bdc65\") " pod="openstack/placement-094f-account-create-update-9dg59" Feb 17 15:46:19.113485 master-0 kubenswrapper[26425]: I0217 15:46:19.113365 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kjk8x" Feb 17 15:46:19.363165 master-0 kubenswrapper[26425]: I0217 15:46:19.363103 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-094f-account-create-update-9dg59" Feb 17 15:46:20.379840 master-0 kubenswrapper[26425]: I0217 15:46:20.379774 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-094f-account-create-update-9dg59"] Feb 17 15:46:20.517396 master-0 kubenswrapper[26425]: I0217 15:46:20.517338 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-737e-account-create-update-z4wjt"] Feb 17 15:46:20.545159 master-0 kubenswrapper[26425]: I0217 15:46:20.545078 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kjk8x"] Feb 17 15:46:20.700098 master-0 kubenswrapper[26425]: I0217 15:46:20.699897 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-trh26"] Feb 17 15:46:20.707655 master-0 kubenswrapper[26425]: W0217 15:46:20.707039 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c96a413_ef0a_47d1_86cd_e3f1caec1368.slice/crio-5c2f6e70a4c1569c08ad12b77cf732e71618895bf4975cd39163dc275f1eba9f WatchSource:0}: Error finding container 5c2f6e70a4c1569c08ad12b77cf732e71618895bf4975cd39163dc275f1eba9f: Status 404 returned error can't find the container with id 5c2f6e70a4c1569c08ad12b77cf732e71618895bf4975cd39163dc275f1eba9f Feb 17 15:46:20.829146 master-0 kubenswrapper[26425]: I0217 15:46:20.829110 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-qfrvt"] Feb 17 15:46:20.833334 master-0 kubenswrapper[26425]: I0217 15:46:20.833287 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-qfrvt" Feb 17 15:46:20.845157 master-0 kubenswrapper[26425]: I0217 15:46:20.845041 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qfrvt"] Feb 17 15:46:20.980995 master-0 kubenswrapper[26425]: I0217 15:46:20.980920 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdaf2a40-bdbe-47b5-9b1f-42582d1301a2-operator-scripts\") pod \"glance-db-create-qfrvt\" (UID: \"cdaf2a40-bdbe-47b5-9b1f-42582d1301a2\") " pod="openstack/glance-db-create-qfrvt" Feb 17 15:46:20.981193 master-0 kubenswrapper[26425]: I0217 15:46:20.981008 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-276z5\" (UniqueName: \"kubernetes.io/projected/cdaf2a40-bdbe-47b5-9b1f-42582d1301a2-kube-api-access-276z5\") pod \"glance-db-create-qfrvt\" (UID: \"cdaf2a40-bdbe-47b5-9b1f-42582d1301a2\") " pod="openstack/glance-db-create-qfrvt" Feb 17 15:46:21.042424 master-0 kubenswrapper[26425]: I0217 15:46:21.042286 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4c91-account-create-update-b2plp"] Feb 17 15:46:21.044143 master-0 kubenswrapper[26425]: I0217 15:46:21.044114 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4c91-account-create-update-b2plp" Feb 17 15:46:21.051519 master-0 kubenswrapper[26425]: I0217 15:46:21.051428 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 17 15:46:21.054061 master-0 kubenswrapper[26425]: I0217 15:46:21.054017 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4c91-account-create-update-b2plp"] Feb 17 15:46:21.085975 master-0 kubenswrapper[26425]: I0217 15:46:21.085917 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8c6fa13-5c49-4e83-9492-208c6cd1fb61-operator-scripts\") pod \"glance-4c91-account-create-update-b2plp\" (UID: \"a8c6fa13-5c49-4e83-9492-208c6cd1fb61\") " pod="openstack/glance-4c91-account-create-update-b2plp" Feb 17 15:46:21.086243 master-0 kubenswrapper[26425]: I0217 15:46:21.086047 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcn2q\" (UniqueName: \"kubernetes.io/projected/a8c6fa13-5c49-4e83-9492-208c6cd1fb61-kube-api-access-zcn2q\") pod \"glance-4c91-account-create-update-b2plp\" (UID: \"a8c6fa13-5c49-4e83-9492-208c6cd1fb61\") " pod="openstack/glance-4c91-account-create-update-b2plp" Feb 17 15:46:21.086243 master-0 kubenswrapper[26425]: I0217 15:46:21.086175 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdaf2a40-bdbe-47b5-9b1f-42582d1301a2-operator-scripts\") pod \"glance-db-create-qfrvt\" (UID: \"cdaf2a40-bdbe-47b5-9b1f-42582d1301a2\") " pod="openstack/glance-db-create-qfrvt" Feb 17 15:46:21.086243 master-0 kubenswrapper[26425]: I0217 15:46:21.086222 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-276z5\" (UniqueName: 
\"kubernetes.io/projected/cdaf2a40-bdbe-47b5-9b1f-42582d1301a2-kube-api-access-276z5\") pod \"glance-db-create-qfrvt\" (UID: \"cdaf2a40-bdbe-47b5-9b1f-42582d1301a2\") " pod="openstack/glance-db-create-qfrvt" Feb 17 15:46:21.087675 master-0 kubenswrapper[26425]: I0217 15:46:21.087640 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdaf2a40-bdbe-47b5-9b1f-42582d1301a2-operator-scripts\") pod \"glance-db-create-qfrvt\" (UID: \"cdaf2a40-bdbe-47b5-9b1f-42582d1301a2\") " pod="openstack/glance-db-create-qfrvt" Feb 17 15:46:21.103726 master-0 kubenswrapper[26425]: I0217 15:46:21.103658 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-276z5\" (UniqueName: \"kubernetes.io/projected/cdaf2a40-bdbe-47b5-9b1f-42582d1301a2-kube-api-access-276z5\") pod \"glance-db-create-qfrvt\" (UID: \"cdaf2a40-bdbe-47b5-9b1f-42582d1301a2\") " pod="openstack/glance-db-create-qfrvt" Feb 17 15:46:21.188084 master-0 kubenswrapper[26425]: I0217 15:46:21.187911 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcn2q\" (UniqueName: \"kubernetes.io/projected/a8c6fa13-5c49-4e83-9492-208c6cd1fb61-kube-api-access-zcn2q\") pod \"glance-4c91-account-create-update-b2plp\" (UID: \"a8c6fa13-5c49-4e83-9492-208c6cd1fb61\") " pod="openstack/glance-4c91-account-create-update-b2plp" Feb 17 15:46:21.188355 master-0 kubenswrapper[26425]: I0217 15:46:21.188123 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:46:21.188355 master-0 kubenswrapper[26425]: E0217 15:46:21.188225 26425 projected.go:288] Couldn't get configMap 
openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:46:21.188355 master-0 kubenswrapper[26425]: E0217 15:46:21.188285 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:46:21.188355 master-0 kubenswrapper[26425]: E0217 15:46:21.188338 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:48:23.188319734 +0000 UTC m=+1965.080043552 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:46:21.188596 master-0 kubenswrapper[26425]: I0217 15:46:21.188379 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8c6fa13-5c49-4e83-9492-208c6cd1fb61-operator-scripts\") pod \"glance-4c91-account-create-update-b2plp\" (UID: \"a8c6fa13-5c49-4e83-9492-208c6cd1fb61\") " pod="openstack/glance-4c91-account-create-update-b2plp" Feb 17 15:46:21.189332 master-0 kubenswrapper[26425]: I0217 15:46:21.189289 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8c6fa13-5c49-4e83-9492-208c6cd1fb61-operator-scripts\") pod \"glance-4c91-account-create-update-b2plp\" (UID: \"a8c6fa13-5c49-4e83-9492-208c6cd1fb61\") " pod="openstack/glance-4c91-account-create-update-b2plp" Feb 17 15:46:21.215134 master-0 kubenswrapper[26425]: I0217 
15:46:21.215064 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcn2q\" (UniqueName: \"kubernetes.io/projected/a8c6fa13-5c49-4e83-9492-208c6cd1fb61-kube-api-access-zcn2q\") pod \"glance-4c91-account-create-update-b2plp\" (UID: \"a8c6fa13-5c49-4e83-9492-208c6cd1fb61\") " pod="openstack/glance-4c91-account-create-update-b2plp" Feb 17 15:46:21.266840 master-0 kubenswrapper[26425]: I0217 15:46:21.266719 26425 generic.go:334] "Generic (PLEG): container finished" podID="5a905daa-8d29-41e8-a6ce-64b0f1b1b249" containerID="1360b35c0b6b274bd2b7765c4a556c074a5e426f8981694a24d881b422d32819" exitCode=0 Feb 17 15:46:21.267004 master-0 kubenswrapper[26425]: I0217 15:46:21.266868 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-737e-account-create-update-z4wjt" event={"ID":"5a905daa-8d29-41e8-a6ce-64b0f1b1b249","Type":"ContainerDied","Data":"1360b35c0b6b274bd2b7765c4a556c074a5e426f8981694a24d881b422d32819"} Feb 17 15:46:21.267004 master-0 kubenswrapper[26425]: I0217 15:46:21.266903 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-737e-account-create-update-z4wjt" event={"ID":"5a905daa-8d29-41e8-a6ce-64b0f1b1b249","Type":"ContainerStarted","Data":"913f35e2e096425d825959d48b0833f890cd18bbde786ba07310643c09529759"} Feb 17 15:46:21.269353 master-0 kubenswrapper[26425]: I0217 15:46:21.269287 26425 generic.go:334] "Generic (PLEG): container finished" podID="5c96a413-ef0a-47d1-86cd-e3f1caec1368" containerID="8ed57e22fab684de8deffd37a6e9489178c8f904ff19835d9ade5f01899276e9" exitCode=0 Feb 17 15:46:21.269353 master-0 kubenswrapper[26425]: I0217 15:46:21.269314 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-trh26" event={"ID":"5c96a413-ef0a-47d1-86cd-e3f1caec1368","Type":"ContainerDied","Data":"8ed57e22fab684de8deffd37a6e9489178c8f904ff19835d9ade5f01899276e9"} Feb 17 15:46:21.269353 master-0 kubenswrapper[26425]: I0217 15:46:21.269355 26425 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-trh26" event={"ID":"5c96a413-ef0a-47d1-86cd-e3f1caec1368","Type":"ContainerStarted","Data":"5c2f6e70a4c1569c08ad12b77cf732e71618895bf4975cd39163dc275f1eba9f"} Feb 17 15:46:21.271029 master-0 kubenswrapper[26425]: I0217 15:46:21.270960 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qfrvt" Feb 17 15:46:21.273097 master-0 kubenswrapper[26425]: I0217 15:46:21.273032 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4xb95" event={"ID":"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6","Type":"ContainerStarted","Data":"293b84de33728b619c3cab27a74ed1be9399bdbb19617ae86da678bc438d5b97"} Feb 17 15:46:21.277934 master-0 kubenswrapper[26425]: I0217 15:46:21.277866 26425 generic.go:334] "Generic (PLEG): container finished" podID="386356e6-e395-4f3d-a52e-2228263bdc65" containerID="77ea48c09a5380a7b33b4112f09f82efc83f1381286b3d5bdc0551461d8c76a4" exitCode=0 Feb 17 15:46:21.278110 master-0 kubenswrapper[26425]: I0217 15:46:21.277952 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-094f-account-create-update-9dg59" event={"ID":"386356e6-e395-4f3d-a52e-2228263bdc65","Type":"ContainerDied","Data":"77ea48c09a5380a7b33b4112f09f82efc83f1381286b3d5bdc0551461d8c76a4"} Feb 17 15:46:21.278110 master-0 kubenswrapper[26425]: I0217 15:46:21.277997 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-094f-account-create-update-9dg59" event={"ID":"386356e6-e395-4f3d-a52e-2228263bdc65","Type":"ContainerStarted","Data":"98b448cc63b6b44c5af6cfff3f895b809e5eced5fced43f40cb57366622fa82b"} Feb 17 15:46:21.279725 master-0 kubenswrapper[26425]: I0217 15:46:21.279648 26425 generic.go:334] "Generic (PLEG): container finished" podID="d08335f3-bb90-4f16-baa9-55622ccb587e" containerID="6d64391aa8ed49e80aecefcd3ccfa52ae7fa012ae77bda15e9cd74ebbc44fe74" exitCode=0 Feb 17 
15:46:21.279725 master-0 kubenswrapper[26425]: I0217 15:46:21.279679 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kjk8x" event={"ID":"d08335f3-bb90-4f16-baa9-55622ccb587e","Type":"ContainerDied","Data":"6d64391aa8ed49e80aecefcd3ccfa52ae7fa012ae77bda15e9cd74ebbc44fe74"} Feb 17 15:46:21.279725 master-0 kubenswrapper[26425]: I0217 15:46:21.279699 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kjk8x" event={"ID":"d08335f3-bb90-4f16-baa9-55622ccb587e","Type":"ContainerStarted","Data":"4330558dfb776de3f8ac317df492b28a43d9920269c996bd3f9a2dbf0540fbc5"} Feb 17 15:46:21.282444 master-0 kubenswrapper[26425]: I0217 15:46:21.282378 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4c91-account-create-update-b2plp" Feb 17 15:46:21.372378 master-0 kubenswrapper[26425]: I0217 15:46:21.372254 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-4xb95" podStartSLOduration=3.87307462 podStartE2EDuration="7.372235285s" podCreationTimestamp="2026-02-17 15:46:14 +0000 UTC" firstStartedPulling="2026-02-17 15:46:16.481956899 +0000 UTC m=+1838.373680737" lastFinishedPulling="2026-02-17 15:46:19.981117584 +0000 UTC m=+1841.872841402" observedRunningTime="2026-02-17 15:46:21.358073776 +0000 UTC m=+1843.249797614" watchObservedRunningTime="2026-02-17 15:46:21.372235285 +0000 UTC m=+1843.263959093" Feb 17 15:46:21.761510 master-0 kubenswrapper[26425]: I0217 15:46:21.761427 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qfrvt"] Feb 17 15:46:21.906192 master-0 kubenswrapper[26425]: I0217 15:46:21.906128 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4c91-account-create-update-b2plp"] Feb 17 15:46:21.912925 master-0 kubenswrapper[26425]: I0217 15:46:21.912853 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:21.913300 master-0 kubenswrapper[26425]: E0217 15:46:21.913251 26425 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 15:46:21.913353 master-0 kubenswrapper[26425]: E0217 15:46:21.913304 26425 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 15:46:21.913393 master-0 kubenswrapper[26425]: E0217 15:46:21.913379 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift podName:0a9aa702-781f-4cf7-88c9-3ff414265810 nodeName:}" failed. No retries permitted until 2026-02-17 15:46:29.913353742 +0000 UTC m=+1851.805077550 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift") pod "swift-storage-0" (UID: "0a9aa702-781f-4cf7-88c9-3ff414265810") : configmap "swift-ring-files" not found Feb 17 15:46:21.922222 master-0 kubenswrapper[26425]: W0217 15:46:21.922150 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8c6fa13_5c49_4e83_9492_208c6cd1fb61.slice/crio-d4abd598c4530c6d8c50dbb1f12f7d2f61fa82bbef48a8bf9680b49ab774da6d WatchSource:0}: Error finding container d4abd598c4530c6d8c50dbb1f12f7d2f61fa82bbef48a8bf9680b49ab774da6d: Status 404 returned error can't find the container with id d4abd598c4530c6d8c50dbb1f12f7d2f61fa82bbef48a8bf9680b49ab774da6d Feb 17 15:46:22.293542 master-0 kubenswrapper[26425]: I0217 15:46:22.293448 26425 generic.go:334] "Generic (PLEG): container finished" podID="cdaf2a40-bdbe-47b5-9b1f-42582d1301a2" containerID="6cab4fdac51574937282da3ae98732e403df6a62c674c94f128a8f3581681ee1" exitCode=0 Feb 17 15:46:22.293542 master-0 kubenswrapper[26425]: I0217 15:46:22.293531 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qfrvt" event={"ID":"cdaf2a40-bdbe-47b5-9b1f-42582d1301a2","Type":"ContainerDied","Data":"6cab4fdac51574937282da3ae98732e403df6a62c674c94f128a8f3581681ee1"} Feb 17 15:46:22.293542 master-0 kubenswrapper[26425]: I0217 15:46:22.293557 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qfrvt" event={"ID":"cdaf2a40-bdbe-47b5-9b1f-42582d1301a2","Type":"ContainerStarted","Data":"94c0d6e2f3bcf571f5a0dcabc0204183affa5ae5b2b30a13ff101b72299f0fdd"} Feb 17 15:46:22.296309 master-0 kubenswrapper[26425]: I0217 15:46:22.296239 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4c91-account-create-update-b2plp" 
event={"ID":"a8c6fa13-5c49-4e83-9492-208c6cd1fb61","Type":"ContainerStarted","Data":"7cb2ed632a92a11678708ebbbb548ed18cc865a4d4414aee575f015b4ac3728e"} Feb 17 15:46:22.296309 master-0 kubenswrapper[26425]: I0217 15:46:22.296278 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4c91-account-create-update-b2plp" event={"ID":"a8c6fa13-5c49-4e83-9492-208c6cd1fb61","Type":"ContainerStarted","Data":"d4abd598c4530c6d8c50dbb1f12f7d2f61fa82bbef48a8bf9680b49ab774da6d"} Feb 17 15:46:22.347938 master-0 kubenswrapper[26425]: I0217 15:46:22.347828 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4c91-account-create-update-b2plp" podStartSLOduration=1.34780743 podStartE2EDuration="1.34780743s" podCreationTimestamp="2026-02-17 15:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:46:22.334231075 +0000 UTC m=+1844.225954913" watchObservedRunningTime="2026-02-17 15:46:22.34780743 +0000 UTC m=+1844.239531258" Feb 17 15:46:22.590767 master-0 kubenswrapper[26425]: I0217 15:46:22.590683 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:46:22.719498 master-0 kubenswrapper[26425]: I0217 15:46:22.719360 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-hdh27"] Feb 17 15:46:22.719715 master-0 kubenswrapper[26425]: I0217 15:46:22.719636 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" podUID="a2122296-6151-4ec0-b71c-fd6ad516ffb4" containerName="dnsmasq-dns" containerID="cri-o://1864e8a47379b369d8a66077175769f37b5a488774750f04959a1eeab4ee3e75" gracePeriod=10 Feb 17 15:46:22.848794 master-0 kubenswrapper[26425]: I0217 15:46:22.848725 26425 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" podUID="a2122296-6151-4ec0-b71c-fd6ad516ffb4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.176:5353: connect: connection refused" Feb 17 15:46:22.969977 master-0 kubenswrapper[26425]: I0217 15:46:22.969854 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kjk8x" Feb 17 15:46:23.144637 master-0 kubenswrapper[26425]: I0217 15:46:23.144581 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5zwm\" (UniqueName: \"kubernetes.io/projected/d08335f3-bb90-4f16-baa9-55622ccb587e-kube-api-access-v5zwm\") pod \"d08335f3-bb90-4f16-baa9-55622ccb587e\" (UID: \"d08335f3-bb90-4f16-baa9-55622ccb587e\") " Feb 17 15:46:23.144862 master-0 kubenswrapper[26425]: I0217 15:46:23.144655 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d08335f3-bb90-4f16-baa9-55622ccb587e-operator-scripts\") pod \"d08335f3-bb90-4f16-baa9-55622ccb587e\" (UID: \"d08335f3-bb90-4f16-baa9-55622ccb587e\") " Feb 17 15:46:23.147348 master-0 kubenswrapper[26425]: I0217 15:46:23.145567 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d08335f3-bb90-4f16-baa9-55622ccb587e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d08335f3-bb90-4f16-baa9-55622ccb587e" (UID: "d08335f3-bb90-4f16-baa9-55622ccb587e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:23.154094 master-0 kubenswrapper[26425]: I0217 15:46:23.153688 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d08335f3-bb90-4f16-baa9-55622ccb587e-kube-api-access-v5zwm" (OuterVolumeSpecName: "kube-api-access-v5zwm") pod "d08335f3-bb90-4f16-baa9-55622ccb587e" (UID: "d08335f3-bb90-4f16-baa9-55622ccb587e"). 
InnerVolumeSpecName "kube-api-access-v5zwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:46:23.247906 master-0 kubenswrapper[26425]: I0217 15:46:23.246950 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5zwm\" (UniqueName: \"kubernetes.io/projected/d08335f3-bb90-4f16-baa9-55622ccb587e-kube-api-access-v5zwm\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:23.247906 master-0 kubenswrapper[26425]: I0217 15:46:23.246998 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d08335f3-bb90-4f16-baa9-55622ccb587e-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:23.276739 master-0 kubenswrapper[26425]: I0217 15:46:23.276664 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-trh26" Feb 17 15:46:23.283386 master-0 kubenswrapper[26425]: I0217 15:46:23.283324 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-094f-account-create-update-9dg59" Feb 17 15:46:23.291701 master-0 kubenswrapper[26425]: I0217 15:46:23.291646 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-737e-account-create-update-z4wjt" Feb 17 15:46:23.342495 master-0 kubenswrapper[26425]: I0217 15:46:23.330749 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-trh26" event={"ID":"5c96a413-ef0a-47d1-86cd-e3f1caec1368","Type":"ContainerDied","Data":"5c2f6e70a4c1569c08ad12b77cf732e71618895bf4975cd39163dc275f1eba9f"} Feb 17 15:46:23.342495 master-0 kubenswrapper[26425]: I0217 15:46:23.330798 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c2f6e70a4c1569c08ad12b77cf732e71618895bf4975cd39163dc275f1eba9f" Feb 17 15:46:23.342495 master-0 kubenswrapper[26425]: I0217 15:46:23.330849 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-trh26" Feb 17 15:46:23.342495 master-0 kubenswrapper[26425]: I0217 15:46:23.333416 26425 generic.go:334] "Generic (PLEG): container finished" podID="a2122296-6151-4ec0-b71c-fd6ad516ffb4" containerID="1864e8a47379b369d8a66077175769f37b5a488774750f04959a1eeab4ee3e75" exitCode=0 Feb 17 15:46:23.342495 master-0 kubenswrapper[26425]: I0217 15:46:23.333492 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" event={"ID":"a2122296-6151-4ec0-b71c-fd6ad516ffb4","Type":"ContainerDied","Data":"1864e8a47379b369d8a66077175769f37b5a488774750f04959a1eeab4ee3e75"} Feb 17 15:46:23.342495 master-0 kubenswrapper[26425]: I0217 15:46:23.333514 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" event={"ID":"a2122296-6151-4ec0-b71c-fd6ad516ffb4","Type":"ContainerDied","Data":"790787ab3234f90798b9baceb04f169d2319af35ae6d632582202e48dc4b42d1"} Feb 17 15:46:23.342495 master-0 kubenswrapper[26425]: I0217 15:46:23.333526 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="790787ab3234f90798b9baceb04f169d2319af35ae6d632582202e48dc4b42d1" Feb 17 
15:46:23.342495 master-0 kubenswrapper[26425]: I0217 15:46:23.339776 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-094f-account-create-update-9dg59" event={"ID":"386356e6-e395-4f3d-a52e-2228263bdc65","Type":"ContainerDied","Data":"98b448cc63b6b44c5af6cfff3f895b809e5eced5fced43f40cb57366622fa82b"} Feb 17 15:46:23.342495 master-0 kubenswrapper[26425]: I0217 15:46:23.339827 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98b448cc63b6b44c5af6cfff3f895b809e5eced5fced43f40cb57366622fa82b" Feb 17 15:46:23.342495 master-0 kubenswrapper[26425]: I0217 15:46:23.339902 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-094f-account-create-update-9dg59" Feb 17 15:46:23.361381 master-0 kubenswrapper[26425]: I0217 15:46:23.343829 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kjk8x" event={"ID":"d08335f3-bb90-4f16-baa9-55622ccb587e","Type":"ContainerDied","Data":"4330558dfb776de3f8ac317df492b28a43d9920269c996bd3f9a2dbf0540fbc5"} Feb 17 15:46:23.361381 master-0 kubenswrapper[26425]: I0217 15:46:23.343891 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4330558dfb776de3f8ac317df492b28a43d9920269c996bd3f9a2dbf0540fbc5" Feb 17 15:46:23.361381 master-0 kubenswrapper[26425]: I0217 15:46:23.343962 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kjk8x" Feb 17 15:46:23.361381 master-0 kubenswrapper[26425]: I0217 15:46:23.348670 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-737e-account-create-update-z4wjt" Feb 17 15:46:23.361381 master-0 kubenswrapper[26425]: I0217 15:46:23.346989 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-737e-account-create-update-z4wjt" event={"ID":"5a905daa-8d29-41e8-a6ce-64b0f1b1b249","Type":"ContainerDied","Data":"913f35e2e096425d825959d48b0833f890cd18bbde786ba07310643c09529759"} Feb 17 15:46:23.361381 master-0 kubenswrapper[26425]: I0217 15:46:23.349204 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="913f35e2e096425d825959d48b0833f890cd18bbde786ba07310643c09529759" Feb 17 15:46:23.361381 master-0 kubenswrapper[26425]: I0217 15:46:23.350077 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c96a413-ef0a-47d1-86cd-e3f1caec1368-operator-scripts\") pod \"5c96a413-ef0a-47d1-86cd-e3f1caec1368\" (UID: \"5c96a413-ef0a-47d1-86cd-e3f1caec1368\") " Feb 17 15:46:23.361381 master-0 kubenswrapper[26425]: I0217 15:46:23.350436 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ljxp\" (UniqueName: \"kubernetes.io/projected/5c96a413-ef0a-47d1-86cd-e3f1caec1368-kube-api-access-9ljxp\") pod \"5c96a413-ef0a-47d1-86cd-e3f1caec1368\" (UID: \"5c96a413-ef0a-47d1-86cd-e3f1caec1368\") " Feb 17 15:46:23.361381 master-0 kubenswrapper[26425]: I0217 15:46:23.357220 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c96a413-ef0a-47d1-86cd-e3f1caec1368-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c96a413-ef0a-47d1-86cd-e3f1caec1368" (UID: "5c96a413-ef0a-47d1-86cd-e3f1caec1368"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:23.367559 master-0 kubenswrapper[26425]: I0217 15:46:23.365717 26425 generic.go:334] "Generic (PLEG): container finished" podID="a8c6fa13-5c49-4e83-9492-208c6cd1fb61" containerID="7cb2ed632a92a11678708ebbbb548ed18cc865a4d4414aee575f015b4ac3728e" exitCode=0 Feb 17 15:46:23.367559 master-0 kubenswrapper[26425]: I0217 15:46:23.366005 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4c91-account-create-update-b2plp" event={"ID":"a8c6fa13-5c49-4e83-9492-208c6cd1fb61","Type":"ContainerDied","Data":"7cb2ed632a92a11678708ebbbb548ed18cc865a4d4414aee575f015b4ac3728e"} Feb 17 15:46:23.391893 master-0 kubenswrapper[26425]: I0217 15:46:23.391824 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c96a413-ef0a-47d1-86cd-e3f1caec1368-kube-api-access-9ljxp" (OuterVolumeSpecName: "kube-api-access-9ljxp") pod "5c96a413-ef0a-47d1-86cd-e3f1caec1368" (UID: "5c96a413-ef0a-47d1-86cd-e3f1caec1368"). InnerVolumeSpecName "kube-api-access-9ljxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:46:23.449064 master-0 kubenswrapper[26425]: I0217 15:46:23.449023 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:46:23.453204 master-0 kubenswrapper[26425]: I0217 15:46:23.452663 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a905daa-8d29-41e8-a6ce-64b0f1b1b249-operator-scripts\") pod \"5a905daa-8d29-41e8-a6ce-64b0f1b1b249\" (UID: \"5a905daa-8d29-41e8-a6ce-64b0f1b1b249\") " Feb 17 15:46:23.453204 master-0 kubenswrapper[26425]: I0217 15:46:23.452774 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd7jv\" (UniqueName: \"kubernetes.io/projected/5a905daa-8d29-41e8-a6ce-64b0f1b1b249-kube-api-access-kd7jv\") pod \"5a905daa-8d29-41e8-a6ce-64b0f1b1b249\" (UID: \"5a905daa-8d29-41e8-a6ce-64b0f1b1b249\") " Feb 17 15:46:23.453204 master-0 kubenswrapper[26425]: I0217 15:46:23.452853 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/386356e6-e395-4f3d-a52e-2228263bdc65-operator-scripts\") pod \"386356e6-e395-4f3d-a52e-2228263bdc65\" (UID: \"386356e6-e395-4f3d-a52e-2228263bdc65\") " Feb 17 15:46:23.453334 master-0 kubenswrapper[26425]: I0217 15:46:23.453274 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7thxk\" (UniqueName: \"kubernetes.io/projected/386356e6-e395-4f3d-a52e-2228263bdc65-kube-api-access-7thxk\") pod \"386356e6-e395-4f3d-a52e-2228263bdc65\" (UID: \"386356e6-e395-4f3d-a52e-2228263bdc65\") " Feb 17 15:46:23.453551 master-0 kubenswrapper[26425]: I0217 15:46:23.453502 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a905daa-8d29-41e8-a6ce-64b0f1b1b249-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a905daa-8d29-41e8-a6ce-64b0f1b1b249" (UID: "5a905daa-8d29-41e8-a6ce-64b0f1b1b249"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:23.453653 master-0 kubenswrapper[26425]: I0217 15:46:23.453618 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386356e6-e395-4f3d-a52e-2228263bdc65-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "386356e6-e395-4f3d-a52e-2228263bdc65" (UID: "386356e6-e395-4f3d-a52e-2228263bdc65"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:23.454358 master-0 kubenswrapper[26425]: I0217 15:46:23.454328 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/386356e6-e395-4f3d-a52e-2228263bdc65-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:23.454413 master-0 kubenswrapper[26425]: I0217 15:46:23.454358 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ljxp\" (UniqueName: \"kubernetes.io/projected/5c96a413-ef0a-47d1-86cd-e3f1caec1368-kube-api-access-9ljxp\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:23.454413 master-0 kubenswrapper[26425]: I0217 15:46:23.454384 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c96a413-ef0a-47d1-86cd-e3f1caec1368-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:23.454413 master-0 kubenswrapper[26425]: I0217 15:46:23.454402 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a905daa-8d29-41e8-a6ce-64b0f1b1b249-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:23.455757 master-0 kubenswrapper[26425]: I0217 15:46:23.455607 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a905daa-8d29-41e8-a6ce-64b0f1b1b249-kube-api-access-kd7jv" (OuterVolumeSpecName: "kube-api-access-kd7jv") pod 
"5a905daa-8d29-41e8-a6ce-64b0f1b1b249" (UID: "5a905daa-8d29-41e8-a6ce-64b0f1b1b249"). InnerVolumeSpecName "kube-api-access-kd7jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:46:23.456444 master-0 kubenswrapper[26425]: I0217 15:46:23.456406 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/386356e6-e395-4f3d-a52e-2228263bdc65-kube-api-access-7thxk" (OuterVolumeSpecName: "kube-api-access-7thxk") pod "386356e6-e395-4f3d-a52e-2228263bdc65" (UID: "386356e6-e395-4f3d-a52e-2228263bdc65"). InnerVolumeSpecName "kube-api-access-7thxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:46:23.555323 master-0 kubenswrapper[26425]: I0217 15:46:23.555171 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2122296-6151-4ec0-b71c-fd6ad516ffb4-dns-svc\") pod \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\" (UID: \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\") " Feb 17 15:46:23.555715 master-0 kubenswrapper[26425]: I0217 15:46:23.555685 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxpkv\" (UniqueName: \"kubernetes.io/projected/a2122296-6151-4ec0-b71c-fd6ad516ffb4-kube-api-access-kxpkv\") pod \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\" (UID: \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\") " Feb 17 15:46:23.555779 master-0 kubenswrapper[26425]: I0217 15:46:23.555748 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2122296-6151-4ec0-b71c-fd6ad516ffb4-config\") pod \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\" (UID: \"a2122296-6151-4ec0-b71c-fd6ad516ffb4\") " Feb 17 15:46:23.556960 master-0 kubenswrapper[26425]: I0217 15:46:23.556875 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd7jv\" (UniqueName: 
\"kubernetes.io/projected/5a905daa-8d29-41e8-a6ce-64b0f1b1b249-kube-api-access-kd7jv\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:23.556960 master-0 kubenswrapper[26425]: I0217 15:46:23.556952 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7thxk\" (UniqueName: \"kubernetes.io/projected/386356e6-e395-4f3d-a52e-2228263bdc65-kube-api-access-7thxk\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:23.564226 master-0 kubenswrapper[26425]: I0217 15:46:23.562822 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2122296-6151-4ec0-b71c-fd6ad516ffb4-kube-api-access-kxpkv" (OuterVolumeSpecName: "kube-api-access-kxpkv") pod "a2122296-6151-4ec0-b71c-fd6ad516ffb4" (UID: "a2122296-6151-4ec0-b71c-fd6ad516ffb4"). InnerVolumeSpecName "kube-api-access-kxpkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:46:23.602096 master-0 kubenswrapper[26425]: I0217 15:46:23.601982 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2122296-6151-4ec0-b71c-fd6ad516ffb4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a2122296-6151-4ec0-b71c-fd6ad516ffb4" (UID: "a2122296-6151-4ec0-b71c-fd6ad516ffb4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:23.630221 master-0 kubenswrapper[26425]: I0217 15:46:23.628537 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2122296-6151-4ec0-b71c-fd6ad516ffb4-config" (OuterVolumeSpecName: "config") pod "a2122296-6151-4ec0-b71c-fd6ad516ffb4" (UID: "a2122296-6151-4ec0-b71c-fd6ad516ffb4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:23.660251 master-0 kubenswrapper[26425]: I0217 15:46:23.660153 26425 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2122296-6151-4ec0-b71c-fd6ad516ffb4-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:23.660251 master-0 kubenswrapper[26425]: I0217 15:46:23.660210 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxpkv\" (UniqueName: \"kubernetes.io/projected/a2122296-6151-4ec0-b71c-fd6ad516ffb4-kube-api-access-kxpkv\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:23.660251 master-0 kubenswrapper[26425]: I0217 15:46:23.660224 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2122296-6151-4ec0-b71c-fd6ad516ffb4-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:23.806352 master-0 kubenswrapper[26425]: I0217 15:46:23.804427 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-qfrvt" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.893325 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sqtzz"] Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: E0217 15:46:23.893841 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a905daa-8d29-41e8-a6ce-64b0f1b1b249" containerName="mariadb-account-create-update" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.893862 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a905daa-8d29-41e8-a6ce-64b0f1b1b249" containerName="mariadb-account-create-update" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: E0217 15:46:23.893887 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c96a413-ef0a-47d1-86cd-e3f1caec1368" containerName="mariadb-database-create" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.893896 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c96a413-ef0a-47d1-86cd-e3f1caec1368" containerName="mariadb-database-create" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: E0217 15:46:23.893923 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2122296-6151-4ec0-b71c-fd6ad516ffb4" containerName="init" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.893932 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2122296-6151-4ec0-b71c-fd6ad516ffb4" containerName="init" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: E0217 15:46:23.893946 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2122296-6151-4ec0-b71c-fd6ad516ffb4" containerName="dnsmasq-dns" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.893956 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2122296-6151-4ec0-b71c-fd6ad516ffb4" containerName="dnsmasq-dns" Feb 17 15:46:23.896275 master-0 
kubenswrapper[26425]: E0217 15:46:23.893989 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdaf2a40-bdbe-47b5-9b1f-42582d1301a2" containerName="mariadb-database-create" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.893997 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdaf2a40-bdbe-47b5-9b1f-42582d1301a2" containerName="mariadb-database-create" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: E0217 15:46:23.894024 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d08335f3-bb90-4f16-baa9-55622ccb587e" containerName="mariadb-database-create" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.894032 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="d08335f3-bb90-4f16-baa9-55622ccb587e" containerName="mariadb-database-create" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: E0217 15:46:23.894048 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386356e6-e395-4f3d-a52e-2228263bdc65" containerName="mariadb-account-create-update" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.894056 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="386356e6-e395-4f3d-a52e-2228263bdc65" containerName="mariadb-account-create-update" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.894358 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c96a413-ef0a-47d1-86cd-e3f1caec1368" containerName="mariadb-database-create" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.894398 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a905daa-8d29-41e8-a6ce-64b0f1b1b249" containerName="mariadb-account-create-update" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.894423 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="d08335f3-bb90-4f16-baa9-55622ccb587e" containerName="mariadb-database-create" Feb 17 
15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.894443 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2122296-6151-4ec0-b71c-fd6ad516ffb4" containerName="dnsmasq-dns" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.894496 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="386356e6-e395-4f3d-a52e-2228263bdc65" containerName="mariadb-account-create-update" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.894521 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdaf2a40-bdbe-47b5-9b1f-42582d1301a2" containerName="mariadb-database-create" Feb 17 15:46:23.896275 master-0 kubenswrapper[26425]: I0217 15:46:23.895492 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sqtzz" Feb 17 15:46:23.897911 master-0 kubenswrapper[26425]: I0217 15:46:23.897533 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 15:46:23.943678 master-0 kubenswrapper[26425]: I0217 15:46:23.943599 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sqtzz"] Feb 17 15:46:23.972472 master-0 kubenswrapper[26425]: I0217 15:46:23.972407 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-276z5\" (UniqueName: \"kubernetes.io/projected/cdaf2a40-bdbe-47b5-9b1f-42582d1301a2-kube-api-access-276z5\") pod \"cdaf2a40-bdbe-47b5-9b1f-42582d1301a2\" (UID: \"cdaf2a40-bdbe-47b5-9b1f-42582d1301a2\") " Feb 17 15:46:23.972709 master-0 kubenswrapper[26425]: I0217 15:46:23.972535 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdaf2a40-bdbe-47b5-9b1f-42582d1301a2-operator-scripts\") pod \"cdaf2a40-bdbe-47b5-9b1f-42582d1301a2\" (UID: \"cdaf2a40-bdbe-47b5-9b1f-42582d1301a2\") " Feb 17 
15:46:23.973201 master-0 kubenswrapper[26425]: I0217 15:46:23.973077 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdaf2a40-bdbe-47b5-9b1f-42582d1301a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cdaf2a40-bdbe-47b5-9b1f-42582d1301a2" (UID: "cdaf2a40-bdbe-47b5-9b1f-42582d1301a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:23.973295 master-0 kubenswrapper[26425]: I0217 15:46:23.973224 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60423849-80c7-459d-a2b2-68f9f449d899-operator-scripts\") pod \"root-account-create-update-sqtzz\" (UID: \"60423849-80c7-459d-a2b2-68f9f449d899\") " pod="openstack/root-account-create-update-sqtzz" Feb 17 15:46:23.973339 master-0 kubenswrapper[26425]: I0217 15:46:23.973306 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pws5v\" (UniqueName: \"kubernetes.io/projected/60423849-80c7-459d-a2b2-68f9f449d899-kube-api-access-pws5v\") pod \"root-account-create-update-sqtzz\" (UID: \"60423849-80c7-459d-a2b2-68f9f449d899\") " pod="openstack/root-account-create-update-sqtzz" Feb 17 15:46:23.973646 master-0 kubenswrapper[26425]: I0217 15:46:23.973612 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdaf2a40-bdbe-47b5-9b1f-42582d1301a2-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:23.975393 master-0 kubenswrapper[26425]: I0217 15:46:23.975337 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdaf2a40-bdbe-47b5-9b1f-42582d1301a2-kube-api-access-276z5" (OuterVolumeSpecName: "kube-api-access-276z5") pod "cdaf2a40-bdbe-47b5-9b1f-42582d1301a2" (UID: "cdaf2a40-bdbe-47b5-9b1f-42582d1301a2"). 
InnerVolumeSpecName "kube-api-access-276z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:46:24.075948 master-0 kubenswrapper[26425]: I0217 15:46:24.075796 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60423849-80c7-459d-a2b2-68f9f449d899-operator-scripts\") pod \"root-account-create-update-sqtzz\" (UID: \"60423849-80c7-459d-a2b2-68f9f449d899\") " pod="openstack/root-account-create-update-sqtzz" Feb 17 15:46:24.076179 master-0 kubenswrapper[26425]: I0217 15:46:24.075973 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pws5v\" (UniqueName: \"kubernetes.io/projected/60423849-80c7-459d-a2b2-68f9f449d899-kube-api-access-pws5v\") pod \"root-account-create-update-sqtzz\" (UID: \"60423849-80c7-459d-a2b2-68f9f449d899\") " pod="openstack/root-account-create-update-sqtzz" Feb 17 15:46:24.076179 master-0 kubenswrapper[26425]: I0217 15:46:24.076174 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-276z5\" (UniqueName: \"kubernetes.io/projected/cdaf2a40-bdbe-47b5-9b1f-42582d1301a2-kube-api-access-276z5\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:24.076816 master-0 kubenswrapper[26425]: I0217 15:46:24.076757 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60423849-80c7-459d-a2b2-68f9f449d899-operator-scripts\") pod \"root-account-create-update-sqtzz\" (UID: \"60423849-80c7-459d-a2b2-68f9f449d899\") " pod="openstack/root-account-create-update-sqtzz" Feb 17 15:46:24.093121 master-0 kubenswrapper[26425]: I0217 15:46:24.093028 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pws5v\" (UniqueName: \"kubernetes.io/projected/60423849-80c7-459d-a2b2-68f9f449d899-kube-api-access-pws5v\") pod \"root-account-create-update-sqtzz\" (UID: 
\"60423849-80c7-459d-a2b2-68f9f449d899\") " pod="openstack/root-account-create-update-sqtzz" Feb 17 15:46:24.223713 master-0 kubenswrapper[26425]: I0217 15:46:24.223639 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sqtzz" Feb 17 15:46:24.398363 master-0 kubenswrapper[26425]: I0217 15:46:24.396838 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-hdh27" Feb 17 15:46:24.398363 master-0 kubenswrapper[26425]: I0217 15:46:24.397208 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qfrvt" Feb 17 15:46:24.413484 master-0 kubenswrapper[26425]: I0217 15:46:24.413399 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qfrvt" event={"ID":"cdaf2a40-bdbe-47b5-9b1f-42582d1301a2","Type":"ContainerDied","Data":"94c0d6e2f3bcf571f5a0dcabc0204183affa5ae5b2b30a13ff101b72299f0fdd"} Feb 17 15:46:24.413484 master-0 kubenswrapper[26425]: I0217 15:46:24.413474 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94c0d6e2f3bcf571f5a0dcabc0204183affa5ae5b2b30a13ff101b72299f0fdd" Feb 17 15:46:24.450137 master-0 kubenswrapper[26425]: I0217 15:46:24.450060 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-hdh27"] Feb 17 15:46:24.464483 master-0 kubenswrapper[26425]: I0217 15:46:24.464405 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-hdh27"] Feb 17 15:46:24.679632 master-0 kubenswrapper[26425]: I0217 15:46:24.679427 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sqtzz"] Feb 17 15:46:24.859033 master-0 kubenswrapper[26425]: I0217 15:46:24.858986 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4c91-account-create-update-b2plp" Feb 17 15:46:24.998968 master-0 kubenswrapper[26425]: I0217 15:46:24.998213 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8c6fa13-5c49-4e83-9492-208c6cd1fb61-operator-scripts\") pod \"a8c6fa13-5c49-4e83-9492-208c6cd1fb61\" (UID: \"a8c6fa13-5c49-4e83-9492-208c6cd1fb61\") " Feb 17 15:46:24.998968 master-0 kubenswrapper[26425]: I0217 15:46:24.998381 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcn2q\" (UniqueName: \"kubernetes.io/projected/a8c6fa13-5c49-4e83-9492-208c6cd1fb61-kube-api-access-zcn2q\") pod \"a8c6fa13-5c49-4e83-9492-208c6cd1fb61\" (UID: \"a8c6fa13-5c49-4e83-9492-208c6cd1fb61\") " Feb 17 15:46:24.999570 master-0 kubenswrapper[26425]: I0217 15:46:24.999435 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8c6fa13-5c49-4e83-9492-208c6cd1fb61-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8c6fa13-5c49-4e83-9492-208c6cd1fb61" (UID: "a8c6fa13-5c49-4e83-9492-208c6cd1fb61"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:25.002862 master-0 kubenswrapper[26425]: I0217 15:46:25.002798 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c6fa13-5c49-4e83-9492-208c6cd1fb61-kube-api-access-zcn2q" (OuterVolumeSpecName: "kube-api-access-zcn2q") pod "a8c6fa13-5c49-4e83-9492-208c6cd1fb61" (UID: "a8c6fa13-5c49-4e83-9492-208c6cd1fb61"). InnerVolumeSpecName "kube-api-access-zcn2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:46:25.102108 master-0 kubenswrapper[26425]: I0217 15:46:25.102025 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8c6fa13-5c49-4e83-9492-208c6cd1fb61-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:25.102108 master-0 kubenswrapper[26425]: I0217 15:46:25.102104 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcn2q\" (UniqueName: \"kubernetes.io/projected/a8c6fa13-5c49-4e83-9492-208c6cd1fb61-kube-api-access-zcn2q\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:25.411941 master-0 kubenswrapper[26425]: I0217 15:46:25.411863 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4c91-account-create-update-b2plp" event={"ID":"a8c6fa13-5c49-4e83-9492-208c6cd1fb61","Type":"ContainerDied","Data":"d4abd598c4530c6d8c50dbb1f12f7d2f61fa82bbef48a8bf9680b49ab774da6d"} Feb 17 15:46:25.411941 master-0 kubenswrapper[26425]: I0217 15:46:25.411947 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4abd598c4530c6d8c50dbb1f12f7d2f61fa82bbef48a8bf9680b49ab774da6d" Feb 17 15:46:25.412348 master-0 kubenswrapper[26425]: I0217 15:46:25.411880 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4c91-account-create-update-b2plp" Feb 17 15:46:25.414127 master-0 kubenswrapper[26425]: I0217 15:46:25.414072 26425 generic.go:334] "Generic (PLEG): container finished" podID="60423849-80c7-459d-a2b2-68f9f449d899" containerID="3c1068a5d4af9b8119b312ffb45920f47a7119c267cb19cbcc60b8594e5e290e" exitCode=0 Feb 17 15:46:25.414254 master-0 kubenswrapper[26425]: I0217 15:46:25.414130 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sqtzz" event={"ID":"60423849-80c7-459d-a2b2-68f9f449d899","Type":"ContainerDied","Data":"3c1068a5d4af9b8119b312ffb45920f47a7119c267cb19cbcc60b8594e5e290e"} Feb 17 15:46:25.414254 master-0 kubenswrapper[26425]: I0217 15:46:25.414163 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sqtzz" event={"ID":"60423849-80c7-459d-a2b2-68f9f449d899","Type":"ContainerStarted","Data":"b8e9c44eb1258b9ae5a73ce91571374b0dde486e9cee3b7d5e2a1164aba0f413"} Feb 17 15:46:26.311169 master-0 kubenswrapper[26425]: I0217 15:46:26.311090 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-88f2d"] Feb 17 15:46:26.312197 master-0 kubenswrapper[26425]: E0217 15:46:26.311688 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c6fa13-5c49-4e83-9492-208c6cd1fb61" containerName="mariadb-account-create-update" Feb 17 15:46:26.312197 master-0 kubenswrapper[26425]: I0217 15:46:26.311712 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c6fa13-5c49-4e83-9492-208c6cd1fb61" containerName="mariadb-account-create-update" Feb 17 15:46:26.312197 master-0 kubenswrapper[26425]: I0217 15:46:26.312048 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c6fa13-5c49-4e83-9492-208c6cd1fb61" containerName="mariadb-account-create-update" Feb 17 15:46:26.313210 master-0 kubenswrapper[26425]: I0217 15:46:26.313151 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:26.316264 master-0 kubenswrapper[26425]: I0217 15:46:26.316187 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-7b9c2-config-data" Feb 17 15:46:26.326704 master-0 kubenswrapper[26425]: I0217 15:46:26.326543 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-88f2d"] Feb 17 15:46:26.428734 master-0 kubenswrapper[26425]: I0217 15:46:26.428538 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2122296-6151-4ec0-b71c-fd6ad516ffb4" path="/var/lib/kubelet/pods/a2122296-6151-4ec0-b71c-fd6ad516ffb4/volumes" Feb 17 15:46:26.455111 master-0 kubenswrapper[26425]: I0217 15:46:26.454991 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sksdh\" (UniqueName: \"kubernetes.io/projected/b8d86a11-7897-4196-93bb-916b7472a6e0-kube-api-access-sksdh\") pod \"glance-db-sync-88f2d\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:26.455520 master-0 kubenswrapper[26425]: I0217 15:46:26.455171 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-db-sync-config-data\") pod \"glance-db-sync-88f2d\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:26.455520 master-0 kubenswrapper[26425]: I0217 15:46:26.455225 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-combined-ca-bundle\") pod \"glance-db-sync-88f2d\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:26.455856 master-0 kubenswrapper[26425]: I0217 15:46:26.455732 
26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-config-data\") pod \"glance-db-sync-88f2d\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:26.558229 master-0 kubenswrapper[26425]: I0217 15:46:26.558012 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-combined-ca-bundle\") pod \"glance-db-sync-88f2d\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:26.558875 master-0 kubenswrapper[26425]: I0217 15:46:26.558257 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-config-data\") pod \"glance-db-sync-88f2d\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:26.558875 master-0 kubenswrapper[26425]: I0217 15:46:26.558716 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sksdh\" (UniqueName: \"kubernetes.io/projected/b8d86a11-7897-4196-93bb-916b7472a6e0-kube-api-access-sksdh\") pod \"glance-db-sync-88f2d\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:26.558875 master-0 kubenswrapper[26425]: I0217 15:46:26.558762 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-db-sync-config-data\") pod \"glance-db-sync-88f2d\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:26.575609 master-0 kubenswrapper[26425]: I0217 15:46:26.566037 26425 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-db-sync-config-data\") pod \"glance-db-sync-88f2d\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:26.575609 master-0 kubenswrapper[26425]: I0217 15:46:26.566605 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-config-data\") pod \"glance-db-sync-88f2d\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:26.575609 master-0 kubenswrapper[26425]: I0217 15:46:26.567398 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-combined-ca-bundle\") pod \"glance-db-sync-88f2d\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:26.597627 master-0 kubenswrapper[26425]: I0217 15:46:26.594556 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sksdh\" (UniqueName: \"kubernetes.io/projected/b8d86a11-7897-4196-93bb-916b7472a6e0-kube-api-access-sksdh\") pod \"glance-db-sync-88f2d\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:26.667396 master-0 kubenswrapper[26425]: I0217 15:46:26.667303 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:26.883932 master-0 kubenswrapper[26425]: I0217 15:46:26.883867 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sqtzz" Feb 17 15:46:27.072342 master-0 kubenswrapper[26425]: I0217 15:46:27.068629 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60423849-80c7-459d-a2b2-68f9f449d899-operator-scripts\") pod \"60423849-80c7-459d-a2b2-68f9f449d899\" (UID: \"60423849-80c7-459d-a2b2-68f9f449d899\") " Feb 17 15:46:27.072342 master-0 kubenswrapper[26425]: I0217 15:46:27.068924 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pws5v\" (UniqueName: \"kubernetes.io/projected/60423849-80c7-459d-a2b2-68f9f449d899-kube-api-access-pws5v\") pod \"60423849-80c7-459d-a2b2-68f9f449d899\" (UID: \"60423849-80c7-459d-a2b2-68f9f449d899\") " Feb 17 15:46:27.072342 master-0 kubenswrapper[26425]: I0217 15:46:27.069504 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60423849-80c7-459d-a2b2-68f9f449d899-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60423849-80c7-459d-a2b2-68f9f449d899" (UID: "60423849-80c7-459d-a2b2-68f9f449d899"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:27.073813 master-0 kubenswrapper[26425]: I0217 15:46:27.073744 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60423849-80c7-459d-a2b2-68f9f449d899-kube-api-access-pws5v" (OuterVolumeSpecName: "kube-api-access-pws5v") pod "60423849-80c7-459d-a2b2-68f9f449d899" (UID: "60423849-80c7-459d-a2b2-68f9f449d899"). InnerVolumeSpecName "kube-api-access-pws5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:46:27.171355 master-0 kubenswrapper[26425]: I0217 15:46:27.171290 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60423849-80c7-459d-a2b2-68f9f449d899-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:27.171355 master-0 kubenswrapper[26425]: I0217 15:46:27.171343 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pws5v\" (UniqueName: \"kubernetes.io/projected/60423849-80c7-459d-a2b2-68f9f449d899-kube-api-access-pws5v\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:27.261446 master-0 kubenswrapper[26425]: I0217 15:46:27.261322 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-88f2d"] Feb 17 15:46:27.278786 master-0 kubenswrapper[26425]: W0217 15:46:27.278397 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8d86a11_7897_4196_93bb_916b7472a6e0.slice/crio-b8848870af1b7c03deba06d9ffc87a5e7829c67eed1989651426e58f4ad6c4ba WatchSource:0}: Error finding container b8848870af1b7c03deba06d9ffc87a5e7829c67eed1989651426e58f4ad6c4ba: Status 404 returned error can't find the container with id b8848870af1b7c03deba06d9ffc87a5e7829c67eed1989651426e58f4ad6c4ba Feb 17 15:46:27.438610 master-0 kubenswrapper[26425]: I0217 15:46:27.438477 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sqtzz" event={"ID":"60423849-80c7-459d-a2b2-68f9f449d899","Type":"ContainerDied","Data":"b8e9c44eb1258b9ae5a73ce91571374b0dde486e9cee3b7d5e2a1164aba0f413"} Feb 17 15:46:27.438610 master-0 kubenswrapper[26425]: I0217 15:46:27.438517 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sqtzz" Feb 17 15:46:27.438610 master-0 kubenswrapper[26425]: I0217 15:46:27.438552 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8e9c44eb1258b9ae5a73ce91571374b0dde486e9cee3b7d5e2a1164aba0f413" Feb 17 15:46:27.440005 master-0 kubenswrapper[26425]: I0217 15:46:27.439973 26425 generic.go:334] "Generic (PLEG): container finished" podID="4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6" containerID="293b84de33728b619c3cab27a74ed1be9399bdbb19617ae86da678bc438d5b97" exitCode=0 Feb 17 15:46:27.440160 master-0 kubenswrapper[26425]: I0217 15:46:27.440015 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4xb95" event={"ID":"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6","Type":"ContainerDied","Data":"293b84de33728b619c3cab27a74ed1be9399bdbb19617ae86da678bc438d5b97"} Feb 17 15:46:27.441516 master-0 kubenswrapper[26425]: I0217 15:46:27.441412 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-88f2d" event={"ID":"b8d86a11-7897-4196-93bb-916b7472a6e0","Type":"ContainerStarted","Data":"b8848870af1b7c03deba06d9ffc87a5e7829c67eed1989651426e58f4ad6c4ba"} Feb 17 15:46:29.017048 master-0 kubenswrapper[26425]: I0217 15:46:29.016997 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:29.113728 master-0 kubenswrapper[26425]: I0217 15:46:29.113670 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-dispersionconf\") pod \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " Feb 17 15:46:29.113966 master-0 kubenswrapper[26425]: I0217 15:46:29.113840 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-ring-data-devices\") pod \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " Feb 17 15:46:29.114070 master-0 kubenswrapper[26425]: I0217 15:46:29.114044 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-swiftconf\") pod \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " Feb 17 15:46:29.114725 master-0 kubenswrapper[26425]: I0217 15:46:29.114678 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-etc-swift\") pod \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " Feb 17 15:46:29.114854 master-0 kubenswrapper[26425]: I0217 15:46:29.114681 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6" (UID: "4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:29.114920 master-0 kubenswrapper[26425]: I0217 15:46:29.114875 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-combined-ca-bundle\") pod \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " Feb 17 15:46:29.114971 master-0 kubenswrapper[26425]: I0217 15:46:29.114927 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc7dx\" (UniqueName: \"kubernetes.io/projected/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-kube-api-access-zc7dx\") pod \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " Feb 17 15:46:29.115032 master-0 kubenswrapper[26425]: I0217 15:46:29.115005 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-scripts\") pod \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\" (UID: \"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6\") " Feb 17 15:46:29.115822 master-0 kubenswrapper[26425]: I0217 15:46:29.115781 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6" (UID: "4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:46:29.115981 master-0 kubenswrapper[26425]: I0217 15:46:29.115951 26425 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:29.115981 master-0 kubenswrapper[26425]: I0217 15:46:29.115978 26425 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-etc-swift\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:29.120200 master-0 kubenswrapper[26425]: I0217 15:46:29.120117 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6" (UID: "4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:46:29.122545 master-0 kubenswrapper[26425]: I0217 15:46:29.122290 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-kube-api-access-zc7dx" (OuterVolumeSpecName: "kube-api-access-zc7dx") pod "4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6" (UID: "4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6"). InnerVolumeSpecName "kube-api-access-zc7dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:46:29.149963 master-0 kubenswrapper[26425]: I0217 15:46:29.149846 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6" (UID: "4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:46:29.152132 master-0 kubenswrapper[26425]: I0217 15:46:29.152064 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-scripts" (OuterVolumeSpecName: "scripts") pod "4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6" (UID: "4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:29.156724 master-0 kubenswrapper[26425]: I0217 15:46:29.156670 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6" (UID: "4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:46:29.218788 master-0 kubenswrapper[26425]: I0217 15:46:29.218722 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc7dx\" (UniqueName: \"kubernetes.io/projected/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-kube-api-access-zc7dx\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:29.218788 master-0 kubenswrapper[26425]: I0217 15:46:29.218785 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:29.218925 master-0 kubenswrapper[26425]: I0217 15:46:29.218798 26425 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-dispersionconf\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:29.218925 master-0 kubenswrapper[26425]: I0217 15:46:29.218810 26425 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-swiftconf\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:29.218925 master-0 kubenswrapper[26425]: I0217 15:46:29.218824 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:29.479503 master-0 kubenswrapper[26425]: I0217 15:46:29.478514 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4xb95" event={"ID":"4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6","Type":"ContainerDied","Data":"48f1ebe9564e81b7e47014396d38fc695f3e07999c41c63672cf2d7cf848192f"} Feb 17 15:46:29.479503 master-0 kubenswrapper[26425]: I0217 15:46:29.478585 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48f1ebe9564e81b7e47014396d38fc695f3e07999c41c63672cf2d7cf848192f" Feb 17 15:46:29.479503 master-0 kubenswrapper[26425]: I0217 15:46:29.478551 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4xb95" Feb 17 15:46:29.937889 master-0 kubenswrapper[26425]: I0217 15:46:29.937815 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:29.946243 master-0 kubenswrapper[26425]: I0217 15:46:29.946164 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a9aa702-781f-4cf7-88c9-3ff414265810-etc-swift\") pod \"swift-storage-0\" (UID: \"0a9aa702-781f-4cf7-88c9-3ff414265810\") " pod="openstack/swift-storage-0" Feb 17 15:46:29.980317 master-0 kubenswrapper[26425]: I0217 15:46:29.980268 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 15:46:30.191518 master-0 kubenswrapper[26425]: I0217 15:46:30.191407 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sqtzz"] Feb 17 15:46:30.198833 master-0 kubenswrapper[26425]: I0217 15:46:30.198706 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sqtzz"] Feb 17 15:46:30.410303 master-0 kubenswrapper[26425]: I0217 15:46:30.410242 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60423849-80c7-459d-a2b2-68f9f449d899" path="/var/lib/kubelet/pods/60423849-80c7-459d-a2b2-68f9f449d899/volumes" Feb 17 15:46:30.558421 master-0 kubenswrapper[26425]: I0217 15:46:30.558353 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 15:46:30.560096 master-0 kubenswrapper[26425]: W0217 15:46:30.559793 26425 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a9aa702_781f_4cf7_88c9_3ff414265810.slice/crio-f6c38190144f44deab98f06995880c58a632a79f171190533683edbe199a49a6 WatchSource:0}: Error finding container f6c38190144f44deab98f06995880c58a632a79f171190533683edbe199a49a6: Status 404 returned error can't find the container with id f6c38190144f44deab98f06995880c58a632a79f171190533683edbe199a49a6 Feb 17 15:46:31.504920 master-0 kubenswrapper[26425]: I0217 15:46:31.504832 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"f6c38190144f44deab98f06995880c58a632a79f171190533683edbe199a49a6"} Feb 17 15:46:32.168589 master-0 kubenswrapper[26425]: I0217 15:46:32.168536 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hdbmn" podUID="b83eed22-dd59-4e1d-91c1-fed8bead5b05" containerName="ovn-controller" probeResult="failure" output=< Feb 17 15:46:32.168589 master-0 kubenswrapper[26425]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 15:46:32.168589 master-0 kubenswrapper[26425]: > Feb 17 15:46:32.212600 master-0 kubenswrapper[26425]: I0217 15:46:32.212545 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-fxgqd" Feb 17 15:46:32.225342 master-0 kubenswrapper[26425]: I0217 15:46:32.224845 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-fxgqd" Feb 17 15:46:32.508949 master-0 kubenswrapper[26425]: I0217 15:46:32.508868 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hdbmn-config-6g5c8"] Feb 17 15:46:32.509906 master-0 kubenswrapper[26425]: E0217 15:46:32.509880 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60423849-80c7-459d-a2b2-68f9f449d899" 
containerName="mariadb-account-create-update" Feb 17 15:46:32.509906 master-0 kubenswrapper[26425]: I0217 15:46:32.509907 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="60423849-80c7-459d-a2b2-68f9f449d899" containerName="mariadb-account-create-update" Feb 17 15:46:32.510004 master-0 kubenswrapper[26425]: E0217 15:46:32.509977 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6" containerName="swift-ring-rebalance" Feb 17 15:46:32.510004 master-0 kubenswrapper[26425]: I0217 15:46:32.509987 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6" containerName="swift-ring-rebalance" Feb 17 15:46:32.511240 master-0 kubenswrapper[26425]: I0217 15:46:32.510334 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba6a378-cf2a-4dbd-afe7-dfa73b7765a6" containerName="swift-ring-rebalance" Feb 17 15:46:32.511240 master-0 kubenswrapper[26425]: I0217 15:46:32.510413 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="60423849-80c7-459d-a2b2-68f9f449d899" containerName="mariadb-account-create-update" Feb 17 15:46:32.511384 master-0 kubenswrapper[26425]: I0217 15:46:32.511354 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.514745 master-0 kubenswrapper[26425]: I0217 15:46:32.513840 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 15:46:32.522430 master-0 kubenswrapper[26425]: I0217 15:46:32.522356 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"a7e7725f0250d7187757eabbdd32334111842020805a05d65ff56af0ccf17dd7"} Feb 17 15:46:32.522430 master-0 kubenswrapper[26425]: I0217 15:46:32.522407 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"ccf701dc0731a5a291151465f1f4a5c5b5b0bebd5cd53e6afbddcde318df3949"} Feb 17 15:46:32.527153 master-0 kubenswrapper[26425]: I0217 15:46:32.527107 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hdbmn-config-6g5c8"] Feb 17 15:46:32.572870 master-0 kubenswrapper[26425]: I0217 15:46:32.572815 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 17 15:46:32.605477 master-0 kubenswrapper[26425]: I0217 15:46:32.603982 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-run\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.605477 master-0 kubenswrapper[26425]: I0217 15:46:32.604056 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-run-ovn\") pod 
\"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.605477 master-0 kubenswrapper[26425]: I0217 15:46:32.604084 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-additional-scripts\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.605477 master-0 kubenswrapper[26425]: I0217 15:46:32.604249 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krjzv\" (UniqueName: \"kubernetes.io/projected/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-kube-api-access-krjzv\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.605477 master-0 kubenswrapper[26425]: I0217 15:46:32.604269 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-log-ovn\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.605477 master-0 kubenswrapper[26425]: I0217 15:46:32.604294 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-scripts\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.707591 master-0 kubenswrapper[26425]: I0217 15:46:32.705717 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-krjzv\" (UniqueName: \"kubernetes.io/projected/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-kube-api-access-krjzv\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.707591 master-0 kubenswrapper[26425]: I0217 15:46:32.705780 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-log-ovn\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.707591 master-0 kubenswrapper[26425]: I0217 15:46:32.705808 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-scripts\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.707591 master-0 kubenswrapper[26425]: I0217 15:46:32.705841 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-run\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.707591 master-0 kubenswrapper[26425]: I0217 15:46:32.705896 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-run-ovn\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.707591 master-0 kubenswrapper[26425]: I0217 
15:46:32.705924 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-additional-scripts\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.707591 master-0 kubenswrapper[26425]: I0217 15:46:32.706805 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-additional-scripts\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.708000 master-0 kubenswrapper[26425]: I0217 15:46:32.707938 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-log-ovn\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.716794 master-0 kubenswrapper[26425]: I0217 15:46:32.716654 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-run-ovn\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.716794 master-0 kubenswrapper[26425]: I0217 15:46:32.716736 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-run\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.720472 master-0 
kubenswrapper[26425]: I0217 15:46:32.719196 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-scripts\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.760546 master-0 kubenswrapper[26425]: I0217 15:46:32.754298 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krjzv\" (UniqueName: \"kubernetes.io/projected/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-kube-api-access-krjzv\") pod \"ovn-controller-hdbmn-config-6g5c8\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:32.855483 master-0 kubenswrapper[26425]: I0217 15:46:32.854440 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:34.555289 master-0 kubenswrapper[26425]: I0217 15:46:34.555196 26425 generic.go:334] "Generic (PLEG): container finished" podID="3dc68acf-40ce-41a7-8633-6f19a9382a89" containerID="54ce7ad2c8ab9ee395e10da368db9beb35aa7ff1a0454ba6e043f985f430aef7" exitCode=0 Feb 17 15:46:34.556194 master-0 kubenswrapper[26425]: I0217 15:46:34.555329 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3dc68acf-40ce-41a7-8633-6f19a9382a89","Type":"ContainerDied","Data":"54ce7ad2c8ab9ee395e10da368db9beb35aa7ff1a0454ba6e043f985f430aef7"} Feb 17 15:46:34.561806 master-0 kubenswrapper[26425]: I0217 15:46:34.561753 26425 generic.go:334] "Generic (PLEG): container finished" podID="1f67d3cf-a7f4-4ead-9b78-4a247036b3d5" containerID="7cce890fad9e79fd12c6cbd29d474658c922e25adc1283ca569da806108401c5" exitCode=0 Feb 17 15:46:34.562018 master-0 kubenswrapper[26425]: I0217 15:46:34.561874 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5","Type":"ContainerDied","Data":"7cce890fad9e79fd12c6cbd29d474658c922e25adc1283ca569da806108401c5"} Feb 17 15:46:35.226810 master-0 kubenswrapper[26425]: I0217 15:46:35.226731 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tdkt8"] Feb 17 15:46:35.230652 master-0 kubenswrapper[26425]: I0217 15:46:35.230431 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tdkt8" Feb 17 15:46:35.234363 master-0 kubenswrapper[26425]: I0217 15:46:35.234314 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 15:46:35.241213 master-0 kubenswrapper[26425]: I0217 15:46:35.238272 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tdkt8"] Feb 17 15:46:35.272741 master-0 kubenswrapper[26425]: I0217 15:46:35.267323 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52mh4\" (UniqueName: \"kubernetes.io/projected/3d1286c3-3a70-4281-bbae-80511edc3742-kube-api-access-52mh4\") pod \"root-account-create-update-tdkt8\" (UID: \"3d1286c3-3a70-4281-bbae-80511edc3742\") " pod="openstack/root-account-create-update-tdkt8" Feb 17 15:46:35.272741 master-0 kubenswrapper[26425]: I0217 15:46:35.267491 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d1286c3-3a70-4281-bbae-80511edc3742-operator-scripts\") pod \"root-account-create-update-tdkt8\" (UID: \"3d1286c3-3a70-4281-bbae-80511edc3742\") " pod="openstack/root-account-create-update-tdkt8" Feb 17 15:46:35.368911 master-0 kubenswrapper[26425]: I0217 15:46:35.368827 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-52mh4\" (UniqueName: \"kubernetes.io/projected/3d1286c3-3a70-4281-bbae-80511edc3742-kube-api-access-52mh4\") pod \"root-account-create-update-tdkt8\" (UID: \"3d1286c3-3a70-4281-bbae-80511edc3742\") " pod="openstack/root-account-create-update-tdkt8" Feb 17 15:46:35.369155 master-0 kubenswrapper[26425]: I0217 15:46:35.369094 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d1286c3-3a70-4281-bbae-80511edc3742-operator-scripts\") pod \"root-account-create-update-tdkt8\" (UID: \"3d1286c3-3a70-4281-bbae-80511edc3742\") " pod="openstack/root-account-create-update-tdkt8" Feb 17 15:46:35.370350 master-0 kubenswrapper[26425]: I0217 15:46:35.370320 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d1286c3-3a70-4281-bbae-80511edc3742-operator-scripts\") pod \"root-account-create-update-tdkt8\" (UID: \"3d1286c3-3a70-4281-bbae-80511edc3742\") " pod="openstack/root-account-create-update-tdkt8" Feb 17 15:46:35.386266 master-0 kubenswrapper[26425]: I0217 15:46:35.386233 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52mh4\" (UniqueName: \"kubernetes.io/projected/3d1286c3-3a70-4281-bbae-80511edc3742-kube-api-access-52mh4\") pod \"root-account-create-update-tdkt8\" (UID: \"3d1286c3-3a70-4281-bbae-80511edc3742\") " pod="openstack/root-account-create-update-tdkt8" Feb 17 15:46:35.577890 master-0 kubenswrapper[26425]: I0217 15:46:35.577758 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tdkt8" Feb 17 15:46:37.174154 master-0 kubenswrapper[26425]: I0217 15:46:37.174066 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hdbmn" podUID="b83eed22-dd59-4e1d-91c1-fed8bead5b05" containerName="ovn-controller" probeResult="failure" output=< Feb 17 15:46:37.174154 master-0 kubenswrapper[26425]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 15:46:37.174154 master-0 kubenswrapper[26425]: > Feb 17 15:46:39.625666 master-0 kubenswrapper[26425]: I0217 15:46:39.625610 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f67d3cf-a7f4-4ead-9b78-4a247036b3d5","Type":"ContainerStarted","Data":"a0d82695fcbe741ed13268583d4a74ab6aea8101e61ab47d861e2f7ed93b6e5e"} Feb 17 15:46:39.627350 master-0 kubenswrapper[26425]: I0217 15:46:39.627318 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 15:46:39.628023 master-0 kubenswrapper[26425]: I0217 15:46:39.627987 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3dc68acf-40ce-41a7-8633-6f19a9382a89","Type":"ContainerStarted","Data":"2c16a371d37fb17512afe5fc79314916645e62743519abeb4fc30448afe7afde"} Feb 17 15:46:39.628683 master-0 kubenswrapper[26425]: I0217 15:46:39.628209 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 15:46:39.632204 master-0 kubenswrapper[26425]: I0217 15:46:39.632145 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"07f19062d82962362a79ac63f6dc275a9353bde73a56590bfc806e2b507050ed"} Feb 17 15:46:39.652388 master-0 kubenswrapper[26425]: I0217 15:46:39.652134 26425 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/root-account-create-update-tdkt8"] Feb 17 15:46:39.664645 master-0 kubenswrapper[26425]: I0217 15:46:39.664590 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hdbmn-config-6g5c8"] Feb 17 15:46:40.010696 master-0 kubenswrapper[26425]: I0217 15:46:40.010108 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 15:46:40.115109 master-0 kubenswrapper[26425]: I0217 15:46:40.114886 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=67.949014029 podStartE2EDuration="1m18.114865323s" podCreationTimestamp="2026-02-17 15:45:22 +0000 UTC" firstStartedPulling="2026-02-17 15:45:50.265738292 +0000 UTC m=+1812.157462130" lastFinishedPulling="2026-02-17 15:46:00.431589606 +0000 UTC m=+1822.323313424" observedRunningTime="2026-02-17 15:46:40.104203027 +0000 UTC m=+1861.995926865" watchObservedRunningTime="2026-02-17 15:46:40.114865323 +0000 UTC m=+1862.006589141" Feb 17 15:46:40.283346 master-0 kubenswrapper[26425]: I0217 15:46:40.282574 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=69.415368033 podStartE2EDuration="1m19.282552744s" podCreationTimestamp="2026-02-17 15:45:21 +0000 UTC" firstStartedPulling="2026-02-17 15:45:50.274680387 +0000 UTC m=+1812.166404205" lastFinishedPulling="2026-02-17 15:46:00.141865098 +0000 UTC m=+1822.033588916" observedRunningTime="2026-02-17 15:46:40.276390166 +0000 UTC m=+1862.168114004" watchObservedRunningTime="2026-02-17 15:46:40.282552744 +0000 UTC m=+1862.174276562" Feb 17 15:46:40.647419 master-0 kubenswrapper[26425]: I0217 15:46:40.647340 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"62f8b3cb12a390a8bd5ec1e2d2fc5919d62399c0956abce6959f5176facdd47e"} Feb 17 15:46:40.650064 master-0 kubenswrapper[26425]: I0217 15:46:40.649986 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hdbmn-config-6g5c8" event={"ID":"9ab6cc21-a2b1-4314-abf8-722c0be09ee9","Type":"ContainerStarted","Data":"248b53655b8165ff426335a7dbd83c2d2e53c153789a0b0a2017cae3709af50a"} Feb 17 15:46:40.650158 master-0 kubenswrapper[26425]: I0217 15:46:40.650090 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hdbmn-config-6g5c8" event={"ID":"9ab6cc21-a2b1-4314-abf8-722c0be09ee9","Type":"ContainerStarted","Data":"783abac86f3fcc756d66645c1e5314ff19eecec45cd5d2980871d2befd4994c1"} Feb 17 15:46:40.653028 master-0 kubenswrapper[26425]: I0217 15:46:40.652978 26425 generic.go:334] "Generic (PLEG): container finished" podID="3d1286c3-3a70-4281-bbae-80511edc3742" containerID="cc3105b0256c4fba6875d29ef73613e744867a8acbff04d06d1deddfc3802b54" exitCode=0 Feb 17 15:46:40.653128 master-0 kubenswrapper[26425]: I0217 15:46:40.653023 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tdkt8" event={"ID":"3d1286c3-3a70-4281-bbae-80511edc3742","Type":"ContainerDied","Data":"cc3105b0256c4fba6875d29ef73613e744867a8acbff04d06d1deddfc3802b54"} Feb 17 15:46:40.653128 master-0 kubenswrapper[26425]: I0217 15:46:40.653077 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tdkt8" event={"ID":"3d1286c3-3a70-4281-bbae-80511edc3742","Type":"ContainerStarted","Data":"8dcf849f307722165a4c9c9524c132abc8426da5185306edf97c80849adedebc"} Feb 17 15:46:40.678886 master-0 kubenswrapper[26425]: I0217 15:46:40.678813 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hdbmn-config-6g5c8" podStartSLOduration=8.678793967 
podStartE2EDuration="8.678793967s" podCreationTimestamp="2026-02-17 15:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:46:40.666235036 +0000 UTC m=+1862.557958874" watchObservedRunningTime="2026-02-17 15:46:40.678793967 +0000 UTC m=+1862.570517785" Feb 17 15:46:41.701502 master-0 kubenswrapper[26425]: I0217 15:46:41.701386 26425 generic.go:334] "Generic (PLEG): container finished" podID="9ab6cc21-a2b1-4314-abf8-722c0be09ee9" containerID="248b53655b8165ff426335a7dbd83c2d2e53c153789a0b0a2017cae3709af50a" exitCode=0 Feb 17 15:46:41.702383 master-0 kubenswrapper[26425]: I0217 15:46:41.701620 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hdbmn-config-6g5c8" event={"ID":"9ab6cc21-a2b1-4314-abf8-722c0be09ee9","Type":"ContainerDied","Data":"248b53655b8165ff426335a7dbd83c2d2e53c153789a0b0a2017cae3709af50a"} Feb 17 15:46:41.704304 master-0 kubenswrapper[26425]: I0217 15:46:41.704237 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-88f2d" event={"ID":"b8d86a11-7897-4196-93bb-916b7472a6e0","Type":"ContainerStarted","Data":"794618c6172b3a35078743fa3aa977e50d16860b106a0b47f63fa9f15f882539"} Feb 17 15:46:41.753255 master-0 kubenswrapper[26425]: I0217 15:46:41.753101 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-88f2d" podStartSLOduration=2.9467777550000003 podStartE2EDuration="15.753066299s" podCreationTimestamp="2026-02-17 15:46:26 +0000 UTC" firstStartedPulling="2026-02-17 15:46:27.280748081 +0000 UTC m=+1849.172471899" lastFinishedPulling="2026-02-17 15:46:40.087036585 +0000 UTC m=+1861.978760443" observedRunningTime="2026-02-17 15:46:41.746398729 +0000 UTC m=+1863.638122567" watchObservedRunningTime="2026-02-17 15:46:41.753066299 +0000 UTC m=+1863.644790177" Feb 17 15:46:42.187299 master-0 kubenswrapper[26425]: I0217 15:46:42.187226 
26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-hdbmn" Feb 17 15:46:42.229482 master-0 kubenswrapper[26425]: I0217 15:46:42.226969 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tdkt8" Feb 17 15:46:42.361526 master-0 kubenswrapper[26425]: I0217 15:46:42.361436 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d1286c3-3a70-4281-bbae-80511edc3742-operator-scripts\") pod \"3d1286c3-3a70-4281-bbae-80511edc3742\" (UID: \"3d1286c3-3a70-4281-bbae-80511edc3742\") " Feb 17 15:46:42.361526 master-0 kubenswrapper[26425]: I0217 15:46:42.361513 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52mh4\" (UniqueName: \"kubernetes.io/projected/3d1286c3-3a70-4281-bbae-80511edc3742-kube-api-access-52mh4\") pod \"3d1286c3-3a70-4281-bbae-80511edc3742\" (UID: \"3d1286c3-3a70-4281-bbae-80511edc3742\") " Feb 17 15:46:42.361905 master-0 kubenswrapper[26425]: I0217 15:46:42.361820 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d1286c3-3a70-4281-bbae-80511edc3742-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d1286c3-3a70-4281-bbae-80511edc3742" (UID: "3d1286c3-3a70-4281-bbae-80511edc3742"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:42.362517 master-0 kubenswrapper[26425]: I0217 15:46:42.362463 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d1286c3-3a70-4281-bbae-80511edc3742-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:42.364775 master-0 kubenswrapper[26425]: I0217 15:46:42.364230 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d1286c3-3a70-4281-bbae-80511edc3742-kube-api-access-52mh4" (OuterVolumeSpecName: "kube-api-access-52mh4") pod "3d1286c3-3a70-4281-bbae-80511edc3742" (UID: "3d1286c3-3a70-4281-bbae-80511edc3742"). InnerVolumeSpecName "kube-api-access-52mh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:46:42.464539 master-0 kubenswrapper[26425]: I0217 15:46:42.464445 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52mh4\" (UniqueName: \"kubernetes.io/projected/3d1286c3-3a70-4281-bbae-80511edc3742-kube-api-access-52mh4\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:42.715914 master-0 kubenswrapper[26425]: I0217 15:46:42.715830 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tdkt8" event={"ID":"3d1286c3-3a70-4281-bbae-80511edc3742","Type":"ContainerDied","Data":"8dcf849f307722165a4c9c9524c132abc8426da5185306edf97c80849adedebc"} Feb 17 15:46:42.716670 master-0 kubenswrapper[26425]: I0217 15:46:42.715926 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dcf849f307722165a4c9c9524c132abc8426da5185306edf97c80849adedebc" Feb 17 15:46:42.716670 master-0 kubenswrapper[26425]: I0217 15:46:42.715853 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tdkt8" Feb 17 15:46:42.720545 master-0 kubenswrapper[26425]: I0217 15:46:42.720494 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"822f036575cc1e6a140f36edb744bf7c12aa89c8ea4dee060c80dff94c3d5e6a"} Feb 17 15:46:42.720701 master-0 kubenswrapper[26425]: I0217 15:46:42.720557 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"f6e50b205d59d90e5f6e09a969f781f391cec59be5887faa7d8b0b3c8acdcd73"} Feb 17 15:46:42.720701 master-0 kubenswrapper[26425]: I0217 15:46:42.720578 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"5498b22d97b94a6f31562b8889cffecff61d8390c96a952c3dc4aa9ddf569bc3"} Feb 17 15:46:43.169371 master-0 kubenswrapper[26425]: I0217 15:46:43.169332 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:43.284188 master-0 kubenswrapper[26425]: I0217 15:46:43.284098 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-log-ovn\") pod \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " Feb 17 15:46:43.284188 master-0 kubenswrapper[26425]: I0217 15:46:43.284198 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-run-ovn\") pod \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " Feb 17 15:46:43.284660 master-0 kubenswrapper[26425]: I0217 15:46:43.284209 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9ab6cc21-a2b1-4314-abf8-722c0be09ee9" (UID: "9ab6cc21-a2b1-4314-abf8-722c0be09ee9"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:46:43.284660 master-0 kubenswrapper[26425]: I0217 15:46:43.284315 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-run\") pod \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " Feb 17 15:46:43.284660 master-0 kubenswrapper[26425]: I0217 15:46:43.284381 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-additional-scripts\") pod \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " Feb 17 15:46:43.284660 master-0 kubenswrapper[26425]: I0217 15:46:43.284404 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9ab6cc21-a2b1-4314-abf8-722c0be09ee9" (UID: "9ab6cc21-a2b1-4314-abf8-722c0be09ee9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:46:43.284660 master-0 kubenswrapper[26425]: I0217 15:46:43.284473 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-run" (OuterVolumeSpecName: "var-run") pod "9ab6cc21-a2b1-4314-abf8-722c0be09ee9" (UID: "9ab6cc21-a2b1-4314-abf8-722c0be09ee9"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:46:43.284660 master-0 kubenswrapper[26425]: I0217 15:46:43.284518 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-scripts\") pod \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " Feb 17 15:46:43.284660 master-0 kubenswrapper[26425]: I0217 15:46:43.284575 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krjzv\" (UniqueName: \"kubernetes.io/projected/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-kube-api-access-krjzv\") pod \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\" (UID: \"9ab6cc21-a2b1-4314-abf8-722c0be09ee9\") " Feb 17 15:46:43.285108 master-0 kubenswrapper[26425]: I0217 15:46:43.285082 26425 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-run\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:43.285108 master-0 kubenswrapper[26425]: I0217 15:46:43.285107 26425 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:43.285246 master-0 kubenswrapper[26425]: I0217 15:46:43.285121 26425 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:43.285246 master-0 kubenswrapper[26425]: I0217 15:46:43.285114 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9ab6cc21-a2b1-4314-abf8-722c0be09ee9" (UID: "9ab6cc21-a2b1-4314-abf8-722c0be09ee9"). 
InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:43.286691 master-0 kubenswrapper[26425]: I0217 15:46:43.285890 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-scripts" (OuterVolumeSpecName: "scripts") pod "9ab6cc21-a2b1-4314-abf8-722c0be09ee9" (UID: "9ab6cc21-a2b1-4314-abf8-722c0be09ee9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:43.288113 master-0 kubenswrapper[26425]: I0217 15:46:43.288022 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-kube-api-access-krjzv" (OuterVolumeSpecName: "kube-api-access-krjzv") pod "9ab6cc21-a2b1-4314-abf8-722c0be09ee9" (UID: "9ab6cc21-a2b1-4314-abf8-722c0be09ee9"). InnerVolumeSpecName "kube-api-access-krjzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:46:43.387671 master-0 kubenswrapper[26425]: I0217 15:46:43.387547 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krjzv\" (UniqueName: \"kubernetes.io/projected/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-kube-api-access-krjzv\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:43.387671 master-0 kubenswrapper[26425]: I0217 15:46:43.387612 26425 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-additional-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:43.387671 master-0 kubenswrapper[26425]: I0217 15:46:43.387632 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab6cc21-a2b1-4314-abf8-722c0be09ee9-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:43.742101 master-0 kubenswrapper[26425]: I0217 15:46:43.741636 26425 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"35e3a0d627ab0f529f2040053f284871f7a604f0b23b09eb261e8fd0595bfeb9"} Feb 17 15:46:43.743814 master-0 kubenswrapper[26425]: I0217 15:46:43.743760 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hdbmn-config-6g5c8" event={"ID":"9ab6cc21-a2b1-4314-abf8-722c0be09ee9","Type":"ContainerDied","Data":"783abac86f3fcc756d66645c1e5314ff19eecec45cd5d2980871d2befd4994c1"} Feb 17 15:46:43.743919 master-0 kubenswrapper[26425]: I0217 15:46:43.743825 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="783abac86f3fcc756d66645c1e5314ff19eecec45cd5d2980871d2befd4994c1" Feb 17 15:46:43.743984 master-0 kubenswrapper[26425]: I0217 15:46:43.743930 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hdbmn-config-6g5c8" Feb 17 15:46:44.136308 master-0 kubenswrapper[26425]: I0217 15:46:44.136216 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hdbmn-config-6g5c8"] Feb 17 15:46:44.155749 master-0 kubenswrapper[26425]: I0217 15:46:44.155674 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hdbmn-config-6g5c8"] Feb 17 15:46:44.409146 master-0 kubenswrapper[26425]: I0217 15:46:44.409002 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ab6cc21-a2b1-4314-abf8-722c0be09ee9" path="/var/lib/kubelet/pods/9ab6cc21-a2b1-4314-abf8-722c0be09ee9/volumes" Feb 17 15:46:45.781569 master-0 kubenswrapper[26425]: I0217 15:46:45.781520 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"4b1d1c6b3941b696d30bb785a7af7c58e02bcf5e170c3874a51c733fbdb54a20"} Feb 17 15:46:45.781569 master-0 kubenswrapper[26425]: I0217 15:46:45.781570 26425 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"5a1c62b575abea29810dc8a95f480a6bd9aeae0e1d0e17a3b7cbcd3f19eafee6"} Feb 17 15:46:45.782591 master-0 kubenswrapper[26425]: I0217 15:46:45.781582 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"b59595b8c0231eab690c9d8c3ff88fb1a4849625376c039bc1a21819f124b811"} Feb 17 15:46:46.798207 master-0 kubenswrapper[26425]: I0217 15:46:46.798048 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"d384c85fe4b1bb94ba25f1607519aca6d180d5b6984299100be558fab155fb4f"} Feb 17 15:46:46.798207 master-0 kubenswrapper[26425]: I0217 15:46:46.798120 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"18dce042e783bccefbf6ea2c5b70599c72b332023359e5cb727f2f3f83137d91"} Feb 17 15:46:46.798207 master-0 kubenswrapper[26425]: I0217 15:46:46.798134 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"bf27000568dda64aebfb0d89e8d6a396aabba5a9d04af3c0344e6d4e9633e8eb"} Feb 17 15:46:46.798207 master-0 kubenswrapper[26425]: I0217 15:46:46.798146 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0a9aa702-781f-4cf7-88c9-3ff414265810","Type":"ContainerStarted","Data":"99d45c71c8d37e8c0393c5adbf526b632555585ad4ab9052b50c9b0343008484"} Feb 17 15:46:46.845816 master-0 kubenswrapper[26425]: I0217 15:46:46.845721 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" 
podStartSLOduration=21.807352917 podStartE2EDuration="35.845697889s" podCreationTimestamp="2026-02-17 15:46:11 +0000 UTC" firstStartedPulling="2026-02-17 15:46:30.563772272 +0000 UTC m=+1852.455496090" lastFinishedPulling="2026-02-17 15:46:44.602117254 +0000 UTC m=+1866.493841062" observedRunningTime="2026-02-17 15:46:46.838126128 +0000 UTC m=+1868.729849956" watchObservedRunningTime="2026-02-17 15:46:46.845697889 +0000 UTC m=+1868.737421707" Feb 17 15:46:47.162481 master-0 kubenswrapper[26425]: I0217 15:46:47.161534 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67dc4d787c-m7s4w"] Feb 17 15:46:47.162481 master-0 kubenswrapper[26425]: E0217 15:46:47.162046 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d1286c3-3a70-4281-bbae-80511edc3742" containerName="mariadb-account-create-update" Feb 17 15:46:47.162481 master-0 kubenswrapper[26425]: I0217 15:46:47.162059 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d1286c3-3a70-4281-bbae-80511edc3742" containerName="mariadb-account-create-update" Feb 17 15:46:47.162481 master-0 kubenswrapper[26425]: E0217 15:46:47.162122 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ab6cc21-a2b1-4314-abf8-722c0be09ee9" containerName="ovn-config" Feb 17 15:46:47.162481 master-0 kubenswrapper[26425]: I0217 15:46:47.162128 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ab6cc21-a2b1-4314-abf8-722c0be09ee9" containerName="ovn-config" Feb 17 15:46:47.162481 master-0 kubenswrapper[26425]: I0217 15:46:47.162334 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ab6cc21-a2b1-4314-abf8-722c0be09ee9" containerName="ovn-config" Feb 17 15:46:47.162481 master-0 kubenswrapper[26425]: I0217 15:46:47.162369 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d1286c3-3a70-4281-bbae-80511edc3742" containerName="mariadb-account-create-update" Feb 17 15:46:47.167481 master-0 kubenswrapper[26425]: I0217 
15:46:47.163514 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.167481 master-0 kubenswrapper[26425]: I0217 15:46:47.166893 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 17 15:46:47.175478 master-0 kubenswrapper[26425]: I0217 15:46:47.173387 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67dc4d787c-m7s4w"] Feb 17 15:46:47.273596 master-0 kubenswrapper[26425]: I0217 15:46:47.273508 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-dns-swift-storage-0\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.273596 master-0 kubenswrapper[26425]: I0217 15:46:47.273596 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc6bq\" (UniqueName: \"kubernetes.io/projected/35c74619-8f42-4b4a-91eb-8343d4467669-kube-api-access-pc6bq\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.273982 master-0 kubenswrapper[26425]: I0217 15:46:47.273932 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-ovsdbserver-sb\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.274041 master-0 kubenswrapper[26425]: I0217 15:46:47.274010 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-dns-svc\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.274340 master-0 kubenswrapper[26425]: I0217 15:46:47.274297 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-config\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.274404 master-0 kubenswrapper[26425]: I0217 15:46:47.274388 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-ovsdbserver-nb\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.376234 master-0 kubenswrapper[26425]: I0217 15:46:47.376187 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc6bq\" (UniqueName: \"kubernetes.io/projected/35c74619-8f42-4b4a-91eb-8343d4467669-kube-api-access-pc6bq\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.376420 master-0 kubenswrapper[26425]: I0217 15:46:47.376310 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-ovsdbserver-sb\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.376420 master-0 kubenswrapper[26425]: I0217 15:46:47.376331 26425 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-dns-svc\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.376634 master-0 kubenswrapper[26425]: I0217 15:46:47.376609 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-config\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.376674 master-0 kubenswrapper[26425]: I0217 15:46:47.376653 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-ovsdbserver-nb\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.377896 master-0 kubenswrapper[26425]: I0217 15:46:47.377473 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-config\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.377950 master-0 kubenswrapper[26425]: I0217 15:46:47.377492 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-dns-svc\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.377950 master-0 kubenswrapper[26425]: I0217 15:46:47.377866 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-ovsdbserver-sb\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.378039 master-0 kubenswrapper[26425]: I0217 15:46:47.378017 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-ovsdbserver-nb\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.378104 master-0 kubenswrapper[26425]: I0217 15:46:47.378084 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-dns-swift-storage-0\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.378794 master-0 kubenswrapper[26425]: I0217 15:46:47.378747 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-dns-swift-storage-0\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.398582 master-0 kubenswrapper[26425]: I0217 15:46:47.397676 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc6bq\" (UniqueName: \"kubernetes.io/projected/35c74619-8f42-4b4a-91eb-8343d4467669-kube-api-access-pc6bq\") pod \"dnsmasq-dns-67dc4d787c-m7s4w\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") " pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.506814 master-0 kubenswrapper[26425]: I0217 15:46:47.506706 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:47.821961 master-0 kubenswrapper[26425]: I0217 15:46:47.816094 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67dc4d787c-m7s4w"] Feb 17 15:46:47.842733 master-0 kubenswrapper[26425]: W0217 15:46:47.834731 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35c74619_8f42_4b4a_91eb_8343d4467669.slice/crio-8eec8be12401ffcddc6fc14ff4bf34f0db13ab6b70dbe63268e811a72ca147c1 WatchSource:0}: Error finding container 8eec8be12401ffcddc6fc14ff4bf34f0db13ab6b70dbe63268e811a72ca147c1: Status 404 returned error can't find the container with id 8eec8be12401ffcddc6fc14ff4bf34f0db13ab6b70dbe63268e811a72ca147c1 Feb 17 15:46:48.859240 master-0 kubenswrapper[26425]: I0217 15:46:48.859175 26425 generic.go:334] "Generic (PLEG): container finished" podID="35c74619-8f42-4b4a-91eb-8343d4467669" containerID="8f6277f9c9e0d8841862f39aa72e8def1d54619c9594f69e93e2ca109209a7be" exitCode=0 Feb 17 15:46:48.859240 master-0 kubenswrapper[26425]: I0217 15:46:48.859227 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" event={"ID":"35c74619-8f42-4b4a-91eb-8343d4467669","Type":"ContainerDied","Data":"8f6277f9c9e0d8841862f39aa72e8def1d54619c9594f69e93e2ca109209a7be"} Feb 17 15:46:48.859240 master-0 kubenswrapper[26425]: I0217 15:46:48.859253 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" event={"ID":"35c74619-8f42-4b4a-91eb-8343d4467669","Type":"ContainerStarted","Data":"8eec8be12401ffcddc6fc14ff4bf34f0db13ab6b70dbe63268e811a72ca147c1"} Feb 17 15:46:49.133818 master-0 kubenswrapper[26425]: I0217 15:46:49.133739 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 15:46:49.462133 master-0 kubenswrapper[26425]: I0217 15:46:49.462055 26425 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-5fmzp"] Feb 17 15:46:49.464139 master-0 kubenswrapper[26425]: I0217 15:46:49.464105 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5fmzp" Feb 17 15:46:49.480072 master-0 kubenswrapper[26425]: I0217 15:46:49.480036 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5fmzp"] Feb 17 15:46:49.539637 master-0 kubenswrapper[26425]: I0217 15:46:49.539569 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krkst\" (UniqueName: \"kubernetes.io/projected/cd12849e-ca95-4cf1-9374-46d1c8d4874b-kube-api-access-krkst\") pod \"cinder-db-create-5fmzp\" (UID: \"cd12849e-ca95-4cf1-9374-46d1c8d4874b\") " pod="openstack/cinder-db-create-5fmzp" Feb 17 15:46:49.539961 master-0 kubenswrapper[26425]: I0217 15:46:49.539921 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd12849e-ca95-4cf1-9374-46d1c8d4874b-operator-scripts\") pod \"cinder-db-create-5fmzp\" (UID: \"cd12849e-ca95-4cf1-9374-46d1c8d4874b\") " pod="openstack/cinder-db-create-5fmzp" Feb 17 15:46:49.555982 master-0 kubenswrapper[26425]: I0217 15:46:49.555929 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-be98-account-create-update-ccwpm"] Feb 17 15:46:49.557412 master-0 kubenswrapper[26425]: I0217 15:46:49.557383 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-be98-account-create-update-ccwpm" Feb 17 15:46:49.559641 master-0 kubenswrapper[26425]: I0217 15:46:49.559356 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 17 15:46:49.573088 master-0 kubenswrapper[26425]: I0217 15:46:49.573045 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-be98-account-create-update-ccwpm"] Feb 17 15:46:49.649112 master-0 kubenswrapper[26425]: I0217 15:46:49.649037 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa34f651-bd45-4add-b97d-8e5194d3edf0-operator-scripts\") pod \"cinder-be98-account-create-update-ccwpm\" (UID: \"aa34f651-bd45-4add-b97d-8e5194d3edf0\") " pod="openstack/cinder-be98-account-create-update-ccwpm" Feb 17 15:46:49.649321 master-0 kubenswrapper[26425]: I0217 15:46:49.649176 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krkst\" (UniqueName: \"kubernetes.io/projected/cd12849e-ca95-4cf1-9374-46d1c8d4874b-kube-api-access-krkst\") pod \"cinder-db-create-5fmzp\" (UID: \"cd12849e-ca95-4cf1-9374-46d1c8d4874b\") " pod="openstack/cinder-db-create-5fmzp" Feb 17 15:46:49.649321 master-0 kubenswrapper[26425]: I0217 15:46:49.649210 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd12849e-ca95-4cf1-9374-46d1c8d4874b-operator-scripts\") pod \"cinder-db-create-5fmzp\" (UID: \"cd12849e-ca95-4cf1-9374-46d1c8d4874b\") " pod="openstack/cinder-db-create-5fmzp" Feb 17 15:46:49.649503 master-0 kubenswrapper[26425]: I0217 15:46:49.649464 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqwcw\" (UniqueName: \"kubernetes.io/projected/aa34f651-bd45-4add-b97d-8e5194d3edf0-kube-api-access-qqwcw\") 
pod \"cinder-be98-account-create-update-ccwpm\" (UID: \"aa34f651-bd45-4add-b97d-8e5194d3edf0\") " pod="openstack/cinder-be98-account-create-update-ccwpm" Feb 17 15:46:49.650212 master-0 kubenswrapper[26425]: I0217 15:46:49.650179 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd12849e-ca95-4cf1-9374-46d1c8d4874b-operator-scripts\") pod \"cinder-db-create-5fmzp\" (UID: \"cd12849e-ca95-4cf1-9374-46d1c8d4874b\") " pod="openstack/cinder-db-create-5fmzp" Feb 17 15:46:49.665209 master-0 kubenswrapper[26425]: I0217 15:46:49.665098 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krkst\" (UniqueName: \"kubernetes.io/projected/cd12849e-ca95-4cf1-9374-46d1c8d4874b-kube-api-access-krkst\") pod \"cinder-db-create-5fmzp\" (UID: \"cd12849e-ca95-4cf1-9374-46d1c8d4874b\") " pod="openstack/cinder-db-create-5fmzp" Feb 17 15:46:49.750624 master-0 kubenswrapper[26425]: I0217 15:46:49.750549 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-g9g6p"] Feb 17 15:46:49.751730 master-0 kubenswrapper[26425]: I0217 15:46:49.751679 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa34f651-bd45-4add-b97d-8e5194d3edf0-operator-scripts\") pod \"cinder-be98-account-create-update-ccwpm\" (UID: \"aa34f651-bd45-4add-b97d-8e5194d3edf0\") " pod="openstack/cinder-be98-account-create-update-ccwpm" Feb 17 15:46:49.752349 master-0 kubenswrapper[26425]: I0217 15:46:49.752289 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-g9g6p" Feb 17 15:46:49.752536 master-0 kubenswrapper[26425]: I0217 15:46:49.752474 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa34f651-bd45-4add-b97d-8e5194d3edf0-operator-scripts\") pod \"cinder-be98-account-create-update-ccwpm\" (UID: \"aa34f651-bd45-4add-b97d-8e5194d3edf0\") " pod="openstack/cinder-be98-account-create-update-ccwpm" Feb 17 15:46:49.752689 master-0 kubenswrapper[26425]: I0217 15:46:49.752540 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqwcw\" (UniqueName: \"kubernetes.io/projected/aa34f651-bd45-4add-b97d-8e5194d3edf0-kube-api-access-qqwcw\") pod \"cinder-be98-account-create-update-ccwpm\" (UID: \"aa34f651-bd45-4add-b97d-8e5194d3edf0\") " pod="openstack/cinder-be98-account-create-update-ccwpm" Feb 17 15:46:49.762197 master-0 kubenswrapper[26425]: I0217 15:46:49.762131 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-g9g6p"] Feb 17 15:46:49.777765 master-0 kubenswrapper[26425]: I0217 15:46:49.777724 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqwcw\" (UniqueName: \"kubernetes.io/projected/aa34f651-bd45-4add-b97d-8e5194d3edf0-kube-api-access-qqwcw\") pod \"cinder-be98-account-create-update-ccwpm\" (UID: \"aa34f651-bd45-4add-b97d-8e5194d3edf0\") " pod="openstack/cinder-be98-account-create-update-ccwpm" Feb 17 15:46:49.796592 master-0 kubenswrapper[26425]: I0217 15:46:49.793708 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-5fmzp" Feb 17 15:46:49.842967 master-0 kubenswrapper[26425]: I0217 15:46:49.842902 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-dqtpw"] Feb 17 15:46:49.844498 master-0 kubenswrapper[26425]: I0217 15:46:49.844474 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dqtpw" Feb 17 15:46:49.851321 master-0 kubenswrapper[26425]: I0217 15:46:49.850578 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 15:46:49.851321 master-0 kubenswrapper[26425]: I0217 15:46:49.850902 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 15:46:49.851321 master-0 kubenswrapper[26425]: I0217 15:46:49.851034 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 15:46:49.855059 master-0 kubenswrapper[26425]: I0217 15:46:49.854998 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2fzl\" (UniqueName: \"kubernetes.io/projected/733a610a-8f50-42dd-b159-6fd6a8959971-kube-api-access-c2fzl\") pod \"neutron-db-create-g9g6p\" (UID: \"733a610a-8f50-42dd-b159-6fd6a8959971\") " pod="openstack/neutron-db-create-g9g6p" Feb 17 15:46:49.855059 master-0 kubenswrapper[26425]: I0217 15:46:49.855055 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/733a610a-8f50-42dd-b159-6fd6a8959971-operator-scripts\") pod \"neutron-db-create-g9g6p\" (UID: \"733a610a-8f50-42dd-b159-6fd6a8959971\") " pod="openstack/neutron-db-create-g9g6p" Feb 17 15:46:49.863720 master-0 kubenswrapper[26425]: I0217 15:46:49.863675 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dqtpw"] Feb 17 15:46:49.873083 master-0 
kubenswrapper[26425]: I0217 15:46:49.872874 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" event={"ID":"35c74619-8f42-4b4a-91eb-8343d4467669","Type":"ContainerStarted","Data":"0782b30e859b5fc0407cb775aa4db0fa1dc3026be61690452f75aad0ea7e56c4"} Feb 17 15:46:49.873285 master-0 kubenswrapper[26425]: I0217 15:46:49.873086 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" Feb 17 15:46:49.876205 master-0 kubenswrapper[26425]: I0217 15:46:49.875658 26425 generic.go:334] "Generic (PLEG): container finished" podID="b8d86a11-7897-4196-93bb-916b7472a6e0" containerID="794618c6172b3a35078743fa3aa977e50d16860b106a0b47f63fa9f15f882539" exitCode=0 Feb 17 15:46:49.876205 master-0 kubenswrapper[26425]: I0217 15:46:49.875697 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-88f2d" event={"ID":"b8d86a11-7897-4196-93bb-916b7472a6e0","Type":"ContainerDied","Data":"794618c6172b3a35078743fa3aa977e50d16860b106a0b47f63fa9f15f882539"} Feb 17 15:46:49.877952 master-0 kubenswrapper[26425]: I0217 15:46:49.877916 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-406d-account-create-update-qv9dz"] Feb 17 15:46:49.879197 master-0 kubenswrapper[26425]: I0217 15:46:49.879172 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-406d-account-create-update-qv9dz" Feb 17 15:46:49.885707 master-0 kubenswrapper[26425]: I0217 15:46:49.882765 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 17 15:46:49.915918 master-0 kubenswrapper[26425]: I0217 15:46:49.915629 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-406d-account-create-update-qv9dz"] Feb 17 15:46:49.918198 master-0 kubenswrapper[26425]: I0217 15:46:49.918034 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-be98-account-create-update-ccwpm" Feb 17 15:46:49.931278 master-0 kubenswrapper[26425]: I0217 15:46:49.931023 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" podStartSLOduration=2.931004492 podStartE2EDuration="2.931004492s" podCreationTimestamp="2026-02-17 15:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:46:49.889281621 +0000 UTC m=+1871.781005459" watchObservedRunningTime="2026-02-17 15:46:49.931004492 +0000 UTC m=+1871.822728300" Feb 17 15:46:49.976523 master-0 kubenswrapper[26425]: I0217 15:46:49.975736 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-combined-ca-bundle\") pod \"keystone-db-sync-dqtpw\" (UID: \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\") " pod="openstack/keystone-db-sync-dqtpw" Feb 17 15:46:49.976523 master-0 kubenswrapper[26425]: I0217 15:46:49.975808 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2fzl\" (UniqueName: \"kubernetes.io/projected/733a610a-8f50-42dd-b159-6fd6a8959971-kube-api-access-c2fzl\") pod \"neutron-db-create-g9g6p\" (UID: \"733a610a-8f50-42dd-b159-6fd6a8959971\") " pod="openstack/neutron-db-create-g9g6p" Feb 17 15:46:49.976523 master-0 kubenswrapper[26425]: I0217 15:46:49.975839 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/733a610a-8f50-42dd-b159-6fd6a8959971-operator-scripts\") pod \"neutron-db-create-g9g6p\" (UID: \"733a610a-8f50-42dd-b159-6fd6a8959971\") " pod="openstack/neutron-db-create-g9g6p" Feb 17 15:46:49.976523 master-0 kubenswrapper[26425]: I0217 15:46:49.975866 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1197573-c94d-4d3c-9cd1-01b65ef0ec42-operator-scripts\") pod \"neutron-406d-account-create-update-qv9dz\" (UID: \"e1197573-c94d-4d3c-9cd1-01b65ef0ec42\") " pod="openstack/neutron-406d-account-create-update-qv9dz" Feb 17 15:46:49.976523 master-0 kubenswrapper[26425]: I0217 15:46:49.975885 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-config-data\") pod \"keystone-db-sync-dqtpw\" (UID: \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\") " pod="openstack/keystone-db-sync-dqtpw" Feb 17 15:46:49.976523 master-0 kubenswrapper[26425]: I0217 15:46:49.975955 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4vhn\" (UniqueName: \"kubernetes.io/projected/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-kube-api-access-b4vhn\") pod \"keystone-db-sync-dqtpw\" (UID: \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\") " pod="openstack/keystone-db-sync-dqtpw" Feb 17 15:46:49.976523 master-0 kubenswrapper[26425]: I0217 15:46:49.976006 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrgqp\" (UniqueName: \"kubernetes.io/projected/e1197573-c94d-4d3c-9cd1-01b65ef0ec42-kube-api-access-zrgqp\") pod \"neutron-406d-account-create-update-qv9dz\" (UID: \"e1197573-c94d-4d3c-9cd1-01b65ef0ec42\") " pod="openstack/neutron-406d-account-create-update-qv9dz" Feb 17 15:46:49.977320 master-0 kubenswrapper[26425]: I0217 15:46:49.977165 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/733a610a-8f50-42dd-b159-6fd6a8959971-operator-scripts\") pod \"neutron-db-create-g9g6p\" (UID: \"733a610a-8f50-42dd-b159-6fd6a8959971\") " 
pod="openstack/neutron-db-create-g9g6p" Feb 17 15:46:50.073327 master-0 kubenswrapper[26425]: I0217 15:46:50.073288 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2fzl\" (UniqueName: \"kubernetes.io/projected/733a610a-8f50-42dd-b159-6fd6a8959971-kube-api-access-c2fzl\") pod \"neutron-db-create-g9g6p\" (UID: \"733a610a-8f50-42dd-b159-6fd6a8959971\") " pod="openstack/neutron-db-create-g9g6p" Feb 17 15:46:50.089493 master-0 kubenswrapper[26425]: I0217 15:46:50.077681 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1197573-c94d-4d3c-9cd1-01b65ef0ec42-operator-scripts\") pod \"neutron-406d-account-create-update-qv9dz\" (UID: \"e1197573-c94d-4d3c-9cd1-01b65ef0ec42\") " pod="openstack/neutron-406d-account-create-update-qv9dz" Feb 17 15:46:50.089493 master-0 kubenswrapper[26425]: I0217 15:46:50.077753 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-config-data\") pod \"keystone-db-sync-dqtpw\" (UID: \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\") " pod="openstack/keystone-db-sync-dqtpw" Feb 17 15:46:50.089493 master-0 kubenswrapper[26425]: I0217 15:46:50.077830 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4vhn\" (UniqueName: \"kubernetes.io/projected/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-kube-api-access-b4vhn\") pod \"keystone-db-sync-dqtpw\" (UID: \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\") " pod="openstack/keystone-db-sync-dqtpw" Feb 17 15:46:50.089493 master-0 kubenswrapper[26425]: I0217 15:46:50.077895 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrgqp\" (UniqueName: \"kubernetes.io/projected/e1197573-c94d-4d3c-9cd1-01b65ef0ec42-kube-api-access-zrgqp\") pod \"neutron-406d-account-create-update-qv9dz\" (UID: 
\"e1197573-c94d-4d3c-9cd1-01b65ef0ec42\") " pod="openstack/neutron-406d-account-create-update-qv9dz" Feb 17 15:46:50.089493 master-0 kubenswrapper[26425]: I0217 15:46:50.078050 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-combined-ca-bundle\") pod \"keystone-db-sync-dqtpw\" (UID: \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\") " pod="openstack/keystone-db-sync-dqtpw" Feb 17 15:46:50.090184 master-0 kubenswrapper[26425]: I0217 15:46:50.090143 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-combined-ca-bundle\") pod \"keystone-db-sync-dqtpw\" (UID: \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\") " pod="openstack/keystone-db-sync-dqtpw" Feb 17 15:46:50.090649 master-0 kubenswrapper[26425]: I0217 15:46:50.090615 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-g9g6p" Feb 17 15:46:50.091479 master-0 kubenswrapper[26425]: I0217 15:46:50.091421 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-config-data\") pod \"keystone-db-sync-dqtpw\" (UID: \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\") " pod="openstack/keystone-db-sync-dqtpw" Feb 17 15:46:50.091732 master-0 kubenswrapper[26425]: I0217 15:46:50.091705 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1197573-c94d-4d3c-9cd1-01b65ef0ec42-operator-scripts\") pod \"neutron-406d-account-create-update-qv9dz\" (UID: \"e1197573-c94d-4d3c-9cd1-01b65ef0ec42\") " pod="openstack/neutron-406d-account-create-update-qv9dz" Feb 17 15:46:50.142564 master-0 kubenswrapper[26425]: I0217 15:46:50.141439 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4vhn\" (UniqueName: \"kubernetes.io/projected/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-kube-api-access-b4vhn\") pod \"keystone-db-sync-dqtpw\" (UID: \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\") " pod="openstack/keystone-db-sync-dqtpw" Feb 17 15:46:50.192553 master-0 kubenswrapper[26425]: I0217 15:46:50.178908 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrgqp\" (UniqueName: \"kubernetes.io/projected/e1197573-c94d-4d3c-9cd1-01b65ef0ec42-kube-api-access-zrgqp\") pod \"neutron-406d-account-create-update-qv9dz\" (UID: \"e1197573-c94d-4d3c-9cd1-01b65ef0ec42\") " pod="openstack/neutron-406d-account-create-update-qv9dz" Feb 17 15:46:50.237762 master-0 kubenswrapper[26425]: I0217 15:46:50.237683 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-dqtpw" Feb 17 15:46:50.351168 master-0 kubenswrapper[26425]: I0217 15:46:50.344506 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-406d-account-create-update-qv9dz" Feb 17 15:46:50.357794 master-0 kubenswrapper[26425]: I0217 15:46:50.357735 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5fmzp"] Feb 17 15:46:50.593388 master-0 kubenswrapper[26425]: I0217 15:46:50.593326 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-be98-account-create-update-ccwpm"] Feb 17 15:46:50.800880 master-0 kubenswrapper[26425]: W0217 15:46:50.797273 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5d5e735_50f8_40f8_b410_7bf5d95fadc4.slice/crio-547a38f967edcc10e771fec230fd6452a0f7a0e254015cb205156497674cc471 WatchSource:0}: Error finding container 547a38f967edcc10e771fec230fd6452a0f7a0e254015cb205156497674cc471: Status 404 returned error can't find the container with id 547a38f967edcc10e771fec230fd6452a0f7a0e254015cb205156497674cc471 Feb 17 15:46:50.802291 master-0 kubenswrapper[26425]: I0217 15:46:50.802181 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-g9g6p"] Feb 17 15:46:50.815588 master-0 kubenswrapper[26425]: I0217 15:46:50.815531 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dqtpw"] Feb 17 15:46:50.905006 master-0 kubenswrapper[26425]: I0217 15:46:50.904951 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g9g6p" event={"ID":"733a610a-8f50-42dd-b159-6fd6a8959971","Type":"ContainerStarted","Data":"b99fe484ab8385b20d7906f35ed06a2907f9cc83855d5cf135061bb2e61cadaf"} Feb 17 15:46:50.908743 master-0 kubenswrapper[26425]: I0217 15:46:50.908661 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-be98-account-create-update-ccwpm" event={"ID":"aa34f651-bd45-4add-b97d-8e5194d3edf0","Type":"ContainerStarted","Data":"31f2a6139bed35247d0a6a1a5a552b455cd60d3a87b66f4b614590716ba8f863"} Feb 17 15:46:50.909030 master-0 kubenswrapper[26425]: I0217 15:46:50.909012 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-be98-account-create-update-ccwpm" event={"ID":"aa34f651-bd45-4add-b97d-8e5194d3edf0","Type":"ContainerStarted","Data":"16868686575f9b92e0721c57315ba74ae58d87b6d2699db850bbecabb61e9a1b"} Feb 17 15:46:50.912302 master-0 kubenswrapper[26425]: I0217 15:46:50.911111 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dqtpw" event={"ID":"a5d5e735-50f8-40f8-b410-7bf5d95fadc4","Type":"ContainerStarted","Data":"547a38f967edcc10e771fec230fd6452a0f7a0e254015cb205156497674cc471"} Feb 17 15:46:50.914318 master-0 kubenswrapper[26425]: I0217 15:46:50.912618 26425 generic.go:334] "Generic (PLEG): container finished" podID="cd12849e-ca95-4cf1-9374-46d1c8d4874b" containerID="c18bde93643a192a077a7501dcab7eb4d7e938b2b97de7ab6e3d53fe8f9d7add" exitCode=0 Feb 17 15:46:50.914318 master-0 kubenswrapper[26425]: I0217 15:46:50.913275 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5fmzp" event={"ID":"cd12849e-ca95-4cf1-9374-46d1c8d4874b","Type":"ContainerDied","Data":"c18bde93643a192a077a7501dcab7eb4d7e938b2b97de7ab6e3d53fe8f9d7add"} Feb 17 15:46:50.914318 master-0 kubenswrapper[26425]: I0217 15:46:50.913300 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5fmzp" event={"ID":"cd12849e-ca95-4cf1-9374-46d1c8d4874b","Type":"ContainerStarted","Data":"8a92da21e534787332e1ab6f1bfe9483565192db9ad8d799de40b7994c74396f"} Feb 17 15:46:50.944369 master-0 kubenswrapper[26425]: I0217 15:46:50.943427 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-be98-account-create-update-ccwpm" 
podStartSLOduration=1.94340823 podStartE2EDuration="1.94340823s" podCreationTimestamp="2026-02-17 15:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:46:50.942952749 +0000 UTC m=+1872.834676587" watchObservedRunningTime="2026-02-17 15:46:50.94340823 +0000 UTC m=+1872.835132048" Feb 17 15:46:51.011950 master-0 kubenswrapper[26425]: W0217 15:46:51.011897 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1197573_c94d_4d3c_9cd1_01b65ef0ec42.slice/crio-e11bf0900df8f146a43ceab4eb50067d908e55ca3aa817ba4c3b0702bc601505 WatchSource:0}: Error finding container e11bf0900df8f146a43ceab4eb50067d908e55ca3aa817ba4c3b0702bc601505: Status 404 returned error can't find the container with id e11bf0900df8f146a43ceab4eb50067d908e55ca3aa817ba4c3b0702bc601505 Feb 17 15:46:51.041896 master-0 kubenswrapper[26425]: I0217 15:46:51.041781 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-406d-account-create-update-qv9dz"] Feb 17 15:46:51.486044 master-0 kubenswrapper[26425]: I0217 15:46:51.485706 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:51.627313 master-0 kubenswrapper[26425]: I0217 15:46:51.627251 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-config-data\") pod \"b8d86a11-7897-4196-93bb-916b7472a6e0\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " Feb 17 15:46:51.627556 master-0 kubenswrapper[26425]: I0217 15:46:51.627333 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-db-sync-config-data\") pod \"b8d86a11-7897-4196-93bb-916b7472a6e0\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " Feb 17 15:46:51.627556 master-0 kubenswrapper[26425]: I0217 15:46:51.627361 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-combined-ca-bundle\") pod \"b8d86a11-7897-4196-93bb-916b7472a6e0\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " Feb 17 15:46:51.627897 master-0 kubenswrapper[26425]: I0217 15:46:51.627846 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sksdh\" (UniqueName: \"kubernetes.io/projected/b8d86a11-7897-4196-93bb-916b7472a6e0-kube-api-access-sksdh\") pod \"b8d86a11-7897-4196-93bb-916b7472a6e0\" (UID: \"b8d86a11-7897-4196-93bb-916b7472a6e0\") " Feb 17 15:46:51.630666 master-0 kubenswrapper[26425]: I0217 15:46:51.630596 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b8d86a11-7897-4196-93bb-916b7472a6e0" (UID: "b8d86a11-7897-4196-93bb-916b7472a6e0"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:46:51.630997 master-0 kubenswrapper[26425]: I0217 15:46:51.630961 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d86a11-7897-4196-93bb-916b7472a6e0-kube-api-access-sksdh" (OuterVolumeSpecName: "kube-api-access-sksdh") pod "b8d86a11-7897-4196-93bb-916b7472a6e0" (UID: "b8d86a11-7897-4196-93bb-916b7472a6e0"). InnerVolumeSpecName "kube-api-access-sksdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:46:51.661854 master-0 kubenswrapper[26425]: I0217 15:46:51.661800 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8d86a11-7897-4196-93bb-916b7472a6e0" (UID: "b8d86a11-7897-4196-93bb-916b7472a6e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:46:51.700184 master-0 kubenswrapper[26425]: I0217 15:46:51.700125 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-config-data" (OuterVolumeSpecName: "config-data") pod "b8d86a11-7897-4196-93bb-916b7472a6e0" (UID: "b8d86a11-7897-4196-93bb-916b7472a6e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:46:51.729962 master-0 kubenswrapper[26425]: I0217 15:46:51.729885 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sksdh\" (UniqueName: \"kubernetes.io/projected/b8d86a11-7897-4196-93bb-916b7472a6e0-kube-api-access-sksdh\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:51.729962 master-0 kubenswrapper[26425]: I0217 15:46:51.729932 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:51.729962 master-0 kubenswrapper[26425]: I0217 15:46:51.729944 26425 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:51.729962 master-0 kubenswrapper[26425]: I0217 15:46:51.729954 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d86a11-7897-4196-93bb-916b7472a6e0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:46:51.931804 master-0 kubenswrapper[26425]: I0217 15:46:51.931761 26425 generic.go:334] "Generic (PLEG): container finished" podID="733a610a-8f50-42dd-b159-6fd6a8959971" containerID="01d846df3825403741424ac8d5b758b4d902dac12f4306f3c9e14b5b2d1cb982" exitCode=0 Feb 17 15:46:51.932209 master-0 kubenswrapper[26425]: I0217 15:46:51.931823 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g9g6p" event={"ID":"733a610a-8f50-42dd-b159-6fd6a8959971","Type":"ContainerDied","Data":"01d846df3825403741424ac8d5b758b4d902dac12f4306f3c9e14b5b2d1cb982"} Feb 17 15:46:51.938758 master-0 kubenswrapper[26425]: I0217 15:46:51.938706 26425 generic.go:334] "Generic (PLEG): container finished" podID="aa34f651-bd45-4add-b97d-8e5194d3edf0" 
containerID="31f2a6139bed35247d0a6a1a5a552b455cd60d3a87b66f4b614590716ba8f863" exitCode=0 Feb 17 15:46:51.938990 master-0 kubenswrapper[26425]: I0217 15:46:51.938869 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-be98-account-create-update-ccwpm" event={"ID":"aa34f651-bd45-4add-b97d-8e5194d3edf0","Type":"ContainerDied","Data":"31f2a6139bed35247d0a6a1a5a552b455cd60d3a87b66f4b614590716ba8f863"} Feb 17 15:46:51.951922 master-0 kubenswrapper[26425]: I0217 15:46:51.951866 26425 generic.go:334] "Generic (PLEG): container finished" podID="e1197573-c94d-4d3c-9cd1-01b65ef0ec42" containerID="1171e5fc788282f957d4efe533b462c16525a08e6b73dd4360a7fc8e3081d216" exitCode=0 Feb 17 15:46:51.952067 master-0 kubenswrapper[26425]: I0217 15:46:51.951963 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-406d-account-create-update-qv9dz" event={"ID":"e1197573-c94d-4d3c-9cd1-01b65ef0ec42","Type":"ContainerDied","Data":"1171e5fc788282f957d4efe533b462c16525a08e6b73dd4360a7fc8e3081d216"} Feb 17 15:46:51.952067 master-0 kubenswrapper[26425]: I0217 15:46:51.952054 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-406d-account-create-update-qv9dz" event={"ID":"e1197573-c94d-4d3c-9cd1-01b65ef0ec42","Type":"ContainerStarted","Data":"e11bf0900df8f146a43ceab4eb50067d908e55ca3aa817ba4c3b0702bc601505"} Feb 17 15:46:51.954472 master-0 kubenswrapper[26425]: I0217 15:46:51.954420 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-88f2d" Feb 17 15:46:51.955646 master-0 kubenswrapper[26425]: I0217 15:46:51.955605 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-88f2d" event={"ID":"b8d86a11-7897-4196-93bb-916b7472a6e0","Type":"ContainerDied","Data":"b8848870af1b7c03deba06d9ffc87a5e7829c67eed1989651426e58f4ad6c4ba"} Feb 17 15:46:51.955712 master-0 kubenswrapper[26425]: I0217 15:46:51.955646 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8848870af1b7c03deba06d9ffc87a5e7829c67eed1989651426e58f4ad6c4ba" Feb 17 15:46:52.447228 master-0 kubenswrapper[26425]: I0217 15:46:52.447174 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5fmzp" Feb 17 15:46:52.479670 master-0 kubenswrapper[26425]: I0217 15:46:52.477086 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67dc4d787c-m7s4w"] Feb 17 15:46:52.479670 master-0 kubenswrapper[26425]: I0217 15:46:52.477142 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-676f54c559-bfcw7"] Feb 17 15:46:52.479670 master-0 kubenswrapper[26425]: I0217 15:46:52.477428 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" podUID="35c74619-8f42-4b4a-91eb-8343d4467669" containerName="dnsmasq-dns" containerID="cri-o://0782b30e859b5fc0407cb775aa4db0fa1dc3026be61690452f75aad0ea7e56c4" gracePeriod=10 Feb 17 15:46:52.479670 master-0 kubenswrapper[26425]: E0217 15:46:52.478018 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd12849e-ca95-4cf1-9374-46d1c8d4874b" containerName="mariadb-database-create" Feb 17 15:46:52.479670 master-0 kubenswrapper[26425]: I0217 15:46:52.478055 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12849e-ca95-4cf1-9374-46d1c8d4874b" containerName="mariadb-database-create" Feb 17 15:46:52.479670 
master-0 kubenswrapper[26425]: E0217 15:46:52.478073 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d86a11-7897-4196-93bb-916b7472a6e0" containerName="glance-db-sync" Feb 17 15:46:52.479670 master-0 kubenswrapper[26425]: I0217 15:46:52.478081 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d86a11-7897-4196-93bb-916b7472a6e0" containerName="glance-db-sync" Feb 17 15:46:52.479670 master-0 kubenswrapper[26425]: I0217 15:46:52.478351 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8d86a11-7897-4196-93bb-916b7472a6e0" containerName="glance-db-sync" Feb 17 15:46:52.479670 master-0 kubenswrapper[26425]: I0217 15:46:52.478402 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd12849e-ca95-4cf1-9374-46d1c8d4874b" containerName="mariadb-database-create" Feb 17 15:46:52.481913 master-0 kubenswrapper[26425]: I0217 15:46:52.481051 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-676f54c559-bfcw7"] Feb 17 15:46:52.481913 master-0 kubenswrapper[26425]: I0217 15:46:52.481170 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-676f54c559-bfcw7" Feb 17 15:46:52.576649 master-0 kubenswrapper[26425]: I0217 15:46:52.576282 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd12849e-ca95-4cf1-9374-46d1c8d4874b-operator-scripts\") pod \"cd12849e-ca95-4cf1-9374-46d1c8d4874b\" (UID: \"cd12849e-ca95-4cf1-9374-46d1c8d4874b\") " Feb 17 15:46:52.576649 master-0 kubenswrapper[26425]: I0217 15:46:52.576572 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krkst\" (UniqueName: \"kubernetes.io/projected/cd12849e-ca95-4cf1-9374-46d1c8d4874b-kube-api-access-krkst\") pod \"cd12849e-ca95-4cf1-9374-46d1c8d4874b\" (UID: \"cd12849e-ca95-4cf1-9374-46d1c8d4874b\") " Feb 17 15:46:52.576891 master-0 kubenswrapper[26425]: I0217 15:46:52.576868 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-config\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7" Feb 17 15:46:52.576928 master-0 kubenswrapper[26425]: I0217 15:46:52.576892 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-ovsdbserver-nb\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7" Feb 17 15:46:52.576960 master-0 kubenswrapper[26425]: I0217 15:46:52.576906 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd12849e-ca95-4cf1-9374-46d1c8d4874b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd12849e-ca95-4cf1-9374-46d1c8d4874b" (UID: 
"cd12849e-ca95-4cf1-9374-46d1c8d4874b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:46:52.577142 master-0 kubenswrapper[26425]: I0217 15:46:52.577105 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rplz9\" (UniqueName: \"kubernetes.io/projected/e6b74389-6837-4f8d-8bd0-874f966d48cc-kube-api-access-rplz9\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7" Feb 17 15:46:52.577196 master-0 kubenswrapper[26425]: I0217 15:46:52.577179 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-dns-swift-storage-0\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7" Feb 17 15:46:52.577391 master-0 kubenswrapper[26425]: I0217 15:46:52.577363 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-ovsdbserver-sb\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7" Feb 17 15:46:52.577445 master-0 kubenswrapper[26425]: I0217 15:46:52.577428 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-dns-svc\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7" Feb 17 15:46:52.577662 master-0 kubenswrapper[26425]: I0217 15:46:52.577644 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/cd12849e-ca95-4cf1-9374-46d1c8d4874b-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 17 15:46:52.581529 master-0 kubenswrapper[26425]: I0217 15:46:52.579871 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd12849e-ca95-4cf1-9374-46d1c8d4874b-kube-api-access-krkst" (OuterVolumeSpecName: "kube-api-access-krkst") pod "cd12849e-ca95-4cf1-9374-46d1c8d4874b" (UID: "cd12849e-ca95-4cf1-9374-46d1c8d4874b"). InnerVolumeSpecName "kube-api-access-krkst". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:46:52.688056 master-0 kubenswrapper[26425]: I0217 15:46:52.688002 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-config\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:46:52.688175 master-0 kubenswrapper[26425]: I0217 15:46:52.688060 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-ovsdbserver-nb\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:46:52.688175 master-0 kubenswrapper[26425]: I0217 15:46:52.688130 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rplz9\" (UniqueName: \"kubernetes.io/projected/e6b74389-6837-4f8d-8bd0-874f966d48cc-kube-api-access-rplz9\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:46:52.688175 master-0 kubenswrapper[26425]: I0217 15:46:52.688158 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-dns-swift-storage-0\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:46:52.688298 master-0 kubenswrapper[26425]: I0217 15:46:52.688223 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-ovsdbserver-sb\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:46:52.688298 master-0 kubenswrapper[26425]: I0217 15:46:52.688252 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-dns-svc\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:46:52.688374 master-0 kubenswrapper[26425]: I0217 15:46:52.688318 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krkst\" (UniqueName: \"kubernetes.io/projected/cd12849e-ca95-4cf1-9374-46d1c8d4874b-kube-api-access-krkst\") on node \"master-0\" DevicePath \"\""
Feb 17 15:46:52.689479 master-0 kubenswrapper[26425]: I0217 15:46:52.689438 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-dns-svc\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:46:52.696555 master-0 kubenswrapper[26425]: I0217 15:46:52.696493 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-config\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:46:52.697251 master-0 kubenswrapper[26425]: I0217 15:46:52.697216 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-ovsdbserver-nb\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:46:52.698722 master-0 kubenswrapper[26425]: I0217 15:46:52.698688 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-dns-swift-storage-0\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:46:52.700006 master-0 kubenswrapper[26425]: I0217 15:46:52.699973 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-ovsdbserver-sb\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:46:52.724861 master-0 kubenswrapper[26425]: I0217 15:46:52.724813 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rplz9\" (UniqueName: \"kubernetes.io/projected/e6b74389-6837-4f8d-8bd0-874f966d48cc-kube-api-access-rplz9\") pod \"dnsmasq-dns-676f54c559-bfcw7\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:46:52.860224 master-0 kubenswrapper[26425]: I0217 15:46:52.860090 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:46:52.974278 master-0 kubenswrapper[26425]: I0217 15:46:52.974187 26425 generic.go:334] "Generic (PLEG): container finished" podID="35c74619-8f42-4b4a-91eb-8343d4467669" containerID="0782b30e859b5fc0407cb775aa4db0fa1dc3026be61690452f75aad0ea7e56c4" exitCode=0
Feb 17 15:46:52.974798 master-0 kubenswrapper[26425]: I0217 15:46:52.974373 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" event={"ID":"35c74619-8f42-4b4a-91eb-8343d4467669","Type":"ContainerDied","Data":"0782b30e859b5fc0407cb775aa4db0fa1dc3026be61690452f75aad0ea7e56c4"}
Feb 17 15:46:52.974798 master-0 kubenswrapper[26425]: I0217 15:46:52.974418 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w" event={"ID":"35c74619-8f42-4b4a-91eb-8343d4467669","Type":"ContainerDied","Data":"8eec8be12401ffcddc6fc14ff4bf34f0db13ab6b70dbe63268e811a72ca147c1"}
Feb 17 15:46:52.974798 master-0 kubenswrapper[26425]: I0217 15:46:52.974434 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eec8be12401ffcddc6fc14ff4bf34f0db13ab6b70dbe63268e811a72ca147c1"
Feb 17 15:46:52.983504 master-0 kubenswrapper[26425]: I0217 15:46:52.983414 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5fmzp"
Feb 17 15:46:52.984812 master-0 kubenswrapper[26425]: I0217 15:46:52.984685 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5fmzp" event={"ID":"cd12849e-ca95-4cf1-9374-46d1c8d4874b","Type":"ContainerDied","Data":"8a92da21e534787332e1ab6f1bfe9483565192db9ad8d799de40b7994c74396f"}
Feb 17 15:46:52.985109 master-0 kubenswrapper[26425]: I0217 15:46:52.985005 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a92da21e534787332e1ab6f1bfe9483565192db9ad8d799de40b7994c74396f"
Feb 17 15:46:53.076900 master-0 kubenswrapper[26425]: I0217 15:46:53.075445 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w"
Feb 17 15:46:53.206570 master-0 kubenswrapper[26425]: I0217 15:46:53.198617 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-config\") pod \"35c74619-8f42-4b4a-91eb-8343d4467669\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") "
Feb 17 15:46:53.206570 master-0 kubenswrapper[26425]: I0217 15:46:53.198700 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-dns-svc\") pod \"35c74619-8f42-4b4a-91eb-8343d4467669\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") "
Feb 17 15:46:53.206570 master-0 kubenswrapper[26425]: I0217 15:46:53.198817 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc6bq\" (UniqueName: \"kubernetes.io/projected/35c74619-8f42-4b4a-91eb-8343d4467669-kube-api-access-pc6bq\") pod \"35c74619-8f42-4b4a-91eb-8343d4467669\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") "
Feb 17 15:46:53.206570 master-0 kubenswrapper[26425]: I0217 15:46:53.198846 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-ovsdbserver-nb\") pod \"35c74619-8f42-4b4a-91eb-8343d4467669\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") "
Feb 17 15:46:53.206570 master-0 kubenswrapper[26425]: I0217 15:46:53.198893 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-ovsdbserver-sb\") pod \"35c74619-8f42-4b4a-91eb-8343d4467669\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") "
Feb 17 15:46:53.206570 master-0 kubenswrapper[26425]: I0217 15:46:53.198961 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-dns-swift-storage-0\") pod \"35c74619-8f42-4b4a-91eb-8343d4467669\" (UID: \"35c74619-8f42-4b4a-91eb-8343d4467669\") "
Feb 17 15:46:53.212077 master-0 kubenswrapper[26425]: I0217 15:46:53.212006 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c74619-8f42-4b4a-91eb-8343d4467669-kube-api-access-pc6bq" (OuterVolumeSpecName: "kube-api-access-pc6bq") pod "35c74619-8f42-4b4a-91eb-8343d4467669" (UID: "35c74619-8f42-4b4a-91eb-8343d4467669"). InnerVolumeSpecName "kube-api-access-pc6bq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:46:53.256245 master-0 kubenswrapper[26425]: I0217 15:46:53.256178 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-config" (OuterVolumeSpecName: "config") pod "35c74619-8f42-4b4a-91eb-8343d4467669" (UID: "35c74619-8f42-4b4a-91eb-8343d4467669"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:46:53.257602 master-0 kubenswrapper[26425]: I0217 15:46:53.257530 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "35c74619-8f42-4b4a-91eb-8343d4467669" (UID: "35c74619-8f42-4b4a-91eb-8343d4467669"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:46:53.263319 master-0 kubenswrapper[26425]: I0217 15:46:53.263222 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "35c74619-8f42-4b4a-91eb-8343d4467669" (UID: "35c74619-8f42-4b4a-91eb-8343d4467669"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:46:53.277853 master-0 kubenswrapper[26425]: I0217 15:46:53.277796 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "35c74619-8f42-4b4a-91eb-8343d4467669" (UID: "35c74619-8f42-4b4a-91eb-8343d4467669"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:46:53.288398 master-0 kubenswrapper[26425]: I0217 15:46:53.288333 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "35c74619-8f42-4b4a-91eb-8343d4467669" (UID: "35c74619-8f42-4b4a-91eb-8343d4467669"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:46:53.306942 master-0 kubenswrapper[26425]: I0217 15:46:53.302228 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:46:53.306942 master-0 kubenswrapper[26425]: I0217 15:46:53.302283 26425 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 17 15:46:53.306942 master-0 kubenswrapper[26425]: I0217 15:46:53.302294 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc6bq\" (UniqueName: \"kubernetes.io/projected/35c74619-8f42-4b4a-91eb-8343d4467669-kube-api-access-pc6bq\") on node \"master-0\" DevicePath \"\""
Feb 17 15:46:53.306942 master-0 kubenswrapper[26425]: I0217 15:46:53.302304 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 17 15:46:53.306942 master-0 kubenswrapper[26425]: I0217 15:46:53.302314 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 17 15:46:53.306942 master-0 kubenswrapper[26425]: I0217 15:46:53.302323 26425 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35c74619-8f42-4b4a-91eb-8343d4467669-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 17 15:46:53.602431 master-0 kubenswrapper[26425]: I0217 15:46:53.602282 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-676f54c559-bfcw7"]
Feb 17 15:46:53.992419 master-0 kubenswrapper[26425]: I0217 15:46:53.992350 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67dc4d787c-m7s4w"
Feb 17 15:46:54.048878 master-0 kubenswrapper[26425]: I0217 15:46:54.048813 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67dc4d787c-m7s4w"]
Feb 17 15:46:54.061573 master-0 kubenswrapper[26425]: I0217 15:46:54.061512 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67dc4d787c-m7s4w"]
Feb 17 15:46:54.411978 master-0 kubenswrapper[26425]: I0217 15:46:54.411923 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c74619-8f42-4b4a-91eb-8343d4467669" path="/var/lib/kubelet/pods/35c74619-8f42-4b4a-91eb-8343d4467669/volumes"
Feb 17 15:46:56.567672 master-0 kubenswrapper[26425]: I0217 15:46:56.567603 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-g9g6p"
Feb 17 15:46:56.578017 master-0 kubenswrapper[26425]: I0217 15:46:56.577958 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-406d-account-create-update-qv9dz"
Feb 17 15:46:56.583020 master-0 kubenswrapper[26425]: I0217 15:46:56.582998 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-be98-account-create-update-ccwpm"
Feb 17 15:46:56.686475 master-0 kubenswrapper[26425]: I0217 15:46:56.684212 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1197573-c94d-4d3c-9cd1-01b65ef0ec42-operator-scripts\") pod \"e1197573-c94d-4d3c-9cd1-01b65ef0ec42\" (UID: \"e1197573-c94d-4d3c-9cd1-01b65ef0ec42\") "
Feb 17 15:46:56.686475 master-0 kubenswrapper[26425]: I0217 15:46:56.684317 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrgqp\" (UniqueName: \"kubernetes.io/projected/e1197573-c94d-4d3c-9cd1-01b65ef0ec42-kube-api-access-zrgqp\") pod \"e1197573-c94d-4d3c-9cd1-01b65ef0ec42\" (UID: \"e1197573-c94d-4d3c-9cd1-01b65ef0ec42\") "
Feb 17 15:46:56.686475 master-0 kubenswrapper[26425]: I0217 15:46:56.684381 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqwcw\" (UniqueName: \"kubernetes.io/projected/aa34f651-bd45-4add-b97d-8e5194d3edf0-kube-api-access-qqwcw\") pod \"aa34f651-bd45-4add-b97d-8e5194d3edf0\" (UID: \"aa34f651-bd45-4add-b97d-8e5194d3edf0\") "
Feb 17 15:46:56.686475 master-0 kubenswrapper[26425]: I0217 15:46:56.684426 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/733a610a-8f50-42dd-b159-6fd6a8959971-operator-scripts\") pod \"733a610a-8f50-42dd-b159-6fd6a8959971\" (UID: \"733a610a-8f50-42dd-b159-6fd6a8959971\") "
Feb 17 15:46:56.686475 master-0 kubenswrapper[26425]: I0217 15:46:56.684449 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2fzl\" (UniqueName: \"kubernetes.io/projected/733a610a-8f50-42dd-b159-6fd6a8959971-kube-api-access-c2fzl\") pod \"733a610a-8f50-42dd-b159-6fd6a8959971\" (UID: \"733a610a-8f50-42dd-b159-6fd6a8959971\") "
Feb 17 15:46:56.686475 master-0 kubenswrapper[26425]: I0217 15:46:56.684495 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa34f651-bd45-4add-b97d-8e5194d3edf0-operator-scripts\") pod \"aa34f651-bd45-4add-b97d-8e5194d3edf0\" (UID: \"aa34f651-bd45-4add-b97d-8e5194d3edf0\") "
Feb 17 15:46:56.686475 master-0 kubenswrapper[26425]: I0217 15:46:56.686104 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1197573-c94d-4d3c-9cd1-01b65ef0ec42-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1197573-c94d-4d3c-9cd1-01b65ef0ec42" (UID: "e1197573-c94d-4d3c-9cd1-01b65ef0ec42"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:46:56.697105 master-0 kubenswrapper[26425]: I0217 15:46:56.696945 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1197573-c94d-4d3c-9cd1-01b65ef0ec42-kube-api-access-zrgqp" (OuterVolumeSpecName: "kube-api-access-zrgqp") pod "e1197573-c94d-4d3c-9cd1-01b65ef0ec42" (UID: "e1197573-c94d-4d3c-9cd1-01b65ef0ec42"). InnerVolumeSpecName "kube-api-access-zrgqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:46:56.697227 master-0 kubenswrapper[26425]: I0217 15:46:56.697187 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/733a610a-8f50-42dd-b159-6fd6a8959971-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "733a610a-8f50-42dd-b159-6fd6a8959971" (UID: "733a610a-8f50-42dd-b159-6fd6a8959971"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:46:56.698726 master-0 kubenswrapper[26425]: I0217 15:46:56.698496 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa34f651-bd45-4add-b97d-8e5194d3edf0-kube-api-access-qqwcw" (OuterVolumeSpecName: "kube-api-access-qqwcw") pod "aa34f651-bd45-4add-b97d-8e5194d3edf0" (UID: "aa34f651-bd45-4add-b97d-8e5194d3edf0"). InnerVolumeSpecName "kube-api-access-qqwcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:46:56.700865 master-0 kubenswrapper[26425]: I0217 15:46:56.700793 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa34f651-bd45-4add-b97d-8e5194d3edf0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa34f651-bd45-4add-b97d-8e5194d3edf0" (UID: "aa34f651-bd45-4add-b97d-8e5194d3edf0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:46:56.706574 master-0 kubenswrapper[26425]: I0217 15:46:56.706512 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/733a610a-8f50-42dd-b159-6fd6a8959971-kube-api-access-c2fzl" (OuterVolumeSpecName: "kube-api-access-c2fzl") pod "733a610a-8f50-42dd-b159-6fd6a8959971" (UID: "733a610a-8f50-42dd-b159-6fd6a8959971"). InnerVolumeSpecName "kube-api-access-c2fzl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:46:56.786829 master-0 kubenswrapper[26425]: I0217 15:46:56.786777 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1197573-c94d-4d3c-9cd1-01b65ef0ec42-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 17 15:46:56.786829 master-0 kubenswrapper[26425]: I0217 15:46:56.786820 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrgqp\" (UniqueName: \"kubernetes.io/projected/e1197573-c94d-4d3c-9cd1-01b65ef0ec42-kube-api-access-zrgqp\") on node \"master-0\" DevicePath \"\""
Feb 17 15:46:56.786829 master-0 kubenswrapper[26425]: I0217 15:46:56.786831 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqwcw\" (UniqueName: \"kubernetes.io/projected/aa34f651-bd45-4add-b97d-8e5194d3edf0-kube-api-access-qqwcw\") on node \"master-0\" DevicePath \"\""
Feb 17 15:46:56.786995 master-0 kubenswrapper[26425]: I0217 15:46:56.786842 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/733a610a-8f50-42dd-b159-6fd6a8959971-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 17 15:46:56.786995 master-0 kubenswrapper[26425]: I0217 15:46:56.786852 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2fzl\" (UniqueName: \"kubernetes.io/projected/733a610a-8f50-42dd-b159-6fd6a8959971-kube-api-access-c2fzl\") on node \"master-0\" DevicePath \"\""
Feb 17 15:46:56.786995 master-0 kubenswrapper[26425]: I0217 15:46:56.786860 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa34f651-bd45-4add-b97d-8e5194d3edf0-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 17 15:46:57.054571 master-0 kubenswrapper[26425]: I0217 15:46:57.054490 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g9g6p" event={"ID":"733a610a-8f50-42dd-b159-6fd6a8959971","Type":"ContainerDied","Data":"b99fe484ab8385b20d7906f35ed06a2907f9cc83855d5cf135061bb2e61cadaf"}
Feb 17 15:46:57.054571 master-0 kubenswrapper[26425]: I0217 15:46:57.054568 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b99fe484ab8385b20d7906f35ed06a2907f9cc83855d5cf135061bb2e61cadaf"
Feb 17 15:46:57.054571 master-0 kubenswrapper[26425]: I0217 15:46:57.054424 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-g9g6p"
Feb 17 15:46:57.075299 master-0 kubenswrapper[26425]: I0217 15:46:57.075142 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-be98-account-create-update-ccwpm" event={"ID":"aa34f651-bd45-4add-b97d-8e5194d3edf0","Type":"ContainerDied","Data":"16868686575f9b92e0721c57315ba74ae58d87b6d2699db850bbecabb61e9a1b"}
Feb 17 15:46:57.075299 master-0 kubenswrapper[26425]: I0217 15:46:57.075227 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16868686575f9b92e0721c57315ba74ae58d87b6d2699db850bbecabb61e9a1b"
Feb 17 15:46:57.075299 master-0 kubenswrapper[26425]: I0217 15:46:57.075173 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-be98-account-create-update-ccwpm"
Feb 17 15:46:57.079074 master-0 kubenswrapper[26425]: I0217 15:46:57.079007 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dqtpw" event={"ID":"a5d5e735-50f8-40f8-b410-7bf5d95fadc4","Type":"ContainerStarted","Data":"620c3945d3633a038fb50fa5312a4e140308e131791efb0608f980fba0b6aaf8"}
Feb 17 15:46:57.082803 master-0 kubenswrapper[26425]: I0217 15:46:57.082759 26425 generic.go:334] "Generic (PLEG): container finished" podID="e6b74389-6837-4f8d-8bd0-874f966d48cc" containerID="7cd0a0a009215c0c696273206b368632b3b50eaf1ea60119ece9b4c362a12348" exitCode=0
Feb 17 15:46:57.082875 master-0 kubenswrapper[26425]: I0217 15:46:57.082819 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676f54c559-bfcw7" event={"ID":"e6b74389-6837-4f8d-8bd0-874f966d48cc","Type":"ContainerDied","Data":"7cd0a0a009215c0c696273206b368632b3b50eaf1ea60119ece9b4c362a12348"}
Feb 17 15:46:57.082912 master-0 kubenswrapper[26425]: I0217 15:46:57.082894 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676f54c559-bfcw7" event={"ID":"e6b74389-6837-4f8d-8bd0-874f966d48cc","Type":"ContainerStarted","Data":"f8b54589142a097bc70c55016e38d2e4c6486090469bd4fde912a11fec092ab3"}
Feb 17 15:46:57.085935 master-0 kubenswrapper[26425]: I0217 15:46:57.085890 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-406d-account-create-update-qv9dz" event={"ID":"e1197573-c94d-4d3c-9cd1-01b65ef0ec42","Type":"ContainerDied","Data":"e11bf0900df8f146a43ceab4eb50067d908e55ca3aa817ba4c3b0702bc601505"}
Feb 17 15:46:57.085935 master-0 kubenswrapper[26425]: I0217 15:46:57.085923 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e11bf0900df8f146a43ceab4eb50067d908e55ca3aa817ba4c3b0702bc601505"
Feb 17 15:46:57.086069 master-0 kubenswrapper[26425]: I0217 15:46:57.085999 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-406d-account-create-update-qv9dz"
Feb 17 15:46:57.120406 master-0 kubenswrapper[26425]: I0217 15:46:57.116963 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-dqtpw" podStartSLOduration=2.184784769 podStartE2EDuration="8.116945421s" podCreationTimestamp="2026-02-17 15:46:49 +0000 UTC" firstStartedPulling="2026-02-17 15:46:50.819336285 +0000 UTC m=+1872.711060103" lastFinishedPulling="2026-02-17 15:46:56.751496947 +0000 UTC m=+1878.643220755" observedRunningTime="2026-02-17 15:46:57.108479308 +0000 UTC m=+1879.000203146" watchObservedRunningTime="2026-02-17 15:46:57.116945421 +0000 UTC m=+1879.008669239"
Feb 17 15:46:57.923307 master-0 kubenswrapper[26425]: I0217 15:46:57.923225 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 15:46:58.100214 master-0 kubenswrapper[26425]: I0217 15:46:58.100132 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676f54c559-bfcw7" event={"ID":"e6b74389-6837-4f8d-8bd0-874f966d48cc","Type":"ContainerStarted","Data":"d7c5a0004f88dfefdf7a1cd8d432d61e23341a1ea2ba41fcbe5a0664a569bdeb"}
Feb 17 15:46:58.100684 master-0 kubenswrapper[26425]: I0217 15:46:58.100667 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:46:58.155323 master-0 kubenswrapper[26425]: I0217 15:46:58.155207 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-676f54c559-bfcw7" podStartSLOduration=6.15518426 podStartE2EDuration="6.15518426s" podCreationTimestamp="2026-02-17 15:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:46:58.132287371 +0000 UTC m=+1880.024011219" watchObservedRunningTime="2026-02-17 15:46:58.15518426 +0000 UTC m=+1880.046908088"
Feb 17 15:47:02.158932 master-0 kubenswrapper[26425]: I0217 15:47:02.158874 26425 generic.go:334] "Generic (PLEG): container finished" podID="a5d5e735-50f8-40f8-b410-7bf5d95fadc4" containerID="620c3945d3633a038fb50fa5312a4e140308e131791efb0608f980fba0b6aaf8" exitCode=0
Feb 17 15:47:02.158932 master-0 kubenswrapper[26425]: I0217 15:47:02.158932 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dqtpw" event={"ID":"a5d5e735-50f8-40f8-b410-7bf5d95fadc4","Type":"ContainerDied","Data":"620c3945d3633a038fb50fa5312a4e140308e131791efb0608f980fba0b6aaf8"}
Feb 17 15:47:02.862275 master-0 kubenswrapper[26425]: I0217 15:47:02.862220 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-676f54c559-bfcw7"
Feb 17 15:47:02.969994 master-0 kubenswrapper[26425]: I0217 15:47:02.969922 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-55jsp"]
Feb 17 15:47:02.970425 master-0 kubenswrapper[26425]: I0217 15:47:02.970300 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fd49994df-55jsp" podUID="3519978a-5c7a-4466-9ad6-5750be0683e2" containerName="dnsmasq-dns" containerID="cri-o://92fee4fc7997cb3566a6b5d9545ef497de8da0d9eef826d1638fc018daec1f70" gracePeriod=10
Feb 17 15:47:03.191559 master-0 kubenswrapper[26425]: I0217 15:47:03.190591 26425 generic.go:334] "Generic (PLEG): container finished" podID="3519978a-5c7a-4466-9ad6-5750be0683e2" containerID="92fee4fc7997cb3566a6b5d9545ef497de8da0d9eef826d1638fc018daec1f70" exitCode=0
Feb 17 15:47:03.191559 master-0 kubenswrapper[26425]: I0217 15:47:03.190644 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-55jsp" event={"ID":"3519978a-5c7a-4466-9ad6-5750be0683e2","Type":"ContainerDied","Data":"92fee4fc7997cb3566a6b5d9545ef497de8da0d9eef826d1638fc018daec1f70"}
Feb 17 15:47:03.528572 master-0 kubenswrapper[26425]: I0217 15:47:03.519806 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-55jsp"
Feb 17 15:47:03.675382 master-0 kubenswrapper[26425]: I0217 15:47:03.675269 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-dns-svc\") pod \"3519978a-5c7a-4466-9ad6-5750be0683e2\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") "
Feb 17 15:47:03.675382 master-0 kubenswrapper[26425]: I0217 15:47:03.675337 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8zr4\" (UniqueName: \"kubernetes.io/projected/3519978a-5c7a-4466-9ad6-5750be0683e2-kube-api-access-d8zr4\") pod \"3519978a-5c7a-4466-9ad6-5750be0683e2\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") "
Feb 17 15:47:03.675382 master-0 kubenswrapper[26425]: I0217 15:47:03.675382 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-config\") pod \"3519978a-5c7a-4466-9ad6-5750be0683e2\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") "
Feb 17 15:47:03.675659 master-0 kubenswrapper[26425]: I0217 15:47:03.675426 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-ovsdbserver-nb\") pod \"3519978a-5c7a-4466-9ad6-5750be0683e2\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") "
Feb 17 15:47:03.675659 master-0 kubenswrapper[26425]: I0217 15:47:03.675480 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-ovsdbserver-sb\") pod \"3519978a-5c7a-4466-9ad6-5750be0683e2\" (UID: \"3519978a-5c7a-4466-9ad6-5750be0683e2\") "
Feb 17 15:47:03.678547 master-0 kubenswrapper[26425]: I0217 15:47:03.678440 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3519978a-5c7a-4466-9ad6-5750be0683e2-kube-api-access-d8zr4" (OuterVolumeSpecName: "kube-api-access-d8zr4") pod "3519978a-5c7a-4466-9ad6-5750be0683e2" (UID: "3519978a-5c7a-4466-9ad6-5750be0683e2"). InnerVolumeSpecName "kube-api-access-d8zr4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:47:03.679812 master-0 kubenswrapper[26425]: I0217 15:47:03.679730 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8zr4\" (UniqueName: \"kubernetes.io/projected/3519978a-5c7a-4466-9ad6-5750be0683e2-kube-api-access-d8zr4\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:03.734337 master-0 kubenswrapper[26425]: I0217 15:47:03.733178 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-config" (OuterVolumeSpecName: "config") pod "3519978a-5c7a-4466-9ad6-5750be0683e2" (UID: "3519978a-5c7a-4466-9ad6-5750be0683e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:47:03.737706 master-0 kubenswrapper[26425]: I0217 15:47:03.737654 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3519978a-5c7a-4466-9ad6-5750be0683e2" (UID: "3519978a-5c7a-4466-9ad6-5750be0683e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:47:03.744670 master-0 kubenswrapper[26425]: I0217 15:47:03.744614 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3519978a-5c7a-4466-9ad6-5750be0683e2" (UID: "3519978a-5c7a-4466-9ad6-5750be0683e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:47:03.755240 master-0 kubenswrapper[26425]: I0217 15:47:03.755180 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dqtpw"
Feb 17 15:47:03.760706 master-0 kubenswrapper[26425]: I0217 15:47:03.760638 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3519978a-5c7a-4466-9ad6-5750be0683e2" (UID: "3519978a-5c7a-4466-9ad6-5750be0683e2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:47:03.799704 master-0 kubenswrapper[26425]: I0217 15:47:03.784801 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:03.799704 master-0 kubenswrapper[26425]: I0217 15:47:03.784846 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:03.799704 master-0 kubenswrapper[26425]: I0217 15:47:03.784861 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:03.799704 master-0 kubenswrapper[26425]: I0217 15:47:03.784869 26425 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3519978a-5c7a-4466-9ad6-5750be0683e2-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:03.886153 master-0 kubenswrapper[26425]: I0217 15:47:03.886072 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-combined-ca-bundle\") pod \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\" (UID: \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\") "
Feb 17 15:47:03.886378 master-0 kubenswrapper[26425]: I0217 15:47:03.886210 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4vhn\" (UniqueName: \"kubernetes.io/projected/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-kube-api-access-b4vhn\") pod \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\" (UID: \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\") "
Feb 17 15:47:03.886378 master-0 kubenswrapper[26425]: I0217
15:47:03.886283 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-config-data\") pod \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\" (UID: \"a5d5e735-50f8-40f8-b410-7bf5d95fadc4\") " Feb 17 15:47:03.890831 master-0 kubenswrapper[26425]: I0217 15:47:03.890808 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-kube-api-access-b4vhn" (OuterVolumeSpecName: "kube-api-access-b4vhn") pod "a5d5e735-50f8-40f8-b410-7bf5d95fadc4" (UID: "a5d5e735-50f8-40f8-b410-7bf5d95fadc4"). InnerVolumeSpecName "kube-api-access-b4vhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:03.929816 master-0 kubenswrapper[26425]: I0217 15:47:03.929682 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5d5e735-50f8-40f8-b410-7bf5d95fadc4" (UID: "a5d5e735-50f8-40f8-b410-7bf5d95fadc4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:03.935550 master-0 kubenswrapper[26425]: I0217 15:47:03.935523 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-config-data" (OuterVolumeSpecName: "config-data") pod "a5d5e735-50f8-40f8-b410-7bf5d95fadc4" (UID: "a5d5e735-50f8-40f8-b410-7bf5d95fadc4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:03.988554 master-0 kubenswrapper[26425]: I0217 15:47:03.988106 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:03.988554 master-0 kubenswrapper[26425]: I0217 15:47:03.988147 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4vhn\" (UniqueName: \"kubernetes.io/projected/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-kube-api-access-b4vhn\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:03.988554 master-0 kubenswrapper[26425]: I0217 15:47:03.988160 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d5e735-50f8-40f8-b410-7bf5d95fadc4-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:04.202339 master-0 kubenswrapper[26425]: I0217 15:47:04.202082 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dqtpw" event={"ID":"a5d5e735-50f8-40f8-b410-7bf5d95fadc4","Type":"ContainerDied","Data":"547a38f967edcc10e771fec230fd6452a0f7a0e254015cb205156497674cc471"} Feb 17 15:47:04.202339 master-0 kubenswrapper[26425]: I0217 15:47:04.202114 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-dqtpw" Feb 17 15:47:04.202339 master-0 kubenswrapper[26425]: I0217 15:47:04.202130 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="547a38f967edcc10e771fec230fd6452a0f7a0e254015cb205156497674cc471" Feb 17 15:47:04.205769 master-0 kubenswrapper[26425]: I0217 15:47:04.205694 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-55jsp" event={"ID":"3519978a-5c7a-4466-9ad6-5750be0683e2","Type":"ContainerDied","Data":"3e2a990f071946ced6ab02f916d710024beb43ad13c5ddcdcf6b66a10e5ed526"} Feb 17 15:47:04.205769 master-0 kubenswrapper[26425]: I0217 15:47:04.205764 26425 scope.go:117] "RemoveContainer" containerID="92fee4fc7997cb3566a6b5d9545ef497de8da0d9eef826d1638fc018daec1f70" Feb 17 15:47:04.205962 master-0 kubenswrapper[26425]: I0217 15:47:04.205790 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-55jsp" Feb 17 15:47:04.233072 master-0 kubenswrapper[26425]: I0217 15:47:04.231432 26425 scope.go:117] "RemoveContainer" containerID="089ed7ad769beba95ba699f2afa1f85857193ed8e315d81ed338197ff5300062" Feb 17 15:47:04.271649 master-0 kubenswrapper[26425]: I0217 15:47:04.270653 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-55jsp"] Feb 17 15:47:04.286478 master-0 kubenswrapper[26425]: I0217 15:47:04.286269 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-55jsp"] Feb 17 15:47:04.361684 master-0 kubenswrapper[26425]: I0217 15:47:04.360015 26425 trace.go:236] Trace[1466613758]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (17-Feb-2026 15:47:02.957) (total time: 1402ms): Feb 17 15:47:04.361684 master-0 kubenswrapper[26425]: Trace[1466613758]: [1.402230467s] [1.402230467s] END Feb 17 15:47:04.499412 master-0 kubenswrapper[26425]: I0217 15:47:04.499352 26425 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3519978a-5c7a-4466-9ad6-5750be0683e2" path="/var/lib/kubelet/pods/3519978a-5c7a-4466-9ad6-5750be0683e2/volumes" Feb 17 15:47:04.509440 master-0 kubenswrapper[26425]: I0217 15:47:04.509007 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68b4779d45-4ql8j"] Feb 17 15:47:04.509440 master-0 kubenswrapper[26425]: E0217 15:47:04.509392 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c74619-8f42-4b4a-91eb-8343d4467669" containerName="init" Feb 17 15:47:04.509440 master-0 kubenswrapper[26425]: I0217 15:47:04.509407 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c74619-8f42-4b4a-91eb-8343d4467669" containerName="init" Feb 17 15:47:04.509440 master-0 kubenswrapper[26425]: E0217 15:47:04.509433 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="733a610a-8f50-42dd-b159-6fd6a8959971" containerName="mariadb-database-create" Feb 17 15:47:04.509440 master-0 kubenswrapper[26425]: I0217 15:47:04.509443 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="733a610a-8f50-42dd-b159-6fd6a8959971" containerName="mariadb-database-create" Feb 17 15:47:04.509810 master-0 kubenswrapper[26425]: E0217 15:47:04.509476 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3519978a-5c7a-4466-9ad6-5750be0683e2" containerName="dnsmasq-dns" Feb 17 15:47:04.509810 master-0 kubenswrapper[26425]: I0217 15:47:04.509489 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="3519978a-5c7a-4466-9ad6-5750be0683e2" containerName="dnsmasq-dns" Feb 17 15:47:04.509810 master-0 kubenswrapper[26425]: E0217 15:47:04.509507 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa34f651-bd45-4add-b97d-8e5194d3edf0" containerName="mariadb-account-create-update" Feb 17 15:47:04.509810 master-0 kubenswrapper[26425]: I0217 15:47:04.509516 26425 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="aa34f651-bd45-4add-b97d-8e5194d3edf0" containerName="mariadb-account-create-update" Feb 17 15:47:04.509810 master-0 kubenswrapper[26425]: E0217 15:47:04.509540 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3519978a-5c7a-4466-9ad6-5750be0683e2" containerName="init" Feb 17 15:47:04.509810 master-0 kubenswrapper[26425]: I0217 15:47:04.509549 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="3519978a-5c7a-4466-9ad6-5750be0683e2" containerName="init" Feb 17 15:47:04.509810 master-0 kubenswrapper[26425]: E0217 15:47:04.509578 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1197573-c94d-4d3c-9cd1-01b65ef0ec42" containerName="mariadb-account-create-update" Feb 17 15:47:04.509810 master-0 kubenswrapper[26425]: I0217 15:47:04.509588 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1197573-c94d-4d3c-9cd1-01b65ef0ec42" containerName="mariadb-account-create-update" Feb 17 15:47:04.509810 master-0 kubenswrapper[26425]: E0217 15:47:04.509611 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d5e735-50f8-40f8-b410-7bf5d95fadc4" containerName="keystone-db-sync" Feb 17 15:47:04.509810 master-0 kubenswrapper[26425]: I0217 15:47:04.509619 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d5e735-50f8-40f8-b410-7bf5d95fadc4" containerName="keystone-db-sync" Feb 17 15:47:04.509810 master-0 kubenswrapper[26425]: E0217 15:47:04.509631 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c74619-8f42-4b4a-91eb-8343d4467669" containerName="dnsmasq-dns" Feb 17 15:47:04.509810 master-0 kubenswrapper[26425]: I0217 15:47:04.509638 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c74619-8f42-4b4a-91eb-8343d4467669" containerName="dnsmasq-dns" Feb 17 15:47:04.512110 master-0 kubenswrapper[26425]: I0217 15:47:04.509929 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1197573-c94d-4d3c-9cd1-01b65ef0ec42" 
containerName="mariadb-account-create-update" Feb 17 15:47:04.512110 master-0 kubenswrapper[26425]: I0217 15:47:04.510809 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa34f651-bd45-4add-b97d-8e5194d3edf0" containerName="mariadb-account-create-update" Feb 17 15:47:04.512110 master-0 kubenswrapper[26425]: I0217 15:47:04.510843 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="3519978a-5c7a-4466-9ad6-5750be0683e2" containerName="dnsmasq-dns" Feb 17 15:47:04.512110 master-0 kubenswrapper[26425]: I0217 15:47:04.510857 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c74619-8f42-4b4a-91eb-8343d4467669" containerName="dnsmasq-dns" Feb 17 15:47:04.512110 master-0 kubenswrapper[26425]: I0217 15:47:04.510872 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="733a610a-8f50-42dd-b159-6fd6a8959971" containerName="mariadb-database-create" Feb 17 15:47:04.512110 master-0 kubenswrapper[26425]: I0217 15:47:04.510887 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d5e735-50f8-40f8-b410-7bf5d95fadc4" containerName="keystone-db-sync" Feb 17 15:47:04.512905 master-0 kubenswrapper[26425]: I0217 15:47:04.512200 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7jqwh"] Feb 17 15:47:04.513237 master-0 kubenswrapper[26425]: I0217 15:47:04.513185 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.513531 master-0 kubenswrapper[26425]: I0217 15:47:04.513502 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.516718 master-0 kubenswrapper[26425]: I0217 15:47:04.515367 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 15:47:04.518351 master-0 kubenswrapper[26425]: I0217 15:47:04.517472 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68b4779d45-4ql8j"] Feb 17 15:47:04.519052 master-0 kubenswrapper[26425]: I0217 15:47:04.519011 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 15:47:04.524589 master-0 kubenswrapper[26425]: I0217 15:47:04.519312 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 15:47:04.524589 master-0 kubenswrapper[26425]: I0217 15:47:04.520064 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 15:47:04.528397 master-0 kubenswrapper[26425]: I0217 15:47:04.528335 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7jqwh"] Feb 17 15:47:04.613682 master-0 kubenswrapper[26425]: I0217 15:47:04.613634 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-credential-keys\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.614011 master-0 kubenswrapper[26425]: I0217 15:47:04.613983 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-combined-ca-bundle\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.614163 master-0 kubenswrapper[26425]: I0217 
15:47:04.614140 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-config\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.614345 master-0 kubenswrapper[26425]: I0217 15:47:04.614324 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-scripts\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.614485 master-0 kubenswrapper[26425]: I0217 15:47:04.614445 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-ovsdbserver-sb\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.614673 master-0 kubenswrapper[26425]: I0217 15:47:04.614649 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g56jv\" (UniqueName: \"kubernetes.io/projected/090d05ed-b86b-4aba-bbe6-71eb213db07a-kube-api-access-g56jv\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.614826 master-0 kubenswrapper[26425]: I0217 15:47:04.614803 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdd94\" (UniqueName: \"kubernetes.io/projected/b8ac08fd-9e1a-4c05-9293-7805453eb135-kube-api-access-cdd94\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " 
pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.614952 master-0 kubenswrapper[26425]: I0217 15:47:04.614937 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-dns-swift-storage-0\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.615077 master-0 kubenswrapper[26425]: I0217 15:47:04.615061 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-ovsdbserver-nb\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.615158 master-0 kubenswrapper[26425]: I0217 15:47:04.615144 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-config-data\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.615248 master-0 kubenswrapper[26425]: I0217 15:47:04.615235 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-fernet-keys\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.615479 master-0 kubenswrapper[26425]: I0217 15:47:04.615405 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-dns-svc\") pod 
\"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.717208 master-0 kubenswrapper[26425]: I0217 15:47:04.717149 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-combined-ca-bundle\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.717399 master-0 kubenswrapper[26425]: I0217 15:47:04.717218 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-config\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.717662 master-0 kubenswrapper[26425]: I0217 15:47:04.717574 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-scripts\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.717754 master-0 kubenswrapper[26425]: I0217 15:47:04.717726 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-ovsdbserver-sb\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.720684 master-0 kubenswrapper[26425]: I0217 15:47:04.717915 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g56jv\" (UniqueName: \"kubernetes.io/projected/090d05ed-b86b-4aba-bbe6-71eb213db07a-kube-api-access-g56jv\") pod 
\"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.720684 master-0 kubenswrapper[26425]: I0217 15:47:04.718070 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdd94\" (UniqueName: \"kubernetes.io/projected/b8ac08fd-9e1a-4c05-9293-7805453eb135-kube-api-access-cdd94\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.720684 master-0 kubenswrapper[26425]: I0217 15:47:04.718176 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-dns-swift-storage-0\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.720684 master-0 kubenswrapper[26425]: I0217 15:47:04.718254 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-config\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.720684 master-0 kubenswrapper[26425]: I0217 15:47:04.718351 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-ovsdbserver-nb\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.720684 master-0 kubenswrapper[26425]: I0217 15:47:04.718401 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-config-data\") pod 
\"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.720684 master-0 kubenswrapper[26425]: I0217 15:47:04.718503 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-fernet-keys\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.720684 master-0 kubenswrapper[26425]: I0217 15:47:04.718601 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-dns-svc\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.720684 master-0 kubenswrapper[26425]: I0217 15:47:04.718659 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-ovsdbserver-sb\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.720684 master-0 kubenswrapper[26425]: I0217 15:47:04.718749 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-credential-keys\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.720684 master-0 kubenswrapper[26425]: I0217 15:47:04.719676 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-dns-svc\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: 
\"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.720684 master-0 kubenswrapper[26425]: I0217 15:47:04.719800 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-dns-swift-storage-0\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.720684 master-0 kubenswrapper[26425]: I0217 15:47:04.719931 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-ovsdbserver-nb\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.721122 master-0 kubenswrapper[26425]: I0217 15:47:04.720988 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-combined-ca-bundle\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.721282 master-0 kubenswrapper[26425]: I0217 15:47:04.721250 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-fernet-keys\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.722085 master-0 kubenswrapper[26425]: I0217 15:47:04.722054 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-config-data\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " 
pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.722641 master-0 kubenswrapper[26425]: I0217 15:47:04.722608 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-credential-keys\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.723518 master-0 kubenswrapper[26425]: I0217 15:47:04.723428 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-scripts\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.770481 master-0 kubenswrapper[26425]: I0217 15:47:04.758347 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-hgvqn"] Feb 17 15:47:04.770481 master-0 kubenswrapper[26425]: I0217 15:47:04.760105 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-hgvqn" Feb 17 15:47:04.770481 master-0 kubenswrapper[26425]: I0217 15:47:04.760308 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdd94\" (UniqueName: \"kubernetes.io/projected/b8ac08fd-9e1a-4c05-9293-7805453eb135-kube-api-access-cdd94\") pod \"keystone-bootstrap-7jqwh\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.788380 master-0 kubenswrapper[26425]: I0217 15:47:04.788310 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g56jv\" (UniqueName: \"kubernetes.io/projected/090d05ed-b86b-4aba-bbe6-71eb213db07a-kube-api-access-g56jv\") pod \"dnsmasq-dns-68b4779d45-4ql8j\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.835493 master-0 kubenswrapper[26425]: I0217 15:47:04.830431 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-hgvqn"] Feb 17 15:47:04.845795 master-0 kubenswrapper[26425]: I0217 15:47:04.845717 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:04.856189 master-0 kubenswrapper[26425]: I0217 15:47:04.856123 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:04.923263 master-0 kubenswrapper[26425]: I0217 15:47:04.923202 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad487fea-08d0-4fe4-98bc-39c6634cae41-operator-scripts\") pod \"ironic-db-create-hgvqn\" (UID: \"ad487fea-08d0-4fe4-98bc-39c6634cae41\") " pod="openstack/ironic-db-create-hgvqn" Feb 17 15:47:04.923447 master-0 kubenswrapper[26425]: I0217 15:47:04.923282 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z95fh\" (UniqueName: \"kubernetes.io/projected/ad487fea-08d0-4fe4-98bc-39c6634cae41-kube-api-access-z95fh\") pod \"ironic-db-create-hgvqn\" (UID: \"ad487fea-08d0-4fe4-98bc-39c6634cae41\") " pod="openstack/ironic-db-create-hgvqn" Feb 17 15:47:04.957873 master-0 kubenswrapper[26425]: I0217 15:47:04.957785 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kr2xk"] Feb 17 15:47:04.959905 master-0 kubenswrapper[26425]: I0217 15:47:04.959869 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kr2xk" Feb 17 15:47:04.962781 master-0 kubenswrapper[26425]: I0217 15:47:04.962510 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 15:47:04.962781 master-0 kubenswrapper[26425]: I0217 15:47:04.962594 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 15:47:04.981426 master-0 kubenswrapper[26425]: I0217 15:47:04.981228 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-04ef3-db-sync-smx72"] Feb 17 15:47:04.982734 master-0 kubenswrapper[26425]: I0217 15:47:04.982706 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:04.985002 master-0 kubenswrapper[26425]: I0217 15:47:04.984963 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-04ef3-config-data" Feb 17 15:47:04.985082 master-0 kubenswrapper[26425]: I0217 15:47:04.985058 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-04ef3-scripts" Feb 17 15:47:05.024813 master-0 kubenswrapper[26425]: I0217 15:47:05.024763 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad487fea-08d0-4fe4-98bc-39c6634cae41-operator-scripts\") pod \"ironic-db-create-hgvqn\" (UID: \"ad487fea-08d0-4fe4-98bc-39c6634cae41\") " pod="openstack/ironic-db-create-hgvqn" Feb 17 15:47:05.024923 master-0 kubenswrapper[26425]: I0217 15:47:05.024836 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z95fh\" (UniqueName: \"kubernetes.io/projected/ad487fea-08d0-4fe4-98bc-39c6634cae41-kube-api-access-z95fh\") pod \"ironic-db-create-hgvqn\" (UID: \"ad487fea-08d0-4fe4-98bc-39c6634cae41\") " pod="openstack/ironic-db-create-hgvqn" Feb 17 15:47:05.025705 master-0 kubenswrapper[26425]: I0217 15:47:05.025671 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad487fea-08d0-4fe4-98bc-39c6634cae41-operator-scripts\") pod \"ironic-db-create-hgvqn\" (UID: \"ad487fea-08d0-4fe4-98bc-39c6634cae41\") " pod="openstack/ironic-db-create-hgvqn" Feb 17 15:47:05.102534 master-0 kubenswrapper[26425]: I0217 15:47:05.102222 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kr2xk"] Feb 17 15:47:05.128598 master-0 kubenswrapper[26425]: I0217 15:47:05.128540 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-db-sync-smx72"] Feb 17 15:47:05.129932 master-0 
kubenswrapper[26425]: I0217 15:47:05.129898 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-db-sync-config-data\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.130051 master-0 kubenswrapper[26425]: I0217 15:47:05.130036 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-etc-machine-id\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.130168 master-0 kubenswrapper[26425]: I0217 15:47:05.130144 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzlq9\" (UniqueName: \"kubernetes.io/projected/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-kube-api-access-vzlq9\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.130294 master-0 kubenswrapper[26425]: I0217 15:47:05.130277 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f2e8e8e-7b87-4127-b977-62f0c1f29717-config\") pod \"neutron-db-sync-kr2xk\" (UID: \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\") " pod="openstack/neutron-db-sync-kr2xk" Feb 17 15:47:05.130396 master-0 kubenswrapper[26425]: I0217 15:47:05.130378 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-combined-ca-bundle\") pod \"cinder-04ef3-db-sync-smx72\" (UID: 
\"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.130545 master-0 kubenswrapper[26425]: I0217 15:47:05.130527 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5qd7\" (UniqueName: \"kubernetes.io/projected/0f2e8e8e-7b87-4127-b977-62f0c1f29717-kube-api-access-n5qd7\") pod \"neutron-db-sync-kr2xk\" (UID: \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\") " pod="openstack/neutron-db-sync-kr2xk" Feb 17 15:47:05.130807 master-0 kubenswrapper[26425]: I0217 15:47:05.130774 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2e8e8e-7b87-4127-b977-62f0c1f29717-combined-ca-bundle\") pod \"neutron-db-sync-kr2xk\" (UID: \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\") " pod="openstack/neutron-db-sync-kr2xk" Feb 17 15:47:05.130948 master-0 kubenswrapper[26425]: I0217 15:47:05.130929 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-scripts\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.131074 master-0 kubenswrapper[26425]: I0217 15:47:05.131054 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-config-data\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.241515 master-0 kubenswrapper[26425]: I0217 15:47:05.241357 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0f2e8e8e-7b87-4127-b977-62f0c1f29717-combined-ca-bundle\") pod \"neutron-db-sync-kr2xk\" (UID: \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\") " pod="openstack/neutron-db-sync-kr2xk" Feb 17 15:47:05.242259 master-0 kubenswrapper[26425]: I0217 15:47:05.241572 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-scripts\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.242259 master-0 kubenswrapper[26425]: I0217 15:47:05.241618 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-config-data\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.242259 master-0 kubenswrapper[26425]: I0217 15:47:05.241649 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-db-sync-config-data\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.242259 master-0 kubenswrapper[26425]: I0217 15:47:05.241690 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-etc-machine-id\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.242259 master-0 kubenswrapper[26425]: I0217 15:47:05.241732 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzlq9\" (UniqueName: 
\"kubernetes.io/projected/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-kube-api-access-vzlq9\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.242259 master-0 kubenswrapper[26425]: I0217 15:47:05.241789 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f2e8e8e-7b87-4127-b977-62f0c1f29717-config\") pod \"neutron-db-sync-kr2xk\" (UID: \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\") " pod="openstack/neutron-db-sync-kr2xk" Feb 17 15:47:05.242259 master-0 kubenswrapper[26425]: I0217 15:47:05.241829 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-combined-ca-bundle\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.242259 master-0 kubenswrapper[26425]: I0217 15:47:05.241890 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5qd7\" (UniqueName: \"kubernetes.io/projected/0f2e8e8e-7b87-4127-b977-62f0c1f29717-kube-api-access-n5qd7\") pod \"neutron-db-sync-kr2xk\" (UID: \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\") " pod="openstack/neutron-db-sync-kr2xk" Feb 17 15:47:05.242834 master-0 kubenswrapper[26425]: I0217 15:47:05.242547 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-etc-machine-id\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.246214 master-0 kubenswrapper[26425]: I0217 15:47:05.246159 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0f2e8e8e-7b87-4127-b977-62f0c1f29717-combined-ca-bundle\") pod \"neutron-db-sync-kr2xk\" (UID: \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\") " pod="openstack/neutron-db-sync-kr2xk" Feb 17 15:47:05.247674 master-0 kubenswrapper[26425]: I0217 15:47:05.247629 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-config-data\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.248594 master-0 kubenswrapper[26425]: I0217 15:47:05.248381 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-combined-ca-bundle\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.248594 master-0 kubenswrapper[26425]: I0217 15:47:05.248428 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-scripts\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.258699 master-0 kubenswrapper[26425]: I0217 15:47:05.257907 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f2e8e8e-7b87-4127-b977-62f0c1f29717-config\") pod \"neutron-db-sync-kr2xk\" (UID: \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\") " pod="openstack/neutron-db-sync-kr2xk" Feb 17 15:47:05.262849 master-0 kubenswrapper[26425]: I0217 15:47:05.262819 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-db-sync-config-data\") pod 
\"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:05.968541 master-0 kubenswrapper[26425]: I0217 15:47:05.968404 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z95fh\" (UniqueName: \"kubernetes.io/projected/ad487fea-08d0-4fe4-98bc-39c6634cae41-kube-api-access-z95fh\") pod \"ironic-db-create-hgvqn\" (UID: \"ad487fea-08d0-4fe4-98bc-39c6634cae41\") " pod="openstack/ironic-db-create-hgvqn" Feb 17 15:47:05.972483 master-0 kubenswrapper[26425]: I0217 15:47:05.970573 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-874a-account-create-update-lhwlv"] Feb 17 15:47:05.972483 master-0 kubenswrapper[26425]: I0217 15:47:05.971867 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-874a-account-create-update-lhwlv" Feb 17 15:47:05.976492 master-0 kubenswrapper[26425]: I0217 15:47:05.973752 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Feb 17 15:47:06.070220 master-0 kubenswrapper[26425]: I0217 15:47:06.070092 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-hgvqn" Feb 17 15:47:06.172508 master-0 kubenswrapper[26425]: I0217 15:47:06.170711 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6g6q\" (UniqueName: \"kubernetes.io/projected/a20a01dc-3034-43a8-ad78-2c3b1497c20a-kube-api-access-g6g6q\") pod \"ironic-874a-account-create-update-lhwlv\" (UID: \"a20a01dc-3034-43a8-ad78-2c3b1497c20a\") " pod="openstack/ironic-874a-account-create-update-lhwlv" Feb 17 15:47:06.172508 master-0 kubenswrapper[26425]: I0217 15:47:06.170841 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a20a01dc-3034-43a8-ad78-2c3b1497c20a-operator-scripts\") pod \"ironic-874a-account-create-update-lhwlv\" (UID: \"a20a01dc-3034-43a8-ad78-2c3b1497c20a\") " pod="openstack/ironic-874a-account-create-update-lhwlv" Feb 17 15:47:06.280648 master-0 kubenswrapper[26425]: I0217 15:47:06.277799 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6g6q\" (UniqueName: \"kubernetes.io/projected/a20a01dc-3034-43a8-ad78-2c3b1497c20a-kube-api-access-g6g6q\") pod \"ironic-874a-account-create-update-lhwlv\" (UID: \"a20a01dc-3034-43a8-ad78-2c3b1497c20a\") " pod="openstack/ironic-874a-account-create-update-lhwlv" Feb 17 15:47:06.280648 master-0 kubenswrapper[26425]: I0217 15:47:06.278133 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a20a01dc-3034-43a8-ad78-2c3b1497c20a-operator-scripts\") pod \"ironic-874a-account-create-update-lhwlv\" (UID: \"a20a01dc-3034-43a8-ad78-2c3b1497c20a\") " pod="openstack/ironic-874a-account-create-update-lhwlv" Feb 17 15:47:06.280648 master-0 kubenswrapper[26425]: I0217 15:47:06.281399 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a20a01dc-3034-43a8-ad78-2c3b1497c20a-operator-scripts\") pod \"ironic-874a-account-create-update-lhwlv\" (UID: \"a20a01dc-3034-43a8-ad78-2c3b1497c20a\") " pod="openstack/ironic-874a-account-create-update-lhwlv" Feb 17 15:47:06.363982 master-0 kubenswrapper[26425]: I0217 15:47:06.356192 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-874a-account-create-update-lhwlv"] Feb 17 15:47:06.369571 master-0 kubenswrapper[26425]: I0217 15:47:06.365411 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzlq9\" (UniqueName: \"kubernetes.io/projected/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-kube-api-access-vzlq9\") pod \"cinder-04ef3-db-sync-smx72\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:06.369571 master-0 kubenswrapper[26425]: I0217 15:47:06.368636 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5qd7\" (UniqueName: \"kubernetes.io/projected/0f2e8e8e-7b87-4127-b977-62f0c1f29717-kube-api-access-n5qd7\") pod \"neutron-db-sync-kr2xk\" (UID: \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\") " pod="openstack/neutron-db-sync-kr2xk" Feb 17 15:47:06.381297 master-0 kubenswrapper[26425]: I0217 15:47:06.381225 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7jqwh"] Feb 17 15:47:06.426348 master-0 kubenswrapper[26425]: I0217 15:47:06.426295 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68b4779d45-4ql8j"] Feb 17 15:47:06.495267 master-0 kubenswrapper[26425]: I0217 15:47:06.494829 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kr2xk" Feb 17 15:47:06.531344 master-0 kubenswrapper[26425]: I0217 15:47:06.531301 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:06.578768 master-0 kubenswrapper[26425]: I0217 15:47:06.578688 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6g6q\" (UniqueName: \"kubernetes.io/projected/a20a01dc-3034-43a8-ad78-2c3b1497c20a-kube-api-access-g6g6q\") pod \"ironic-874a-account-create-update-lhwlv\" (UID: \"a20a01dc-3034-43a8-ad78-2c3b1497c20a\") " pod="openstack/ironic-874a-account-create-update-lhwlv" Feb 17 15:47:06.639656 master-0 kubenswrapper[26425]: I0217 15:47:06.637020 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-874a-account-create-update-lhwlv" Feb 17 15:47:06.662696 master-0 kubenswrapper[26425]: I0217 15:47:06.652784 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-hgvqn"] Feb 17 15:47:06.670970 master-0 kubenswrapper[26425]: W0217 15:47:06.670904 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad487fea_08d0_4fe4_98bc_39c6634cae41.slice/crio-d631e355b6d068844d0c2f0c90395037660b03b130dd81b67fb889d45532b677 WatchSource:0}: Error finding container d631e355b6d068844d0c2f0c90395037660b03b130dd81b67fb889d45532b677: Status 404 returned error can't find the container with id d631e355b6d068844d0c2f0c90395037660b03b130dd81b67fb889d45532b677 Feb 17 15:47:06.834599 master-0 kubenswrapper[26425]: I0217 15:47:06.834551 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tgjmt"] Feb 17 15:47:06.852838 master-0 kubenswrapper[26425]: I0217 15:47:06.852564 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tgjmt"] Feb 17 15:47:06.852838 master-0 kubenswrapper[26425]: I0217 15:47:06.852700 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:06.863615 master-0 kubenswrapper[26425]: I0217 15:47:06.863523 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 15:47:06.863734 master-0 kubenswrapper[26425]: I0217 15:47:06.863720 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 15:47:06.900281 master-0 kubenswrapper[26425]: I0217 15:47:06.900213 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68b4779d45-4ql8j"] Feb 17 15:47:06.916311 master-0 kubenswrapper[26425]: I0217 15:47:06.915790 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-scripts\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:06.916311 master-0 kubenswrapper[26425]: I0217 15:47:06.915853 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-logs\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:06.916311 master-0 kubenswrapper[26425]: I0217 15:47:06.915913 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntfcg\" (UniqueName: \"kubernetes.io/projected/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-kube-api-access-ntfcg\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:06.916311 master-0 kubenswrapper[26425]: I0217 15:47:06.915935 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-combined-ca-bundle\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:06.916311 master-0 kubenswrapper[26425]: I0217 15:47:06.916056 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-config-data\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:06.994493 master-0 kubenswrapper[26425]: I0217 15:47:06.991583 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d687b68b9-7r7fm"] Feb 17 15:47:06.994493 master-0 kubenswrapper[26425]: I0217 15:47:06.993737 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.001526 master-0 kubenswrapper[26425]: I0217 15:47:07.000727 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d687b68b9-7r7fm"] Feb 17 15:47:07.018086 master-0 kubenswrapper[26425]: I0217 15:47:07.018017 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-ovsdbserver-nb\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.018257 master-0 kubenswrapper[26425]: I0217 15:47:07.018112 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn5tz\" (UniqueName: \"kubernetes.io/projected/93130d0e-e444-4ec9-b294-aa8240b342ee-kube-api-access-nn5tz\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") 
" pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.018257 master-0 kubenswrapper[26425]: I0217 15:47:07.018188 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-scripts\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:07.018257 master-0 kubenswrapper[26425]: I0217 15:47:07.018223 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-logs\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:07.018415 master-0 kubenswrapper[26425]: I0217 15:47:07.018296 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntfcg\" (UniqueName: \"kubernetes.io/projected/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-kube-api-access-ntfcg\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:07.018415 master-0 kubenswrapper[26425]: I0217 15:47:07.018321 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-dns-swift-storage-0\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.018415 master-0 kubenswrapper[26425]: I0217 15:47:07.018350 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-combined-ca-bundle\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " 
pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:07.018415 master-0 kubenswrapper[26425]: I0217 15:47:07.018401 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-config\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.018608 master-0 kubenswrapper[26425]: I0217 15:47:07.018429 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-ovsdbserver-sb\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.036399 master-0 kubenswrapper[26425]: I0217 15:47:07.025865 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-logs\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:07.036399 master-0 kubenswrapper[26425]: I0217 15:47:07.025940 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-config-data\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:07.036399 master-0 kubenswrapper[26425]: I0217 15:47:07.026596 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-dns-svc\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " 
pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.036399 master-0 kubenswrapper[26425]: I0217 15:47:07.028262 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-combined-ca-bundle\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:07.036399 master-0 kubenswrapper[26425]: I0217 15:47:07.032273 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-scripts\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:07.036399 master-0 kubenswrapper[26425]: I0217 15:47:07.035071 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-config-data\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:07.046435 master-0 kubenswrapper[26425]: I0217 15:47:07.046152 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntfcg\" (UniqueName: \"kubernetes.io/projected/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-kube-api-access-ntfcg\") pod \"placement-db-sync-tgjmt\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:07.047675 master-0 kubenswrapper[26425]: I0217 15:47:07.047636 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kr2xk"] Feb 17 15:47:07.142568 master-0 kubenswrapper[26425]: I0217 15:47:07.142503 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-dns-svc\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.142795 master-0 kubenswrapper[26425]: I0217 15:47:07.142608 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-ovsdbserver-nb\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.142795 master-0 kubenswrapper[26425]: I0217 15:47:07.142637 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn5tz\" (UniqueName: \"kubernetes.io/projected/93130d0e-e444-4ec9-b294-aa8240b342ee-kube-api-access-nn5tz\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.142795 master-0 kubenswrapper[26425]: I0217 15:47:07.142697 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-dns-swift-storage-0\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.142795 master-0 kubenswrapper[26425]: I0217 15:47:07.142731 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-config\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.142795 master-0 kubenswrapper[26425]: I0217 15:47:07.142750 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-ovsdbserver-sb\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.143730 master-0 kubenswrapper[26425]: I0217 15:47:07.143699 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-ovsdbserver-sb\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.144992 master-0 kubenswrapper[26425]: I0217 15:47:07.144284 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-dns-swift-storage-0\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.147856 master-0 kubenswrapper[26425]: I0217 15:47:07.147091 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-ovsdbserver-nb\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.149956 master-0 kubenswrapper[26425]: I0217 15:47:07.149918 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-config\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.152263 master-0 kubenswrapper[26425]: I0217 15:47:07.152231 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-dns-svc\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.176600 master-0 kubenswrapper[26425]: I0217 15:47:07.176551 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn5tz\" (UniqueName: \"kubernetes.io/projected/93130d0e-e444-4ec9-b294-aa8240b342ee-kube-api-access-nn5tz\") pod \"dnsmasq-dns-d687b68b9-7r7fm\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") " pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.226727 master-0 kubenswrapper[26425]: I0217 15:47:07.225378 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:07.238515 master-0 kubenswrapper[26425]: I0217 15:47:07.238328 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-db-sync-smx72"] Feb 17 15:47:07.295596 master-0 kubenswrapper[26425]: I0217 15:47:07.295466 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kr2xk" event={"ID":"0f2e8e8e-7b87-4127-b977-62f0c1f29717","Type":"ContainerStarted","Data":"222265af07a8b6113048150b1d3cad7185d108b805029d3a8ccc3e5518b36b8a"} Feb 17 15:47:07.298776 master-0 kubenswrapper[26425]: I0217 15:47:07.298357 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7jqwh" event={"ID":"b8ac08fd-9e1a-4c05-9293-7805453eb135","Type":"ContainerStarted","Data":"334d8a3f6f5614d74f75a560dbb15127a82e2ef6636e88347a93350957668438"} Feb 17 15:47:07.298776 master-0 kubenswrapper[26425]: I0217 15:47:07.298415 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7jqwh" event={"ID":"b8ac08fd-9e1a-4c05-9293-7805453eb135","Type":"ContainerStarted","Data":"58736aacfe2d7f7797a5d49e7e562dfaa536bbe5bcb6b936d52fd48b1a9fbd9a"} Feb 17 15:47:07.307565 master-0 
kubenswrapper[26425]: I0217 15:47:07.307484 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-db-sync-smx72" event={"ID":"92cdc0bf-17bd-4554-811c-89cf8bc1a52c","Type":"ContainerStarted","Data":"418863b461328ba9361190919253798b5b06eac0e815837f9299ecf9d9141b2f"} Feb 17 15:47:07.313578 master-0 kubenswrapper[26425]: I0217 15:47:07.313446 26425 generic.go:334] "Generic (PLEG): container finished" podID="090d05ed-b86b-4aba-bbe6-71eb213db07a" containerID="2d19d4d8a7c824ac1007d2081da170479f1466f0f1c68a1d2167d4e7b924910d" exitCode=0 Feb 17 15:47:07.313578 master-0 kubenswrapper[26425]: I0217 15:47:07.313531 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" event={"ID":"090d05ed-b86b-4aba-bbe6-71eb213db07a","Type":"ContainerDied","Data":"2d19d4d8a7c824ac1007d2081da170479f1466f0f1c68a1d2167d4e7b924910d"} Feb 17 15:47:07.313578 master-0 kubenswrapper[26425]: I0217 15:47:07.313557 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" event={"ID":"090d05ed-b86b-4aba-bbe6-71eb213db07a","Type":"ContainerStarted","Data":"044500b104e210766725fadd0c8d499ae2feca8a6d720e8bd71ee95b7432a49b"} Feb 17 15:47:07.316070 master-0 kubenswrapper[26425]: I0217 15:47:07.316029 26425 generic.go:334] "Generic (PLEG): container finished" podID="ad487fea-08d0-4fe4-98bc-39c6634cae41" containerID="ae1e2b0f2885ad083bd79296b2d3432535b874e28905ecce3a8eef38ecc1ddfa" exitCode=0 Feb 17 15:47:07.316120 master-0 kubenswrapper[26425]: I0217 15:47:07.316084 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-hgvqn" event={"ID":"ad487fea-08d0-4fe4-98bc-39c6634cae41","Type":"ContainerDied","Data":"ae1e2b0f2885ad083bd79296b2d3432535b874e28905ecce3a8eef38ecc1ddfa"} Feb 17 15:47:07.316120 master-0 kubenswrapper[26425]: I0217 15:47:07.316113 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-hgvqn" 
event={"ID":"ad487fea-08d0-4fe4-98bc-39c6634cae41","Type":"ContainerStarted","Data":"d631e355b6d068844d0c2f0c90395037660b03b130dd81b67fb889d45532b677"} Feb 17 15:47:07.342380 master-0 kubenswrapper[26425]: I0217 15:47:07.342049 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:07.405899 master-0 kubenswrapper[26425]: I0217 15:47:07.405818 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7jqwh" podStartSLOduration=3.405795854 podStartE2EDuration="3.405795854s" podCreationTimestamp="2026-02-17 15:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:07.328429949 +0000 UTC m=+1889.220153777" watchObservedRunningTime="2026-02-17 15:47:07.405795854 +0000 UTC m=+1889.297519672" Feb 17 15:47:07.483575 master-0 kubenswrapper[26425]: I0217 15:47:07.483519 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-874a-account-create-update-lhwlv"] Feb 17 15:47:07.672578 master-0 kubenswrapper[26425]: E0217 15:47:07.672540 26425 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 17 15:47:07.672578 master-0 kubenswrapper[26425]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/090d05ed-b86b-4aba-bbe6-71eb213db07a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 17 15:47:07.672578 master-0 kubenswrapper[26425]: > podSandboxID="044500b104e210766725fadd0c8d499ae2feca8a6d720e8bd71ee95b7432a49b" Feb 17 15:47:07.672863 master-0 kubenswrapper[26425]: E0217 15:47:07.672841 26425 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 17 15:47:07.672863 master-0 kubenswrapper[26425]: container 
&Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f4h56chf9h698h64dh98h5c7h5b9h5bch55ch564h59h4h5f4hc9hbh5dch58bh54dh688hc6h56ch88h677h5f4h677h5ffh646h8fh547h64ch655q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g56jv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&P
robe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000800000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-68b4779d45-4ql8j_openstack(090d05ed-b86b-4aba-bbe6-71eb213db07a): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/090d05ed-b86b-4aba-bbe6-71eb213db07a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 17 15:47:07.672863 master-0 kubenswrapper[26425]: > logger="UnhandledError" Feb 17 15:47:07.676115 master-0 kubenswrapper[26425]: E0217 15:47:07.674504 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/090d05ed-b86b-4aba-bbe6-71eb213db07a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" 
podUID="090d05ed-b86b-4aba-bbe6-71eb213db07a" Feb 17 15:47:07.771349 master-0 kubenswrapper[26425]: I0217 15:47:07.768914 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tgjmt"] Feb 17 15:47:07.933350 master-0 kubenswrapper[26425]: I0217 15:47:07.933277 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d687b68b9-7r7fm"] Feb 17 15:47:07.938801 master-0 kubenswrapper[26425]: W0217 15:47:07.938725 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93130d0e_e444_4ec9_b294_aa8240b342ee.slice/crio-f633d5ecac7be141e2e640199ac58ac2c385c868624d61c3201219bb9d249c26 WatchSource:0}: Error finding container f633d5ecac7be141e2e640199ac58ac2c385c868624d61c3201219bb9d249c26: Status 404 returned error can't find the container with id f633d5ecac7be141e2e640199ac58ac2c385c868624d61c3201219bb9d249c26 Feb 17 15:47:08.335887 master-0 kubenswrapper[26425]: I0217 15:47:08.335735 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tgjmt" event={"ID":"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c","Type":"ContainerStarted","Data":"4202859d502f0c80aa3da89f0df7f218754692be139c3d47ea0460e13c17414d"} Feb 17 15:47:08.346626 master-0 kubenswrapper[26425]: I0217 15:47:08.346212 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" event={"ID":"93130d0e-e444-4ec9-b294-aa8240b342ee","Type":"ContainerStarted","Data":"f633d5ecac7be141e2e640199ac58ac2c385c868624d61c3201219bb9d249c26"} Feb 17 15:47:08.349340 master-0 kubenswrapper[26425]: I0217 15:47:08.348531 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kr2xk" event={"ID":"0f2e8e8e-7b87-4127-b977-62f0c1f29717","Type":"ContainerStarted","Data":"f43308a817f8761f5f0118d50e70bd080cb1118c64446507e6a98ff0d7fe6314"} Feb 17 15:47:08.352636 master-0 kubenswrapper[26425]: I0217 
15:47:08.352252 26425 generic.go:334] "Generic (PLEG): container finished" podID="a20a01dc-3034-43a8-ad78-2c3b1497c20a" containerID="d4a85a17d489bdc50a243e8f0ad1ea3a47c418d848059fa77e76575878a0991f" exitCode=0 Feb 17 15:47:08.352636 master-0 kubenswrapper[26425]: I0217 15:47:08.352521 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-874a-account-create-update-lhwlv" event={"ID":"a20a01dc-3034-43a8-ad78-2c3b1497c20a","Type":"ContainerDied","Data":"d4a85a17d489bdc50a243e8f0ad1ea3a47c418d848059fa77e76575878a0991f"} Feb 17 15:47:08.352636 master-0 kubenswrapper[26425]: I0217 15:47:08.352556 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-874a-account-create-update-lhwlv" event={"ID":"a20a01dc-3034-43a8-ad78-2c3b1497c20a","Type":"ContainerStarted","Data":"1034b8a37253e0dc35f673b305e1445cfee5143e5e68ab58ba1df81ae2f81984"} Feb 17 15:47:08.434686 master-0 kubenswrapper[26425]: I0217 15:47:08.426789 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-kr2xk" podStartSLOduration=4.426766679 podStartE2EDuration="4.426766679s" podCreationTimestamp="2026-02-17 15:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:08.415720424 +0000 UTC m=+1890.307444272" watchObservedRunningTime="2026-02-17 15:47:08.426766679 +0000 UTC m=+1890.318490497" Feb 17 15:47:08.567797 master-0 kubenswrapper[26425]: I0217 15:47:08.567587 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7b9c2-default-external-api-0"] Feb 17 15:47:08.571565 master-0 kubenswrapper[26425]: I0217 15:47:08.571494 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.590184 master-0 kubenswrapper[26425]: I0217 15:47:08.578884 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 17 15:47:08.590184 master-0 kubenswrapper[26425]: I0217 15:47:08.586050 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-7b9c2-default-external-config-data" Feb 17 15:47:08.590184 master-0 kubenswrapper[26425]: I0217 15:47:08.586369 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 15:47:08.639672 master-0 kubenswrapper[26425]: I0217 15:47:08.639622 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7b9c2-default-external-api-0"] Feb 17 15:47:08.722882 master-0 kubenswrapper[26425]: I0217 15:47:08.722826 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-public-tls-certs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.723406 master-0 kubenswrapper[26425]: I0217 15:47:08.722934 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-combined-ca-bundle\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.723406 master-0 kubenswrapper[26425]: I0217 15:47:08.722997 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-config-data\") pod 
\"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.723406 master-0 kubenswrapper[26425]: I0217 15:47:08.723022 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6aa28629-c245-4065-bd98-c76f7a98206c-httpd-run\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.723406 master-0 kubenswrapper[26425]: I0217 15:47:08.723064 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.723406 master-0 kubenswrapper[26425]: I0217 15:47:08.723105 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-scripts\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.723406 master-0 kubenswrapper[26425]: I0217 15:47:08.723158 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kb2b\" (UniqueName: \"kubernetes.io/projected/6aa28629-c245-4065-bd98-c76f7a98206c-kube-api-access-6kb2b\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.723406 master-0 kubenswrapper[26425]: I0217 15:47:08.723251 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa28629-c245-4065-bd98-c76f7a98206c-logs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.824203 master-0 kubenswrapper[26425]: I0217 15:47:08.824096 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7b9c2-default-external-api-0"] Feb 17 15:47:08.824799 master-0 kubenswrapper[26425]: I0217 15:47:08.824756 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-combined-ca-bundle\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.824882 master-0 kubenswrapper[26425]: I0217 15:47:08.824836 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-config-data\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.825622 master-0 kubenswrapper[26425]: I0217 15:47:08.825475 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6aa28629-c245-4065-bd98-c76f7a98206c-httpd-run\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.825875 master-0 kubenswrapper[26425]: E0217 15:47:08.825814 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data glance httpd-run kube-api-access-6kb2b logs public-tls-certs scripts], unattached 
volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-7b9c2-default-external-api-0" podUID="6aa28629-c245-4065-bd98-c76f7a98206c" Feb 17 15:47:08.827173 master-0 kubenswrapper[26425]: I0217 15:47:08.826035 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.827173 master-0 kubenswrapper[26425]: I0217 15:47:08.826151 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-scripts\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.827173 master-0 kubenswrapper[26425]: I0217 15:47:08.826262 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kb2b\" (UniqueName: \"kubernetes.io/projected/6aa28629-c245-4065-bd98-c76f7a98206c-kube-api-access-6kb2b\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.827173 master-0 kubenswrapper[26425]: I0217 15:47:08.826494 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa28629-c245-4065-bd98-c76f7a98206c-logs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.827173 master-0 kubenswrapper[26425]: I0217 15:47:08.826551 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-public-tls-certs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.827173 master-0 kubenswrapper[26425]: I0217 15:47:08.826654 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6aa28629-c245-4065-bd98-c76f7a98206c-httpd-run\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.827687 master-0 kubenswrapper[26425]: I0217 15:47:08.827642 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa28629-c245-4065-bd98-c76f7a98206c-logs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.830315 master-0 kubenswrapper[26425]: I0217 15:47:08.829639 26425 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 15:47:08.830315 master-0 kubenswrapper[26425]: I0217 15:47:08.829704 26425 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/bb1a31da58028daaa8c5693dab9c5e672404499c19a6cf0daa664dd723747ec1/globalmount\"" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.842645 master-0 kubenswrapper[26425]: I0217 15:47:08.832330 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-config-data\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.842645 master-0 kubenswrapper[26425]: I0217 15:47:08.838510 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-scripts\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.842645 master-0 kubenswrapper[26425]: I0217 15:47:08.838761 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-combined-ca-bundle\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:08.843647 master-0 kubenswrapper[26425]: I0217 15:47:08.843587 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-public-tls-certs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:09.030054 master-0 kubenswrapper[26425]: I0217 15:47:09.030005 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kb2b\" (UniqueName: \"kubernetes.io/projected/6aa28629-c245-4065-bd98-c76f7a98206c-kube-api-access-6kb2b\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:09.295248 master-0 kubenswrapper[26425]: I0217 15:47:09.293445 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7b9c2-default-internal-api-0"] Feb 17 15:47:09.295510 master-0 kubenswrapper[26425]: I0217 15:47:09.295494 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.302530 master-0 kubenswrapper[26425]: I0217 15:47:09.298528 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 15:47:09.302530 master-0 kubenswrapper[26425]: I0217 15:47:09.298777 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-7b9c2-default-internal-config-data" Feb 17 15:47:09.324043 master-0 kubenswrapper[26425]: I0217 15:47:09.323912 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7b9c2-default-internal-api-0"] Feb 17 15:47:09.353482 master-0 kubenswrapper[26425]: I0217 15:47:09.352586 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-hgvqn" Feb 17 15:47:09.418475 master-0 kubenswrapper[26425]: I0217 15:47:09.414128 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-hgvqn" event={"ID":"ad487fea-08d0-4fe4-98bc-39c6634cae41","Type":"ContainerDied","Data":"d631e355b6d068844d0c2f0c90395037660b03b130dd81b67fb889d45532b677"} Feb 17 15:47:09.418475 master-0 kubenswrapper[26425]: I0217 15:47:09.414181 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d631e355b6d068844d0c2f0c90395037660b03b130dd81b67fb889d45532b677" Feb 17 15:47:09.418475 master-0 kubenswrapper[26425]: I0217 15:47:09.414294 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-hgvqn" Feb 17 15:47:09.452755 master-0 kubenswrapper[26425]: I0217 15:47:09.452712 26425 generic.go:334] "Generic (PLEG): container finished" podID="93130d0e-e444-4ec9-b294-aa8240b342ee" containerID="072084d6ecc1d08911ee719b7458ebcc2cf4d9a04ebf48cc0b25ed09f86e9be8" exitCode=0 Feb 17 15:47:09.453075 master-0 kubenswrapper[26425]: I0217 15:47:09.453061 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:09.453380 master-0 kubenswrapper[26425]: I0217 15:47:09.453330 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" event={"ID":"93130d0e-e444-4ec9-b294-aa8240b342ee","Type":"ContainerDied","Data":"072084d6ecc1d08911ee719b7458ebcc2cf4d9a04ebf48cc0b25ed09f86e9be8"} Feb 17 15:47:09.485129 master-0 kubenswrapper[26425]: I0217 15:47:09.484570 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad487fea-08d0-4fe4-98bc-39c6634cae41-operator-scripts\") pod \"ad487fea-08d0-4fe4-98bc-39c6634cae41\" (UID: \"ad487fea-08d0-4fe4-98bc-39c6634cae41\") " Feb 17 15:47:09.485129 master-0 kubenswrapper[26425]: I0217 15:47:09.484641 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z95fh\" (UniqueName: \"kubernetes.io/projected/ad487fea-08d0-4fe4-98bc-39c6634cae41-kube-api-access-z95fh\") pod \"ad487fea-08d0-4fe4-98bc-39c6634cae41\" (UID: \"ad487fea-08d0-4fe4-98bc-39c6634cae41\") " Feb 17 15:47:09.485402 master-0 kubenswrapper[26425]: I0217 15:47:09.485364 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad487fea-08d0-4fe4-98bc-39c6634cae41-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad487fea-08d0-4fe4-98bc-39c6634cae41" (UID: "ad487fea-08d0-4fe4-98bc-39c6634cae41"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:09.486861 master-0 kubenswrapper[26425]: I0217 15:47:09.486809 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a034608b-53d3-45d8-84b2-146bea988703\" (UniqueName: \"kubernetes.io/csi/topolvm.io^12c77599-e1b5-4cea-b05c-d638c506cfae\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.487002 master-0 kubenswrapper[26425]: I0217 15:47:09.486934 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxw7t\" (UniqueName: \"kubernetes.io/projected/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-kube-api-access-sxw7t\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.487002 master-0 kubenswrapper[26425]: I0217 15:47:09.486968 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-config-data\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.487105 master-0 kubenswrapper[26425]: I0217 15:47:09.487044 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-scripts\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.487141 master-0 kubenswrapper[26425]: I0217 15:47:09.487121 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-httpd-run\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.487175 master-0 kubenswrapper[26425]: I0217 15:47:09.487158 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-logs\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.487279 master-0 kubenswrapper[26425]: I0217 15:47:09.487205 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-internal-tls-certs\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.487279 master-0 kubenswrapper[26425]: I0217 15:47:09.487240 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-combined-ca-bundle\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.492008 master-0 kubenswrapper[26425]: I0217 15:47:09.487499 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad487fea-08d0-4fe4-98bc-39c6634cae41-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:09.492008 master-0 kubenswrapper[26425]: I0217 15:47:09.491921 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ad487fea-08d0-4fe4-98bc-39c6634cae41-kube-api-access-z95fh" (OuterVolumeSpecName: "kube-api-access-z95fh") pod "ad487fea-08d0-4fe4-98bc-39c6634cae41" (UID: "ad487fea-08d0-4fe4-98bc-39c6634cae41"). InnerVolumeSpecName "kube-api-access-z95fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:09.590480 master-0 kubenswrapper[26425]: I0217 15:47:09.589034 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxw7t\" (UniqueName: \"kubernetes.io/projected/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-kube-api-access-sxw7t\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.590480 master-0 kubenswrapper[26425]: I0217 15:47:09.589323 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-config-data\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.590480 master-0 kubenswrapper[26425]: I0217 15:47:09.589473 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-scripts\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.590480 master-0 kubenswrapper[26425]: I0217 15:47:09.589594 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-httpd-run\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.590480 
master-0 kubenswrapper[26425]: I0217 15:47:09.589612 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-logs\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.590480 master-0 kubenswrapper[26425]: I0217 15:47:09.589639 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-internal-tls-certs\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.590480 master-0 kubenswrapper[26425]: I0217 15:47:09.589658 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-combined-ca-bundle\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.590480 master-0 kubenswrapper[26425]: I0217 15:47:09.589773 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a034608b-53d3-45d8-84b2-146bea988703\" (UniqueName: \"kubernetes.io/csi/topolvm.io^12c77599-e1b5-4cea-b05c-d638c506cfae\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.590480 master-0 kubenswrapper[26425]: I0217 15:47:09.589859 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z95fh\" (UniqueName: \"kubernetes.io/projected/ad487fea-08d0-4fe4-98bc-39c6634cae41-kube-api-access-z95fh\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:09.599486 master-0 
kubenswrapper[26425]: I0217 15:47:09.591934 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-logs\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.599486 master-0 kubenswrapper[26425]: I0217 15:47:09.595668 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-httpd-run\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.599486 master-0 kubenswrapper[26425]: I0217 15:47:09.596225 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-scripts\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.603171 master-0 kubenswrapper[26425]: I0217 15:47:09.600921 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-config-data\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.603171 master-0 kubenswrapper[26425]: I0217 15:47:09.601718 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-internal-tls-certs\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.604151 master-0 
kubenswrapper[26425]: I0217 15:47:09.604127 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-combined-ca-bundle\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.612805 master-0 kubenswrapper[26425]: I0217 15:47:09.612766 26425 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 15:47:09.612889 master-0 kubenswrapper[26425]: I0217 15:47:09.612813 26425 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a034608b-53d3-45d8-84b2-146bea988703\" (UniqueName: \"kubernetes.io/csi/topolvm.io^12c77599-e1b5-4cea-b05c-d638c506cfae\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/0067ec78290aaf5ed99b46ed47c7cab15903d0f50e4c317ca4663ebd33bb5b9a/globalmount\"" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.613842 master-0 kubenswrapper[26425]: I0217 15:47:09.613806 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:09.615399 master-0 kubenswrapper[26425]: I0217 15:47:09.615357 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:09.628471 master-0 kubenswrapper[26425]: I0217 15:47:09.628405 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxw7t\" (UniqueName: \"kubernetes.io/projected/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-kube-api-access-sxw7t\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:09.811746 master-0 kubenswrapper[26425]: I0217 15:47:09.810568 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-ovsdbserver-nb\") pod \"090d05ed-b86b-4aba-bbe6-71eb213db07a\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " Feb 17 15:47:09.811746 master-0 kubenswrapper[26425]: I0217 15:47:09.810689 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-config-data\") pod \"6aa28629-c245-4065-bd98-c76f7a98206c\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " Feb 17 15:47:09.811746 master-0 kubenswrapper[26425]: I0217 15:47:09.810808 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-public-tls-certs\") pod \"6aa28629-c245-4065-bd98-c76f7a98206c\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " Feb 17 15:47:09.811746 master-0 kubenswrapper[26425]: I0217 15:47:09.810837 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-dns-svc\") pod \"090d05ed-b86b-4aba-bbe6-71eb213db07a\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " Feb 17 15:47:09.811746 master-0 
kubenswrapper[26425]: I0217 15:47:09.810883 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-ovsdbserver-sb\") pod \"090d05ed-b86b-4aba-bbe6-71eb213db07a\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " Feb 17 15:47:09.811746 master-0 kubenswrapper[26425]: I0217 15:47:09.810908 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-scripts\") pod \"6aa28629-c245-4065-bd98-c76f7a98206c\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " Feb 17 15:47:09.811746 master-0 kubenswrapper[26425]: I0217 15:47:09.810933 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa28629-c245-4065-bd98-c76f7a98206c-logs\") pod \"6aa28629-c245-4065-bd98-c76f7a98206c\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " Feb 17 15:47:09.811746 master-0 kubenswrapper[26425]: I0217 15:47:09.810963 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-config\") pod \"090d05ed-b86b-4aba-bbe6-71eb213db07a\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " Feb 17 15:47:09.811746 master-0 kubenswrapper[26425]: I0217 15:47:09.811021 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kb2b\" (UniqueName: \"kubernetes.io/projected/6aa28629-c245-4065-bd98-c76f7a98206c-kube-api-access-6kb2b\") pod \"6aa28629-c245-4065-bd98-c76f7a98206c\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " Feb 17 15:47:09.811746 master-0 kubenswrapper[26425]: I0217 15:47:09.811056 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-dns-swift-storage-0\") pod \"090d05ed-b86b-4aba-bbe6-71eb213db07a\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " Feb 17 15:47:09.811746 master-0 kubenswrapper[26425]: I0217 15:47:09.811109 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g56jv\" (UniqueName: \"kubernetes.io/projected/090d05ed-b86b-4aba-bbe6-71eb213db07a-kube-api-access-g56jv\") pod \"090d05ed-b86b-4aba-bbe6-71eb213db07a\" (UID: \"090d05ed-b86b-4aba-bbe6-71eb213db07a\") " Feb 17 15:47:09.811746 master-0 kubenswrapper[26425]: I0217 15:47:09.811138 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6aa28629-c245-4065-bd98-c76f7a98206c-httpd-run\") pod \"6aa28629-c245-4065-bd98-c76f7a98206c\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " Feb 17 15:47:09.811746 master-0 kubenswrapper[26425]: I0217 15:47:09.811190 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-combined-ca-bundle\") pod \"6aa28629-c245-4065-bd98-c76f7a98206c\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " Feb 17 15:47:09.851589 master-0 kubenswrapper[26425]: I0217 15:47:09.818387 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aa28629-c245-4065-bd98-c76f7a98206c-logs" (OuterVolumeSpecName: "logs") pod "6aa28629-c245-4065-bd98-c76f7a98206c" (UID: "6aa28629-c245-4065-bd98-c76f7a98206c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:47:09.851589 master-0 kubenswrapper[26425]: I0217 15:47:09.818845 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa28629-c245-4065-bd98-c76f7a98206c-kube-api-access-6kb2b" (OuterVolumeSpecName: "kube-api-access-6kb2b") pod "6aa28629-c245-4065-bd98-c76f7a98206c" (UID: "6aa28629-c245-4065-bd98-c76f7a98206c"). InnerVolumeSpecName "kube-api-access-6kb2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:09.851589 master-0 kubenswrapper[26425]: I0217 15:47:09.819569 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aa28629-c245-4065-bd98-c76f7a98206c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6aa28629-c245-4065-bd98-c76f7a98206c" (UID: "6aa28629-c245-4065-bd98-c76f7a98206c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:47:09.851589 master-0 kubenswrapper[26425]: I0217 15:47:09.820802 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6aa28629-c245-4065-bd98-c76f7a98206c" (UID: "6aa28629-c245-4065-bd98-c76f7a98206c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:09.851589 master-0 kubenswrapper[26425]: I0217 15:47:09.824259 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-config-data" (OuterVolumeSpecName: "config-data") pod "6aa28629-c245-4065-bd98-c76f7a98206c" (UID: "6aa28629-c245-4065-bd98-c76f7a98206c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:09.851589 master-0 kubenswrapper[26425]: I0217 15:47:09.827882 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/090d05ed-b86b-4aba-bbe6-71eb213db07a-kube-api-access-g56jv" (OuterVolumeSpecName: "kube-api-access-g56jv") pod "090d05ed-b86b-4aba-bbe6-71eb213db07a" (UID: "090d05ed-b86b-4aba-bbe6-71eb213db07a"). InnerVolumeSpecName "kube-api-access-g56jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:09.851589 master-0 kubenswrapper[26425]: I0217 15:47:09.829898 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-scripts" (OuterVolumeSpecName: "scripts") pod "6aa28629-c245-4065-bd98-c76f7a98206c" (UID: "6aa28629-c245-4065-bd98-c76f7a98206c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:09.851589 master-0 kubenswrapper[26425]: I0217 15:47:09.834763 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6aa28629-c245-4065-bd98-c76f7a98206c" (UID: "6aa28629-c245-4065-bd98-c76f7a98206c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:09.902139 master-0 kubenswrapper[26425]: I0217 15:47:09.902078 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-config" (OuterVolumeSpecName: "config") pod "090d05ed-b86b-4aba-bbe6-71eb213db07a" (UID: "090d05ed-b86b-4aba-bbe6-71eb213db07a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:09.903593 master-0 kubenswrapper[26425]: I0217 15:47:09.903510 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "090d05ed-b86b-4aba-bbe6-71eb213db07a" (UID: "090d05ed-b86b-4aba-bbe6-71eb213db07a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:09.911194 master-0 kubenswrapper[26425]: I0217 15:47:09.911138 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "090d05ed-b86b-4aba-bbe6-71eb213db07a" (UID: "090d05ed-b86b-4aba-bbe6-71eb213db07a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:09.916468 master-0 kubenswrapper[26425]: I0217 15:47:09.916416 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:09.916468 master-0 kubenswrapper[26425]: I0217 15:47:09.916449 26425 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:09.916468 master-0 kubenswrapper[26425]: I0217 15:47:09.916472 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:09.916675 master-0 kubenswrapper[26425]: I0217 15:47:09.916481 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:09.916675 master-0 kubenswrapper[26425]: I0217 15:47:09.916490 26425 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa28629-c245-4065-bd98-c76f7a98206c-logs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:09.916675 master-0 kubenswrapper[26425]: I0217 15:47:09.916505 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:09.916675 master-0 kubenswrapper[26425]: I0217 15:47:09.916529 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kb2b\" (UniqueName: \"kubernetes.io/projected/6aa28629-c245-4065-bd98-c76f7a98206c-kube-api-access-6kb2b\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:09.916675 master-0 kubenswrapper[26425]: I0217 15:47:09.916540 26425 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:09.916675 master-0 kubenswrapper[26425]: I0217 15:47:09.916548 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g56jv\" (UniqueName: \"kubernetes.io/projected/090d05ed-b86b-4aba-bbe6-71eb213db07a-kube-api-access-g56jv\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:09.916675 master-0 kubenswrapper[26425]: I0217 15:47:09.916557 26425 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6aa28629-c245-4065-bd98-c76f7a98206c-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:09.916675 master-0 kubenswrapper[26425]: I0217 15:47:09.916566 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6aa28629-c245-4065-bd98-c76f7a98206c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:09.939438 master-0 kubenswrapper[26425]: I0217 15:47:09.939375 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "090d05ed-b86b-4aba-bbe6-71eb213db07a" (UID: "090d05ed-b86b-4aba-bbe6-71eb213db07a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:09.960433 master-0 kubenswrapper[26425]: I0217 15:47:09.960364 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "090d05ed-b86b-4aba-bbe6-71eb213db07a" (UID: "090d05ed-b86b-4aba-bbe6-71eb213db07a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:10.020194 master-0 kubenswrapper[26425]: I0217 15:47:10.018496 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:10.020194 master-0 kubenswrapper[26425]: I0217 15:47:10.018542 26425 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090d05ed-b86b-4aba-bbe6-71eb213db07a-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:10.062337 master-0 kubenswrapper[26425]: I0217 15:47:10.062289 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-874a-account-create-update-lhwlv" Feb 17 15:47:10.222009 master-0 kubenswrapper[26425]: I0217 15:47:10.221431 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a20a01dc-3034-43a8-ad78-2c3b1497c20a-operator-scripts\") pod \"a20a01dc-3034-43a8-ad78-2c3b1497c20a\" (UID: \"a20a01dc-3034-43a8-ad78-2c3b1497c20a\") " Feb 17 15:47:10.222009 master-0 kubenswrapper[26425]: I0217 15:47:10.221640 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6g6q\" (UniqueName: \"kubernetes.io/projected/a20a01dc-3034-43a8-ad78-2c3b1497c20a-kube-api-access-g6g6q\") pod \"a20a01dc-3034-43a8-ad78-2c3b1497c20a\" (UID: \"a20a01dc-3034-43a8-ad78-2c3b1497c20a\") " Feb 17 15:47:10.222009 master-0 kubenswrapper[26425]: I0217 15:47:10.221966 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a20a01dc-3034-43a8-ad78-2c3b1497c20a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a20a01dc-3034-43a8-ad78-2c3b1497c20a" (UID: "a20a01dc-3034-43a8-ad78-2c3b1497c20a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:10.222367 master-0 kubenswrapper[26425]: I0217 15:47:10.222343 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a20a01dc-3034-43a8-ad78-2c3b1497c20a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:10.225497 master-0 kubenswrapper[26425]: I0217 15:47:10.225438 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a20a01dc-3034-43a8-ad78-2c3b1497c20a-kube-api-access-g6g6q" (OuterVolumeSpecName: "kube-api-access-g6g6q") pod "a20a01dc-3034-43a8-ad78-2c3b1497c20a" (UID: "a20a01dc-3034-43a8-ad78-2c3b1497c20a"). 
InnerVolumeSpecName "kube-api-access-g6g6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:10.324659 master-0 kubenswrapper[26425]: I0217 15:47:10.324608 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6g6q\" (UniqueName: \"kubernetes.io/projected/a20a01dc-3034-43a8-ad78-2c3b1497c20a-kube-api-access-g6g6q\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:10.464339 master-0 kubenswrapper[26425]: I0217 15:47:10.464294 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:10.490369 master-0 kubenswrapper[26425]: I0217 15:47:10.490304 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" event={"ID":"93130d0e-e444-4ec9-b294-aa8240b342ee","Type":"ContainerStarted","Data":"039fd7af897a917c528d83ddd8e32408542dc8df7cbac2278c8c9c283e12079c"} Feb 17 15:47:10.490569 master-0 kubenswrapper[26425]: I0217 15:47:10.490435 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:10.492477 master-0 kubenswrapper[26425]: I0217 15:47:10.492426 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-874a-account-create-update-lhwlv" event={"ID":"a20a01dc-3034-43a8-ad78-2c3b1497c20a","Type":"ContainerDied","Data":"1034b8a37253e0dc35f673b305e1445cfee5143e5e68ab58ba1df81ae2f81984"} Feb 17 15:47:10.492546 master-0 kubenswrapper[26425]: I0217 15:47:10.492478 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1034b8a37253e0dc35f673b305e1445cfee5143e5e68ab58ba1df81ae2f81984" Feb 17 15:47:10.492546 master-0 kubenswrapper[26425]: I0217 15:47:10.492535 26425 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-874a-account-create-update-lhwlv" Feb 17 15:47:10.498701 master-0 kubenswrapper[26425]: I0217 15:47:10.497835 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:10.498701 master-0 kubenswrapper[26425]: I0217 15:47:10.498384 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" Feb 17 15:47:10.499528 master-0 kubenswrapper[26425]: I0217 15:47:10.498392 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68b4779d45-4ql8j" event={"ID":"090d05ed-b86b-4aba-bbe6-71eb213db07a","Type":"ContainerDied","Data":"044500b104e210766725fadd0c8d499ae2feca8a6d720e8bd71ee95b7432a49b"} Feb 17 15:47:10.499528 master-0 kubenswrapper[26425]: I0217 15:47:10.499000 26425 scope.go:117] "RemoveContainer" containerID="2d19d4d8a7c824ac1007d2081da170479f1466f0f1c68a1d2167d4e7b924910d" Feb 17 15:47:10.521619 master-0 kubenswrapper[26425]: I0217 15:47:10.520690 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" podStartSLOduration=4.520663223 podStartE2EDuration="4.520663223s" podCreationTimestamp="2026-02-17 15:47:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:10.513968423 +0000 UTC m=+1892.405692241" watchObservedRunningTime="2026-02-17 15:47:10.520663223 +0000 UTC m=+1892.412387051" Feb 17 15:47:10.602605 master-0 kubenswrapper[26425]: I0217 15:47:10.602525 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68b4779d45-4ql8j"] Feb 17 15:47:10.613526 master-0 kubenswrapper[26425]: I0217 15:47:10.613470 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68b4779d45-4ql8j"] 
Feb 17 15:47:10.631730 master-0 kubenswrapper[26425]: I0217 15:47:10.631677 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") pod \"6aa28629-c245-4065-bd98-c76f7a98206c\" (UID: \"6aa28629-c245-4065-bd98-c76f7a98206c\") " Feb 17 15:47:11.517097 master-0 kubenswrapper[26425]: I0217 15:47:11.517015 26425 generic.go:334] "Generic (PLEG): container finished" podID="b8ac08fd-9e1a-4c05-9293-7805453eb135" containerID="334d8a3f6f5614d74f75a560dbb15127a82e2ef6636e88347a93350957668438" exitCode=0 Feb 17 15:47:11.517097 master-0 kubenswrapper[26425]: I0217 15:47:11.517080 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7jqwh" event={"ID":"b8ac08fd-9e1a-4c05-9293-7805453eb135","Type":"ContainerDied","Data":"334d8a3f6f5614d74f75a560dbb15127a82e2ef6636e88347a93350957668438"} Feb 17 15:47:12.326750 master-0 kubenswrapper[26425]: I0217 15:47:12.326685 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a034608b-53d3-45d8-84b2-146bea988703\" (UniqueName: \"kubernetes.io/csi/topolvm.io^12c77599-e1b5-4cea-b05c-d638c506cfae\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:12.369374 master-0 kubenswrapper[26425]: I0217 15:47:12.369299 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:12.427818 master-0 kubenswrapper[26425]: I0217 15:47:12.427665 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="090d05ed-b86b-4aba-bbe6-71eb213db07a" path="/var/lib/kubelet/pods/090d05ed-b86b-4aba-bbe6-71eb213db07a/volumes" Feb 17 15:47:12.430673 master-0 kubenswrapper[26425]: I0217 15:47:12.430627 26425 trace.go:236] Trace[781442401]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (17-Feb-2026 15:47:10.008) (total time: 2421ms): Feb 17 15:47:12.430673 master-0 kubenswrapper[26425]: Trace[781442401]: [2.421612064s] [2.421612064s] END Feb 17 15:47:12.459254 master-0 kubenswrapper[26425]: I0217 15:47:12.459136 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c" (OuterVolumeSpecName: "glance") pod "6aa28629-c245-4065-bd98-c76f7a98206c" (UID: "6aa28629-c245-4065-bd98-c76f7a98206c"). InnerVolumeSpecName "pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 15:47:12.484620 master-0 kubenswrapper[26425]: I0217 15:47:12.484570 26425 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") on node \"master-0\" " Feb 17 15:47:12.518429 master-0 kubenswrapper[26425]: I0217 15:47:12.518375 26425 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 15:47:12.518907 master-0 kubenswrapper[26425]: I0217 15:47:12.518570 26425 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884" (UniqueName: "kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c") on node "master-0" Feb 17 15:47:12.585796 master-0 kubenswrapper[26425]: I0217 15:47:12.585728 26425 reconciler_common.go:293] "Volume detached for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:12.736993 master-0 kubenswrapper[26425]: I0217 15:47:12.733720 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7b9c2-default-external-api-0"] Feb 17 15:47:12.744071 master-0 kubenswrapper[26425]: I0217 15:47:12.743999 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7b9c2-default-external-api-0"] Feb 17 15:47:12.766223 master-0 kubenswrapper[26425]: I0217 15:47:12.766158 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7b9c2-default-external-api-0"] Feb 17 15:47:12.766795 master-0 kubenswrapper[26425]: E0217 15:47:12.766770 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20a01dc-3034-43a8-ad78-2c3b1497c20a" containerName="mariadb-account-create-update" Feb 17 15:47:12.766887 master-0 kubenswrapper[26425]: I0217 15:47:12.766797 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20a01dc-3034-43a8-ad78-2c3b1497c20a" containerName="mariadb-account-create-update" Feb 17 15:47:12.766887 master-0 kubenswrapper[26425]: E0217 15:47:12.766835 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090d05ed-b86b-4aba-bbe6-71eb213db07a" containerName="init" Feb 17 15:47:12.766887 master-0 kubenswrapper[26425]: I0217 15:47:12.766845 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="090d05ed-b86b-4aba-bbe6-71eb213db07a" containerName="init" 
Feb 17 15:47:12.766887 master-0 kubenswrapper[26425]: E0217 15:47:12.766862 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad487fea-08d0-4fe4-98bc-39c6634cae41" containerName="mariadb-database-create" Feb 17 15:47:12.766887 master-0 kubenswrapper[26425]: I0217 15:47:12.766869 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad487fea-08d0-4fe4-98bc-39c6634cae41" containerName="mariadb-database-create" Feb 17 15:47:12.767187 master-0 kubenswrapper[26425]: I0217 15:47:12.767162 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="a20a01dc-3034-43a8-ad78-2c3b1497c20a" containerName="mariadb-account-create-update" Feb 17 15:47:12.767187 master-0 kubenswrapper[26425]: I0217 15:47:12.767180 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad487fea-08d0-4fe4-98bc-39c6634cae41" containerName="mariadb-database-create" Feb 17 15:47:12.767310 master-0 kubenswrapper[26425]: I0217 15:47:12.767203 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="090d05ed-b86b-4aba-bbe6-71eb213db07a" containerName="init" Feb 17 15:47:12.774223 master-0 kubenswrapper[26425]: I0217 15:47:12.774160 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:12.776348 master-0 kubenswrapper[26425]: I0217 15:47:12.776291 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 15:47:12.776482 master-0 kubenswrapper[26425]: I0217 15:47:12.776438 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-7b9c2-default-external-config-data" Feb 17 15:47:12.777125 master-0 kubenswrapper[26425]: I0217 15:47:12.777083 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7b9c2-default-external-api-0"] Feb 17 15:47:12.894691 master-0 kubenswrapper[26425]: I0217 15:47:12.894021 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46e17198-94a2-469f-8d1c-34138a1e2420-logs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:12.894691 master-0 kubenswrapper[26425]: I0217 15:47:12.894172 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-public-tls-certs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:12.894691 master-0 kubenswrapper[26425]: I0217 15:47:12.894401 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-scripts\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:12.894691 master-0 kubenswrapper[26425]: I0217 15:47:12.894558 
26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:12.894691 master-0 kubenswrapper[26425]: I0217 15:47:12.894663 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-config-data\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:12.895051 master-0 kubenswrapper[26425]: I0217 15:47:12.894703 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46e17198-94a2-469f-8d1c-34138a1e2420-httpd-run\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:12.895051 master-0 kubenswrapper[26425]: I0217 15:47:12.894841 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svd7c\" (UniqueName: \"kubernetes.io/projected/46e17198-94a2-469f-8d1c-34138a1e2420-kube-api-access-svd7c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:12.895051 master-0 kubenswrapper[26425]: I0217 15:47:12.894920 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-combined-ca-bundle\") pod 
\"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:12.996770 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-config-data\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:12.996838 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46e17198-94a2-469f-8d1c-34138a1e2420-httpd-run\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:12.996885 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svd7c\" (UniqueName: \"kubernetes.io/projected/46e17198-94a2-469f-8d1c-34138a1e2420-kube-api-access-svd7c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:12.996910 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-combined-ca-bundle\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:12.996973 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/46e17198-94a2-469f-8d1c-34138a1e2420-logs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:12.997025 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-public-tls-certs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:12.997088 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-scripts\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:12.997122 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:12.998280 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46e17198-94a2-469f-8d1c-34138a1e2420-logs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:12.998294 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46e17198-94a2-469f-8d1c-34138a1e2420-httpd-run\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:13.000415 26425 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:13.000437 26425 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/bb1a31da58028daaa8c5693dab9c5e672404499c19a6cf0daa664dd723747ec1/globalmount\"" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:13.001062 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-public-tls-certs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:13.001598 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-combined-ca-bundle\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003046 master-0 kubenswrapper[26425]: I0217 15:47:13.002922 26425 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-config-data\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.003893 master-0 kubenswrapper[26425]: I0217 15:47:13.003118 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-scripts\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.112103 master-0 kubenswrapper[26425]: I0217 15:47:13.112046 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:13.143480 master-0 kubenswrapper[26425]: I0217 15:47:13.143070 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svd7c\" (UniqueName: \"kubernetes.io/projected/46e17198-94a2-469f-8d1c-34138a1e2420-kube-api-access-svd7c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:13.201485 master-0 kubenswrapper[26425]: I0217 15:47:13.200809 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdd94\" (UniqueName: \"kubernetes.io/projected/b8ac08fd-9e1a-4c05-9293-7805453eb135-kube-api-access-cdd94\") pod \"b8ac08fd-9e1a-4c05-9293-7805453eb135\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " Feb 17 15:47:13.201485 master-0 kubenswrapper[26425]: I0217 15:47:13.200968 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-fernet-keys\") pod \"b8ac08fd-9e1a-4c05-9293-7805453eb135\" (UID: 
\"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " Feb 17 15:47:13.201485 master-0 kubenswrapper[26425]: I0217 15:47:13.200997 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-config-data\") pod \"b8ac08fd-9e1a-4c05-9293-7805453eb135\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " Feb 17 15:47:13.201485 master-0 kubenswrapper[26425]: I0217 15:47:13.201028 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-combined-ca-bundle\") pod \"b8ac08fd-9e1a-4c05-9293-7805453eb135\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " Feb 17 15:47:13.201485 master-0 kubenswrapper[26425]: I0217 15:47:13.201227 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-scripts\") pod \"b8ac08fd-9e1a-4c05-9293-7805453eb135\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " Feb 17 15:47:13.201485 master-0 kubenswrapper[26425]: I0217 15:47:13.201277 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-credential-keys\") pod \"b8ac08fd-9e1a-4c05-9293-7805453eb135\" (UID: \"b8ac08fd-9e1a-4c05-9293-7805453eb135\") " Feb 17 15:47:13.209484 master-0 kubenswrapper[26425]: I0217 15:47:13.206210 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b8ac08fd-9e1a-4c05-9293-7805453eb135" (UID: "b8ac08fd-9e1a-4c05-9293-7805453eb135"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:13.209853 master-0 kubenswrapper[26425]: I0217 15:47:13.209742 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b8ac08fd-9e1a-4c05-9293-7805453eb135" (UID: "b8ac08fd-9e1a-4c05-9293-7805453eb135"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:13.231887 master-0 kubenswrapper[26425]: I0217 15:47:13.222128 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ac08fd-9e1a-4c05-9293-7805453eb135-kube-api-access-cdd94" (OuterVolumeSpecName: "kube-api-access-cdd94") pod "b8ac08fd-9e1a-4c05-9293-7805453eb135" (UID: "b8ac08fd-9e1a-4c05-9293-7805453eb135"). InnerVolumeSpecName "kube-api-access-cdd94". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:13.241333 master-0 kubenswrapper[26425]: I0217 15:47:13.239828 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-scripts" (OuterVolumeSpecName: "scripts") pod "b8ac08fd-9e1a-4c05-9293-7805453eb135" (UID: "b8ac08fd-9e1a-4c05-9293-7805453eb135"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:13.250718 master-0 kubenswrapper[26425]: I0217 15:47:13.250666 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8ac08fd-9e1a-4c05-9293-7805453eb135" (UID: "b8ac08fd-9e1a-4c05-9293-7805453eb135"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:13.268055 master-0 kubenswrapper[26425]: I0217 15:47:13.267887 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-config-data" (OuterVolumeSpecName: "config-data") pod "b8ac08fd-9e1a-4c05-9293-7805453eb135" (UID: "b8ac08fd-9e1a-4c05-9293-7805453eb135"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:13.290198 master-0 kubenswrapper[26425]: I0217 15:47:13.290108 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7b9c2-default-internal-api-0"] Feb 17 15:47:13.299970 master-0 kubenswrapper[26425]: W0217 15:47:13.299926 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba50f35d_07b5_4db9_bc46_3ffeb03f3902.slice/crio-e3452447bdf57daf70816359d34b56fa1f99da267f5cdb153dd0b14114deaf8b WatchSource:0}: Error finding container e3452447bdf57daf70816359d34b56fa1f99da267f5cdb153dd0b14114deaf8b: Status 404 returned error can't find the container with id e3452447bdf57daf70816359d34b56fa1f99da267f5cdb153dd0b14114deaf8b Feb 17 15:47:13.306714 master-0 kubenswrapper[26425]: I0217 15:47:13.306025 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:13.306714 master-0 kubenswrapper[26425]: I0217 15:47:13.306132 26425 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:13.306714 master-0 kubenswrapper[26425]: I0217 15:47:13.306159 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdd94\" (UniqueName: 
\"kubernetes.io/projected/b8ac08fd-9e1a-4c05-9293-7805453eb135-kube-api-access-cdd94\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:13.306714 master-0 kubenswrapper[26425]: I0217 15:47:13.306171 26425 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:13.306714 master-0 kubenswrapper[26425]: I0217 15:47:13.306182 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:13.306714 master-0 kubenswrapper[26425]: I0217 15:47:13.306195 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac08fd-9e1a-4c05-9293-7805453eb135-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:13.547592 master-0 kubenswrapper[26425]: I0217 15:47:13.547442 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7jqwh" event={"ID":"b8ac08fd-9e1a-4c05-9293-7805453eb135","Type":"ContainerDied","Data":"58736aacfe2d7f7797a5d49e7e562dfaa536bbe5bcb6b936d52fd48b1a9fbd9a"} Feb 17 15:47:13.547592 master-0 kubenswrapper[26425]: I0217 15:47:13.547556 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58736aacfe2d7f7797a5d49e7e562dfaa536bbe5bcb6b936d52fd48b1a9fbd9a" Feb 17 15:47:13.548191 master-0 kubenswrapper[26425]: I0217 15:47:13.547622 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7jqwh" Feb 17 15:47:13.557870 master-0 kubenswrapper[26425]: I0217 15:47:13.557798 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tgjmt" event={"ID":"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c","Type":"ContainerStarted","Data":"bf032fea4616276011ffea11f209a24cc83f37fe4050b2355b7b86308ef6a20a"} Feb 17 15:47:13.563858 master-0 kubenswrapper[26425]: I0217 15:47:13.563806 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-internal-api-0" event={"ID":"ba50f35d-07b5-4db9-bc46-3ffeb03f3902","Type":"ContainerStarted","Data":"e3452447bdf57daf70816359d34b56fa1f99da267f5cdb153dd0b14114deaf8b"} Feb 17 15:47:13.720613 master-0 kubenswrapper[26425]: I0217 15:47:13.714283 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tgjmt" podStartSLOduration=3.039463862 podStartE2EDuration="7.714266871s" podCreationTimestamp="2026-02-17 15:47:06 +0000 UTC" firstStartedPulling="2026-02-17 15:47:07.809793643 +0000 UTC m=+1889.701517461" lastFinishedPulling="2026-02-17 15:47:12.484596652 +0000 UTC m=+1894.376320470" observedRunningTime="2026-02-17 15:47:13.70586085 +0000 UTC m=+1895.597584678" watchObservedRunningTime="2026-02-17 15:47:13.714266871 +0000 UTC m=+1895.605990689" Feb 17 15:47:14.354691 master-0 kubenswrapper[26425]: I0217 15:47:14.354568 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:14.417472 master-0 kubenswrapper[26425]: I0217 15:47:14.417410 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa28629-c245-4065-bd98-c76f7a98206c" 
path="/var/lib/kubelet/pods/6aa28629-c245-4065-bd98-c76f7a98206c/volumes" Feb 17 15:47:14.583153 master-0 kubenswrapper[26425]: I0217 15:47:14.582225 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-internal-api-0" event={"ID":"ba50f35d-07b5-4db9-bc46-3ffeb03f3902","Type":"ContainerStarted","Data":"15801b47cfb7a0d53554af977658ddff8f9471db68d684526a5ea6cd4d82e176"} Feb 17 15:47:15.497984 master-0 kubenswrapper[26425]: I0217 15:47:15.497916 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:15.705644 master-0 kubenswrapper[26425]: I0217 15:47:15.696796 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7jqwh"] Feb 17 15:47:15.712956 master-0 kubenswrapper[26425]: I0217 15:47:15.711114 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7jqwh"] Feb 17 15:47:15.723702 master-0 kubenswrapper[26425]: I0217 15:47:15.722252 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lc5mm"] Feb 17 15:47:15.723702 master-0 kubenswrapper[26425]: E0217 15:47:15.722779 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ac08fd-9e1a-4c05-9293-7805453eb135" containerName="keystone-bootstrap" Feb 17 15:47:15.723702 master-0 kubenswrapper[26425]: I0217 15:47:15.722797 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ac08fd-9e1a-4c05-9293-7805453eb135" containerName="keystone-bootstrap" Feb 17 15:47:15.723702 master-0 kubenswrapper[26425]: I0217 15:47:15.723181 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ac08fd-9e1a-4c05-9293-7805453eb135" containerName="keystone-bootstrap" Feb 17 15:47:15.723995 master-0 kubenswrapper[26425]: I0217 15:47:15.723876 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.728263 master-0 kubenswrapper[26425]: I0217 15:47:15.725669 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 15:47:15.728263 master-0 kubenswrapper[26425]: I0217 15:47:15.726947 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 15:47:15.728263 master-0 kubenswrapper[26425]: I0217 15:47:15.727089 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 15:47:15.728263 master-0 kubenswrapper[26425]: I0217 15:47:15.727242 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 15:47:15.768577 master-0 kubenswrapper[26425]: I0217 15:47:15.768506 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lc5mm"] Feb 17 15:47:15.876273 master-0 kubenswrapper[26425]: I0217 15:47:15.876233 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppgwh\" (UniqueName: \"kubernetes.io/projected/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-kube-api-access-ppgwh\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.876535 master-0 kubenswrapper[26425]: I0217 15:47:15.876518 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-scripts\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.876880 master-0 kubenswrapper[26425]: I0217 15:47:15.876826 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-fernet-keys\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.877018 master-0 kubenswrapper[26425]: I0217 15:47:15.876973 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-combined-ca-bundle\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.877222 master-0 kubenswrapper[26425]: I0217 15:47:15.877196 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-config-data\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.877471 master-0 kubenswrapper[26425]: I0217 15:47:15.877434 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-credential-keys\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.979971 master-0 kubenswrapper[26425]: I0217 15:47:15.979851 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-credential-keys\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.979971 master-0 kubenswrapper[26425]: I0217 15:47:15.979927 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ppgwh\" (UniqueName: \"kubernetes.io/projected/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-kube-api-access-ppgwh\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.979971 master-0 kubenswrapper[26425]: I0217 15:47:15.979950 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-scripts\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.980210 master-0 kubenswrapper[26425]: I0217 15:47:15.980017 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-fernet-keys\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.980210 master-0 kubenswrapper[26425]: I0217 15:47:15.980041 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-combined-ca-bundle\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.980210 master-0 kubenswrapper[26425]: I0217 15:47:15.980091 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-config-data\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.984685 master-0 kubenswrapper[26425]: I0217 15:47:15.984646 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-combined-ca-bundle\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.984966 master-0 kubenswrapper[26425]: I0217 15:47:15.984921 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-credential-keys\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.986671 master-0 kubenswrapper[26425]: I0217 15:47:15.986649 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-config-data\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.990969 master-0 kubenswrapper[26425]: I0217 15:47:15.987783 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-8zl8z"] Feb 17 15:47:15.990969 master-0 kubenswrapper[26425]: I0217 15:47:15.989086 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-fernet-keys\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:15.990969 master-0 kubenswrapper[26425]: I0217 15:47:15.990160 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:15.994042 master-0 kubenswrapper[26425]: I0217 15:47:15.992885 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 17 15:47:15.994042 master-0 kubenswrapper[26425]: I0217 15:47:15.992953 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Feb 17 15:47:16.003279 master-0 kubenswrapper[26425]: I0217 15:47:16.003220 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppgwh\" (UniqueName: \"kubernetes.io/projected/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-kube-api-access-ppgwh\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:16.020727 master-0 kubenswrapper[26425]: I0217 15:47:16.020674 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-8zl8z"] Feb 17 15:47:16.021498 master-0 kubenswrapper[26425]: I0217 15:47:16.021480 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-scripts\") pod \"keystone-bootstrap-lc5mm\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:16.048607 master-0 kubenswrapper[26425]: I0217 15:47:16.048556 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:16.083204 master-0 kubenswrapper[26425]: I0217 15:47:16.083146 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/87f5e945-543a-4858-b5f8-7e33a1a22459-config-data-merged\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.083532 master-0 kubenswrapper[26425]: I0217 15:47:16.083498 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/87f5e945-543a-4858-b5f8-7e33a1a22459-etc-podinfo\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.083670 master-0 kubenswrapper[26425]: I0217 15:47:16.083655 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-scripts\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.083809 master-0 kubenswrapper[26425]: I0217 15:47:16.083792 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtkpf\" (UniqueName: \"kubernetes.io/projected/87f5e945-543a-4858-b5f8-7e33a1a22459-kube-api-access-vtkpf\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.083931 master-0 kubenswrapper[26425]: I0217 15:47:16.083918 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-config-data\") pod 
\"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.084237 master-0 kubenswrapper[26425]: I0217 15:47:16.084217 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-combined-ca-bundle\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.186879 master-0 kubenswrapper[26425]: I0217 15:47:16.186468 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-combined-ca-bundle\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.186879 master-0 kubenswrapper[26425]: I0217 15:47:16.186626 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/87f5e945-543a-4858-b5f8-7e33a1a22459-config-data-merged\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.186879 master-0 kubenswrapper[26425]: I0217 15:47:16.186656 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/87f5e945-543a-4858-b5f8-7e33a1a22459-etc-podinfo\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.186879 master-0 kubenswrapper[26425]: I0217 15:47:16.186686 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-scripts\") pod \"ironic-db-sync-8zl8z\" (UID: 
\"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.186879 master-0 kubenswrapper[26425]: I0217 15:47:16.186726 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtkpf\" (UniqueName: \"kubernetes.io/projected/87f5e945-543a-4858-b5f8-7e33a1a22459-kube-api-access-vtkpf\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.186879 master-0 kubenswrapper[26425]: I0217 15:47:16.186758 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-config-data\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.188563 master-0 kubenswrapper[26425]: I0217 15:47:16.188058 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/87f5e945-543a-4858-b5f8-7e33a1a22459-config-data-merged\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.189949 master-0 kubenswrapper[26425]: I0217 15:47:16.189910 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-combined-ca-bundle\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.190188 master-0 kubenswrapper[26425]: I0217 15:47:16.190090 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/87f5e945-543a-4858-b5f8-7e33a1a22459-etc-podinfo\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " 
pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.196590 master-0 kubenswrapper[26425]: I0217 15:47:16.190976 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-config-data\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.196590 master-0 kubenswrapper[26425]: I0217 15:47:16.191823 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-scripts\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.204859 master-0 kubenswrapper[26425]: I0217 15:47:16.204768 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtkpf\" (UniqueName: \"kubernetes.io/projected/87f5e945-543a-4858-b5f8-7e33a1a22459-kube-api-access-vtkpf\") pod \"ironic-db-sync-8zl8z\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.407775 master-0 kubenswrapper[26425]: I0217 15:47:16.407707 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:16.408790 master-0 kubenswrapper[26425]: I0217 15:47:16.408746 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ac08fd-9e1a-4c05-9293-7805453eb135" path="/var/lib/kubelet/pods/b8ac08fd-9e1a-4c05-9293-7805453eb135/volumes" Feb 17 15:47:16.633158 master-0 kubenswrapper[26425]: I0217 15:47:16.633098 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-internal-api-0" event={"ID":"ba50f35d-07b5-4db9-bc46-3ffeb03f3902","Type":"ContainerStarted","Data":"3b3ebec1c2e6e4204d4e1cecb8899d580c3baf1dd7f05ccef4f4a4a27dd8fd3d"} Feb 17 15:47:16.674619 master-0 kubenswrapper[26425]: I0217 15:47:16.674493 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7b9c2-default-internal-api-0" podStartSLOduration=7.674446901 podStartE2EDuration="7.674446901s" podCreationTimestamp="2026-02-17 15:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:16.659256536 +0000 UTC m=+1898.550980364" watchObservedRunningTime="2026-02-17 15:47:16.674446901 +0000 UTC m=+1898.566170739" Feb 17 15:47:17.343711 master-0 kubenswrapper[26425]: I0217 15:47:17.343642 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" Feb 17 15:47:17.446242 master-0 kubenswrapper[26425]: I0217 15:47:17.446140 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-676f54c559-bfcw7"] Feb 17 15:47:17.446931 master-0 kubenswrapper[26425]: I0217 15:47:17.446757 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-676f54c559-bfcw7" podUID="e6b74389-6837-4f8d-8bd0-874f966d48cc" containerName="dnsmasq-dns" containerID="cri-o://d7c5a0004f88dfefdf7a1cd8d432d61e23341a1ea2ba41fcbe5a0664a569bdeb" gracePeriod=10 
Feb 17 15:47:17.656488 master-0 kubenswrapper[26425]: I0217 15:47:17.656388 26425 generic.go:334] "Generic (PLEG): container finished" podID="6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c" containerID="bf032fea4616276011ffea11f209a24cc83f37fe4050b2355b7b86308ef6a20a" exitCode=0 Feb 17 15:47:17.656713 master-0 kubenswrapper[26425]: I0217 15:47:17.656495 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tgjmt" event={"ID":"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c","Type":"ContainerDied","Data":"bf032fea4616276011ffea11f209a24cc83f37fe4050b2355b7b86308ef6a20a"} Feb 17 15:47:17.659129 master-0 kubenswrapper[26425]: I0217 15:47:17.659075 26425 generic.go:334] "Generic (PLEG): container finished" podID="e6b74389-6837-4f8d-8bd0-874f966d48cc" containerID="d7c5a0004f88dfefdf7a1cd8d432d61e23341a1ea2ba41fcbe5a0664a569bdeb" exitCode=0 Feb 17 15:47:17.659220 master-0 kubenswrapper[26425]: I0217 15:47:17.659138 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676f54c559-bfcw7" event={"ID":"e6b74389-6837-4f8d-8bd0-874f966d48cc","Type":"ContainerDied","Data":"d7c5a0004f88dfefdf7a1cd8d432d61e23341a1ea2ba41fcbe5a0664a569bdeb"} Feb 17 15:47:17.861047 master-0 kubenswrapper[26425]: I0217 15:47:17.860946 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-676f54c559-bfcw7" podUID="e6b74389-6837-4f8d-8bd0-874f966d48cc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.208:5353: connect: connection refused" Feb 17 15:47:22.370487 master-0 kubenswrapper[26425]: I0217 15:47:22.370375 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:22.370487 master-0 kubenswrapper[26425]: I0217 15:47:22.370423 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:22.410194 master-0 kubenswrapper[26425]: I0217 
15:47:22.410125 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:22.413326 master-0 kubenswrapper[26425]: I0217 15:47:22.413274 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:22.719965 master-0 kubenswrapper[26425]: I0217 15:47:22.719839 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:22.720699 master-0 kubenswrapper[26425]: I0217 15:47:22.720649 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:22.861473 master-0 kubenswrapper[26425]: I0217 15:47:22.861395 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-676f54c559-bfcw7" podUID="e6b74389-6837-4f8d-8bd0-874f966d48cc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.208:5353: connect: connection refused" Feb 17 15:47:23.573909 master-0 kubenswrapper[26425]: I0217 15:47:23.573495 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:23.665672 master-0 kubenswrapper[26425]: I0217 15:47:23.665604 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-logs\") pod \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " Feb 17 15:47:23.665889 master-0 kubenswrapper[26425]: I0217 15:47:23.665693 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntfcg\" (UniqueName: \"kubernetes.io/projected/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-kube-api-access-ntfcg\") pod \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " Feb 17 15:47:23.665889 master-0 kubenswrapper[26425]: I0217 15:47:23.665813 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-config-data\") pod \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " Feb 17 15:47:23.666513 master-0 kubenswrapper[26425]: I0217 15:47:23.666026 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-combined-ca-bundle\") pod \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " Feb 17 15:47:23.666513 master-0 kubenswrapper[26425]: I0217 15:47:23.666038 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-logs" (OuterVolumeSpecName: "logs") pod "6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c" (UID: "6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:47:23.666513 master-0 kubenswrapper[26425]: I0217 15:47:23.666147 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-scripts\") pod \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\" (UID: \"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c\") " Feb 17 15:47:23.666850 master-0 kubenswrapper[26425]: I0217 15:47:23.666818 26425 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-logs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:23.670127 master-0 kubenswrapper[26425]: I0217 15:47:23.670021 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-scripts" (OuterVolumeSpecName: "scripts") pod "6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c" (UID: "6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:23.670946 master-0 kubenswrapper[26425]: I0217 15:47:23.670878 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-kube-api-access-ntfcg" (OuterVolumeSpecName: "kube-api-access-ntfcg") pod "6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c" (UID: "6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c"). InnerVolumeSpecName "kube-api-access-ntfcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:23.695338 master-0 kubenswrapper[26425]: I0217 15:47:23.695239 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c" (UID: "6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:23.719360 master-0 kubenswrapper[26425]: I0217 15:47:23.719206 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-config-data" (OuterVolumeSpecName: "config-data") pod "6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c" (UID: "6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:23.759265 master-0 kubenswrapper[26425]: I0217 15:47:23.743096 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tgjmt" event={"ID":"6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c","Type":"ContainerDied","Data":"4202859d502f0c80aa3da89f0df7f218754692be139c3d47ea0460e13c17414d"} Feb 17 15:47:23.759265 master-0 kubenswrapper[26425]: I0217 15:47:23.743147 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tgjmt" Feb 17 15:47:23.759265 master-0 kubenswrapper[26425]: I0217 15:47:23.743178 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4202859d502f0c80aa3da89f0df7f218754692be139c3d47ea0460e13c17414d" Feb 17 15:47:23.778604 master-0 kubenswrapper[26425]: I0217 15:47:23.778415 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntfcg\" (UniqueName: \"kubernetes.io/projected/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-kube-api-access-ntfcg\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:23.778604 master-0 kubenswrapper[26425]: I0217 15:47:23.778502 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:23.778604 master-0 kubenswrapper[26425]: I0217 15:47:23.778525 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:23.778604 master-0 kubenswrapper[26425]: I0217 15:47:23.778543 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:24.236393 master-0 kubenswrapper[26425]: I0217 15:47:24.236288 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7b9c2-default-external-api-0"] Feb 17 15:47:24.757788 master-0 kubenswrapper[26425]: I0217 15:47:24.757703 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:47:24.757788 master-0 kubenswrapper[26425]: I0217 15:47:24.757762 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:47:24.778471 master-0 kubenswrapper[26425]: I0217 15:47:24.778399 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:24.779905 master-0 kubenswrapper[26425]: I0217 15:47:24.779874 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:47:24.827480 master-0 kubenswrapper[26425]: W0217 15:47:24.827399 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46e17198_94a2_469f_8d1c_34138a1e2420.slice/crio-04d6e64ccbb04c93923a9a322da93072c46f1453e0f2ea638d620379af884415 WatchSource:0}: Error finding container 04d6e64ccbb04c93923a9a322da93072c46f1453e0f2ea638d620379af884415: Status 404 returned error can't find the container with id 04d6e64ccbb04c93923a9a322da93072c46f1453e0f2ea638d620379af884415 Feb 17 15:47:25.137037 master-0 kubenswrapper[26425]: I0217 15:47:25.133347 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-676f54c559-bfcw7" Feb 17 15:47:25.239153 master-0 kubenswrapper[26425]: I0217 15:47:25.232957 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5b57c6d9b6-frt4v"] Feb 17 15:47:25.239153 master-0 kubenswrapper[26425]: E0217 15:47:25.233358 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b74389-6837-4f8d-8bd0-874f966d48cc" containerName="dnsmasq-dns" Feb 17 15:47:25.239153 master-0 kubenswrapper[26425]: I0217 15:47:25.233371 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b74389-6837-4f8d-8bd0-874f966d48cc" containerName="dnsmasq-dns" Feb 17 15:47:25.239153 master-0 kubenswrapper[26425]: E0217 15:47:25.233411 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c" containerName="placement-db-sync" Feb 17 15:47:25.239153 master-0 kubenswrapper[26425]: I0217 15:47:25.233417 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c" containerName="placement-db-sync" Feb 17 15:47:25.239153 master-0 kubenswrapper[26425]: E0217 15:47:25.233467 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b74389-6837-4f8d-8bd0-874f966d48cc" containerName="init" Feb 17 15:47:25.239153 master-0 kubenswrapper[26425]: I0217 15:47:25.233474 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b74389-6837-4f8d-8bd0-874f966d48cc" containerName="init" Feb 17 15:47:25.239153 master-0 kubenswrapper[26425]: I0217 15:47:25.233864 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b74389-6837-4f8d-8bd0-874f966d48cc" containerName="dnsmasq-dns" Feb 17 15:47:25.239153 master-0 kubenswrapper[26425]: I0217 15:47:25.233909 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c" containerName="placement-db-sync" Feb 17 15:47:25.239153 master-0 kubenswrapper[26425]: I0217 15:47:25.235183 
26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.239153 master-0 kubenswrapper[26425]: I0217 15:47:25.238424 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 17 15:47:25.239153 master-0 kubenswrapper[26425]: I0217 15:47:25.238595 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 15:47:25.239153 master-0 kubenswrapper[26425]: I0217 15:47:25.238901 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 15:47:25.245155 master-0 kubenswrapper[26425]: I0217 15:47:25.240804 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rplz9\" (UniqueName: \"kubernetes.io/projected/e6b74389-6837-4f8d-8bd0-874f966d48cc-kube-api-access-rplz9\") pod \"e6b74389-6837-4f8d-8bd0-874f966d48cc\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " Feb 17 15:47:25.245155 master-0 kubenswrapper[26425]: I0217 15:47:25.240835 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-dns-svc\") pod \"e6b74389-6837-4f8d-8bd0-874f966d48cc\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " Feb 17 15:47:25.245155 master-0 kubenswrapper[26425]: I0217 15:47:25.240972 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-dns-swift-storage-0\") pod \"e6b74389-6837-4f8d-8bd0-874f966d48cc\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " Feb 17 15:47:25.245155 master-0 kubenswrapper[26425]: I0217 15:47:25.241054 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-ovsdbserver-sb\") pod \"e6b74389-6837-4f8d-8bd0-874f966d48cc\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " Feb 17 15:47:25.245155 master-0 kubenswrapper[26425]: I0217 15:47:25.241097 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-ovsdbserver-nb\") pod \"e6b74389-6837-4f8d-8bd0-874f966d48cc\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " Feb 17 15:47:25.245155 master-0 kubenswrapper[26425]: I0217 15:47:25.241119 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-config\") pod \"e6b74389-6837-4f8d-8bd0-874f966d48cc\" (UID: \"e6b74389-6837-4f8d-8bd0-874f966d48cc\") " Feb 17 15:47:25.245706 master-0 kubenswrapper[26425]: I0217 15:47:25.245273 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b57c6d9b6-frt4v"] Feb 17 15:47:25.248924 master-0 kubenswrapper[26425]: I0217 15:47:25.246832 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 17 15:47:25.262608 master-0 kubenswrapper[26425]: I0217 15:47:25.257410 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b74389-6837-4f8d-8bd0-874f966d48cc-kube-api-access-rplz9" (OuterVolumeSpecName: "kube-api-access-rplz9") pod "e6b74389-6837-4f8d-8bd0-874f966d48cc" (UID: "e6b74389-6837-4f8d-8bd0-874f966d48cc"). InnerVolumeSpecName "kube-api-access-rplz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:25.339753 master-0 kubenswrapper[26425]: I0217 15:47:25.339528 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e6b74389-6837-4f8d-8bd0-874f966d48cc" (UID: "e6b74389-6837-4f8d-8bd0-874f966d48cc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:25.339753 master-0 kubenswrapper[26425]: I0217 15:47:25.339558 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e6b74389-6837-4f8d-8bd0-874f966d48cc" (UID: "e6b74389-6837-4f8d-8bd0-874f966d48cc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:25.341484 master-0 kubenswrapper[26425]: I0217 15:47:25.341325 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-config" (OuterVolumeSpecName: "config") pod "e6b74389-6837-4f8d-8bd0-874f966d48cc" (UID: "e6b74389-6837-4f8d-8bd0-874f966d48cc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:25.343785 master-0 kubenswrapper[26425]: I0217 15:47:25.343379 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/595c3aef-36e6-4a07-ad78-32535353193d-logs\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.344175 master-0 kubenswrapper[26425]: I0217 15:47:25.344001 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-internal-tls-certs\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.344175 master-0 kubenswrapper[26425]: I0217 15:47:25.344039 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-combined-ca-bundle\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.344175 master-0 kubenswrapper[26425]: I0217 15:47:25.344092 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-public-tls-certs\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.344387 master-0 kubenswrapper[26425]: I0217 15:47:25.344333 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-scripts\") pod 
\"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.344585 master-0 kubenswrapper[26425]: I0217 15:47:25.344548 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-config-data\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.344659 master-0 kubenswrapper[26425]: I0217 15:47:25.344642 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7tgp\" (UniqueName: \"kubernetes.io/projected/595c3aef-36e6-4a07-ad78-32535353193d-kube-api-access-q7tgp\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.344850 master-0 kubenswrapper[26425]: I0217 15:47:25.344817 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:25.344850 master-0 kubenswrapper[26425]: I0217 15:47:25.344840 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:25.344962 master-0 kubenswrapper[26425]: I0217 15:47:25.344855 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rplz9\" (UniqueName: \"kubernetes.io/projected/e6b74389-6837-4f8d-8bd0-874f966d48cc-kube-api-access-rplz9\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:25.344962 master-0 kubenswrapper[26425]: I0217 15:47:25.344869 26425 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:25.384565 master-0 kubenswrapper[26425]: I0217 15:47:25.384210 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lc5mm"] Feb 17 15:47:25.397119 master-0 kubenswrapper[26425]: I0217 15:47:25.397062 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e6b74389-6837-4f8d-8bd0-874f966d48cc" (UID: "e6b74389-6837-4f8d-8bd0-874f966d48cc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:25.409161 master-0 kubenswrapper[26425]: W0217 15:47:25.409095 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd41ba4c9_1c82_4a6d_8593_1c6abfdd98e8.slice/crio-bfe8a9d8446c62269b3800c0abe879cda90e67cb2e146e258c2523590758797b WatchSource:0}: Error finding container bfe8a9d8446c62269b3800c0abe879cda90e67cb2e146e258c2523590758797b: Status 404 returned error can't find the container with id bfe8a9d8446c62269b3800c0abe879cda90e67cb2e146e258c2523590758797b Feb 17 15:47:25.417963 master-0 kubenswrapper[26425]: I0217 15:47:25.417898 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e6b74389-6837-4f8d-8bd0-874f966d48cc" (UID: "e6b74389-6837-4f8d-8bd0-874f966d48cc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:25.446194 master-0 kubenswrapper[26425]: I0217 15:47:25.446143 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/595c3aef-36e6-4a07-ad78-32535353193d-logs\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.446379 master-0 kubenswrapper[26425]: I0217 15:47:25.446249 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-internal-tls-certs\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.446379 master-0 kubenswrapper[26425]: I0217 15:47:25.446276 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-combined-ca-bundle\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.446379 master-0 kubenswrapper[26425]: I0217 15:47:25.446315 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-public-tls-certs\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.446379 master-0 kubenswrapper[26425]: I0217 15:47:25.446372 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-scripts\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " 
pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.446546 master-0 kubenswrapper[26425]: I0217 15:47:25.446426 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-config-data\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.446546 master-0 kubenswrapper[26425]: I0217 15:47:25.446476 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7tgp\" (UniqueName: \"kubernetes.io/projected/595c3aef-36e6-4a07-ad78-32535353193d-kube-api-access-q7tgp\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.446615 master-0 kubenswrapper[26425]: I0217 15:47:25.446560 26425 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:25.446615 master-0 kubenswrapper[26425]: I0217 15:47:25.446573 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6b74389-6837-4f8d-8bd0-874f966d48cc-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:25.446682 master-0 kubenswrapper[26425]: I0217 15:47:25.446607 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/595c3aef-36e6-4a07-ad78-32535353193d-logs\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.450451 master-0 kubenswrapper[26425]: I0217 15:47:25.449649 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-combined-ca-bundle\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.450451 master-0 kubenswrapper[26425]: I0217 15:47:25.449963 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-scripts\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.450451 master-0 kubenswrapper[26425]: I0217 15:47:25.450102 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-config-data\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.452509 master-0 kubenswrapper[26425]: I0217 15:47:25.452255 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-internal-tls-certs\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.454103 master-0 kubenswrapper[26425]: I0217 15:47:25.454059 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-public-tls-certs\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.466623 master-0 kubenswrapper[26425]: I0217 15:47:25.466574 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7tgp\" (UniqueName: 
\"kubernetes.io/projected/595c3aef-36e6-4a07-ad78-32535353193d-kube-api-access-q7tgp\") pod \"placement-5b57c6d9b6-frt4v\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") " pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.599113 master-0 kubenswrapper[26425]: I0217 15:47:25.599051 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-8zl8z"] Feb 17 15:47:25.609762 master-0 kubenswrapper[26425]: W0217 15:47:25.609699 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87f5e945_543a_4858_b5f8_7e33a1a22459.slice/crio-90d8738b2409ab6bad217db6066d6538d7f1a0eb069408c1ec66d86f3b3fc2b0 WatchSource:0}: Error finding container 90d8738b2409ab6bad217db6066d6538d7f1a0eb069408c1ec66d86f3b3fc2b0: Status 404 returned error can't find the container with id 90d8738b2409ab6bad217db6066d6538d7f1a0eb069408c1ec66d86f3b3fc2b0 Feb 17 15:47:25.613564 master-0 kubenswrapper[26425]: I0217 15:47:25.613518 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:25.776781 master-0 kubenswrapper[26425]: I0217 15:47:25.776501 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lc5mm" event={"ID":"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8","Type":"ContainerStarted","Data":"fd494964f17c0ce9f11b48b19939bd72bf4d96393dd2f5fec9ef7a6dec8aa69f"} Feb 17 15:47:25.776781 master-0 kubenswrapper[26425]: I0217 15:47:25.776563 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lc5mm" event={"ID":"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8","Type":"ContainerStarted","Data":"bfe8a9d8446c62269b3800c0abe879cda90e67cb2e146e258c2523590758797b"} Feb 17 15:47:25.779424 master-0 kubenswrapper[26425]: I0217 15:47:25.779372 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676f54c559-bfcw7" event={"ID":"e6b74389-6837-4f8d-8bd0-874f966d48cc","Type":"ContainerDied","Data":"f8b54589142a097bc70c55016e38d2e4c6486090469bd4fde912a11fec092ab3"} Feb 17 15:47:25.779424 master-0 kubenswrapper[26425]: I0217 15:47:25.779424 26425 scope.go:117] "RemoveContainer" containerID="d7c5a0004f88dfefdf7a1cd8d432d61e23341a1ea2ba41fcbe5a0664a569bdeb" Feb 17 15:47:25.779627 master-0 kubenswrapper[26425]: I0217 15:47:25.779575 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-676f54c559-bfcw7" Feb 17 15:47:25.799893 master-0 kubenswrapper[26425]: I0217 15:47:25.793235 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-external-api-0" event={"ID":"46e17198-94a2-469f-8d1c-34138a1e2420","Type":"ContainerStarted","Data":"f27f0c55b8344662ca7b8b23e847884c8141558a6fca1bc3149e931153c0e3fd"} Feb 17 15:47:25.799893 master-0 kubenswrapper[26425]: I0217 15:47:25.793289 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-external-api-0" event={"ID":"46e17198-94a2-469f-8d1c-34138a1e2420","Type":"ContainerStarted","Data":"04d6e64ccbb04c93923a9a322da93072c46f1453e0f2ea638d620379af884415"} Feb 17 15:47:25.813562 master-0 kubenswrapper[26425]: I0217 15:47:25.807599 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lc5mm" podStartSLOduration=10.807571517 podStartE2EDuration="10.807571517s" podCreationTimestamp="2026-02-17 15:47:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:25.801352749 +0000 UTC m=+1907.693076577" watchObservedRunningTime="2026-02-17 15:47:25.807571517 +0000 UTC m=+1907.699295335" Feb 17 15:47:25.821864 master-0 kubenswrapper[26425]: I0217 15:47:25.821752 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-8zl8z" event={"ID":"87f5e945-543a-4858-b5f8-7e33a1a22459","Type":"ContainerStarted","Data":"90d8738b2409ab6bad217db6066d6538d7f1a0eb069408c1ec66d86f3b3fc2b0"} Feb 17 15:47:25.863487 master-0 kubenswrapper[26425]: I0217 15:47:25.862250 26425 scope.go:117] "RemoveContainer" containerID="7cd0a0a009215c0c696273206b368632b3b50eaf1ea60119ece9b4c362a12348" Feb 17 15:47:26.016001 master-0 kubenswrapper[26425]: I0217 15:47:26.015927 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-676f54c559-bfcw7"] Feb 17 15:47:26.052696 master-0 kubenswrapper[26425]: I0217 15:47:26.052622 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-676f54c559-bfcw7"] Feb 17 15:47:26.128256 master-0 kubenswrapper[26425]: I0217 15:47:26.128174 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b57c6d9b6-frt4v"] Feb 17 15:47:26.144273 master-0 kubenswrapper[26425]: W0217 15:47:26.144210 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod595c3aef_36e6_4a07_ad78_32535353193d.slice/crio-136586aeefac49fc012fd9245bf9ef96c0bb04b87aa7754648c745beaf013e25 WatchSource:0}: Error finding container 136586aeefac49fc012fd9245bf9ef96c0bb04b87aa7754648c745beaf013e25: Status 404 returned error can't find the container with id 136586aeefac49fc012fd9245bf9ef96c0bb04b87aa7754648c745beaf013e25 Feb 17 15:47:26.409945 master-0 kubenswrapper[26425]: I0217 15:47:26.409901 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b74389-6837-4f8d-8bd0-874f966d48cc" path="/var/lib/kubelet/pods/e6b74389-6837-4f8d-8bd0-874f966d48cc/volumes" Feb 17 15:47:26.855262 master-0 kubenswrapper[26425]: I0217 15:47:26.855161 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b57c6d9b6-frt4v" event={"ID":"595c3aef-36e6-4a07-ad78-32535353193d","Type":"ContainerStarted","Data":"61340d49a27c2f36eecd9eae5e4989b978dd7bd2c87a59a336928f48c792664a"} Feb 17 15:47:26.855262 master-0 kubenswrapper[26425]: I0217 15:47:26.855256 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b57c6d9b6-frt4v" event={"ID":"595c3aef-36e6-4a07-ad78-32535353193d","Type":"ContainerStarted","Data":"2ca4e33b1d685eb098e1a7e190a5cd7938eca0fe6ca209cefd81e17ba07dae26"} Feb 17 15:47:26.855262 master-0 kubenswrapper[26425]: I0217 15:47:26.855274 26425 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/placement-5b57c6d9b6-frt4v" event={"ID":"595c3aef-36e6-4a07-ad78-32535353193d","Type":"ContainerStarted","Data":"136586aeefac49fc012fd9245bf9ef96c0bb04b87aa7754648c745beaf013e25"} Feb 17 15:47:26.856096 master-0 kubenswrapper[26425]: I0217 15:47:26.856056 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:26.856152 master-0 kubenswrapper[26425]: I0217 15:47:26.856119 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:26.861525 master-0 kubenswrapper[26425]: I0217 15:47:26.861448 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-db-sync-smx72" event={"ID":"92cdc0bf-17bd-4554-811c-89cf8bc1a52c","Type":"ContainerStarted","Data":"a15a54697c7f6c47d74e54fb83f72ae7373426b68f32d427761340ab4a7267a5"} Feb 17 15:47:26.868003 master-0 kubenswrapper[26425]: I0217 15:47:26.867961 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-external-api-0" event={"ID":"46e17198-94a2-469f-8d1c-34138a1e2420","Type":"ContainerStarted","Data":"f9143206c8ce19f8473ed081cb0ee830a0bc1a6768a30c3ffb31795ad772d91f"} Feb 17 15:47:27.873798 master-0 kubenswrapper[26425]: I0217 15:47:27.873654 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5b57c6d9b6-frt4v" podStartSLOduration=2.8736278950000003 podStartE2EDuration="2.873627895s" podCreationTimestamp="2026-02-17 15:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:27.860341106 +0000 UTC m=+1909.752064954" watchObservedRunningTime="2026-02-17 15:47:27.873627895 +0000 UTC m=+1909.765351903" Feb 17 15:47:28.161411 master-0 kubenswrapper[26425]: I0217 15:47:28.160837 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-04ef3-db-sync-smx72" podStartSLOduration=6.428243748 podStartE2EDuration="24.160814822s" podCreationTimestamp="2026-02-17 15:47:04 +0000 UTC" firstStartedPulling="2026-02-17 15:47:07.265961861 +0000 UTC m=+1889.157685679" lastFinishedPulling="2026-02-17 15:47:24.998532945 +0000 UTC m=+1906.890256753" observedRunningTime="2026-02-17 15:47:28.146935489 +0000 UTC m=+1910.038659307" watchObservedRunningTime="2026-02-17 15:47:28.160814822 +0000 UTC m=+1910.052538650" Feb 17 15:47:28.301913 master-0 kubenswrapper[26425]: I0217 15:47:28.301770 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7b9c2-default-external-api-0" podStartSLOduration=16.301736562 podStartE2EDuration="16.301736562s" podCreationTimestamp="2026-02-17 15:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:28.29957402 +0000 UTC m=+1910.191297858" watchObservedRunningTime="2026-02-17 15:47:28.301736562 +0000 UTC m=+1910.193460400" Feb 17 15:47:28.897727 master-0 kubenswrapper[26425]: I0217 15:47:28.897420 26425 generic.go:334] "Generic (PLEG): container finished" podID="d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8" containerID="fd494964f17c0ce9f11b48b19939bd72bf4d96393dd2f5fec9ef7a6dec8aa69f" exitCode=0 Feb 17 15:47:28.897727 master-0 kubenswrapper[26425]: I0217 15:47:28.897491 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lc5mm" event={"ID":"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8","Type":"ContainerDied","Data":"fd494964f17c0ce9f11b48b19939bd72bf4d96393dd2f5fec9ef7a6dec8aa69f"} Feb 17 15:47:33.959407 master-0 kubenswrapper[26425]: I0217 15:47:33.955522 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lc5mm" event={"ID":"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8","Type":"ContainerDied","Data":"bfe8a9d8446c62269b3800c0abe879cda90e67cb2e146e258c2523590758797b"} Feb 
17 15:47:33.959407 master-0 kubenswrapper[26425]: I0217 15:47:33.955620 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe8a9d8446c62269b3800c0abe879cda90e67cb2e146e258c2523590758797b" Feb 17 15:47:34.042422 master-0 kubenswrapper[26425]: I0217 15:47:34.042366 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:34.113982 master-0 kubenswrapper[26425]: I0217 15:47:34.113889 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-combined-ca-bundle\") pod \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " Feb 17 15:47:34.114347 master-0 kubenswrapper[26425]: I0217 15:47:34.114313 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-credential-keys\") pod \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " Feb 17 15:47:34.114940 master-0 kubenswrapper[26425]: I0217 15:47:34.114469 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-config-data\") pod \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " Feb 17 15:47:34.115215 master-0 kubenswrapper[26425]: I0217 15:47:34.115183 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-scripts\") pod \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " Feb 17 15:47:34.115497 master-0 kubenswrapper[26425]: I0217 15:47:34.115476 26425 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-fernet-keys\") pod \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " Feb 17 15:47:34.115665 master-0 kubenswrapper[26425]: I0217 15:47:34.115647 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppgwh\" (UniqueName: \"kubernetes.io/projected/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-kube-api-access-ppgwh\") pod \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\" (UID: \"d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8\") " Feb 17 15:47:34.120283 master-0 kubenswrapper[26425]: I0217 15:47:34.120225 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8" (UID: "d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:34.126054 master-0 kubenswrapper[26425]: I0217 15:47:34.126000 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8" (UID: "d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:34.126234 master-0 kubenswrapper[26425]: I0217 15:47:34.126049 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-kube-api-access-ppgwh" (OuterVolumeSpecName: "kube-api-access-ppgwh") pod "d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8" (UID: "d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8"). InnerVolumeSpecName "kube-api-access-ppgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:34.129122 master-0 kubenswrapper[26425]: I0217 15:47:34.129057 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-scripts" (OuterVolumeSpecName: "scripts") pod "d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8" (UID: "d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:34.160448 master-0 kubenswrapper[26425]: I0217 15:47:34.160294 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8" (UID: "d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:34.183656 master-0 kubenswrapper[26425]: I0217 15:47:34.183589 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-config-data" (OuterVolumeSpecName: "config-data") pod "d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8" (UID: "d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:34.218987 master-0 kubenswrapper[26425]: I0217 15:47:34.218909 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:34.218987 master-0 kubenswrapper[26425]: I0217 15:47:34.218968 26425 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:34.218987 master-0 kubenswrapper[26425]: I0217 15:47:34.218984 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppgwh\" (UniqueName: \"kubernetes.io/projected/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-kube-api-access-ppgwh\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:34.218987 master-0 kubenswrapper[26425]: I0217 15:47:34.218999 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:34.219316 master-0 kubenswrapper[26425]: I0217 15:47:34.219013 26425 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:34.219316 master-0 kubenswrapper[26425]: I0217 15:47:34.219027 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:34.979292 master-0 kubenswrapper[26425]: I0217 15:47:34.979229 26425 generic.go:334] "Generic (PLEG): container finished" podID="92cdc0bf-17bd-4554-811c-89cf8bc1a52c" 
containerID="a15a54697c7f6c47d74e54fb83f72ae7373426b68f32d427761340ab4a7267a5" exitCode=0 Feb 17 15:47:34.979845 master-0 kubenswrapper[26425]: I0217 15:47:34.979329 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-db-sync-smx72" event={"ID":"92cdc0bf-17bd-4554-811c-89cf8bc1a52c","Type":"ContainerDied","Data":"a15a54697c7f6c47d74e54fb83f72ae7373426b68f32d427761340ab4a7267a5"} Feb 17 15:47:34.979845 master-0 kubenswrapper[26425]: I0217 15:47:34.979371 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lc5mm" Feb 17 15:47:35.231013 master-0 kubenswrapper[26425]: I0217 15:47:35.229600 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7f77fccc4f-8svgt"] Feb 17 15:47:35.231013 master-0 kubenswrapper[26425]: E0217 15:47:35.230073 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8" containerName="keystone-bootstrap" Feb 17 15:47:35.231013 master-0 kubenswrapper[26425]: I0217 15:47:35.230086 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8" containerName="keystone-bootstrap" Feb 17 15:47:35.231013 master-0 kubenswrapper[26425]: I0217 15:47:35.230562 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8" containerName="keystone-bootstrap" Feb 17 15:47:35.231497 master-0 kubenswrapper[26425]: I0217 15:47:35.231406 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.233968 master-0 kubenswrapper[26425]: I0217 15:47:35.233933 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 17 15:47:35.234512 master-0 kubenswrapper[26425]: I0217 15:47:35.234436 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 15:47:35.234762 master-0 kubenswrapper[26425]: I0217 15:47:35.234748 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 15:47:35.235416 master-0 kubenswrapper[26425]: I0217 15:47:35.235369 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 17 15:47:35.235999 master-0 kubenswrapper[26425]: I0217 15:47:35.235985 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 15:47:35.261215 master-0 kubenswrapper[26425]: I0217 15:47:35.261169 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f77fccc4f-8svgt"] Feb 17 15:47:35.339174 master-0 kubenswrapper[26425]: I0217 15:47:35.339095 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8snm\" (UniqueName: \"kubernetes.io/projected/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-kube-api-access-j8snm\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.339401 master-0 kubenswrapper[26425]: I0217 15:47:35.339201 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-public-tls-certs\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 
15:47:35.339401 master-0 kubenswrapper[26425]: I0217 15:47:35.339291 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-combined-ca-bundle\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.339401 master-0 kubenswrapper[26425]: I0217 15:47:35.339343 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-internal-tls-certs\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.339566 master-0 kubenswrapper[26425]: I0217 15:47:35.339434 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-fernet-keys\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.339566 master-0 kubenswrapper[26425]: I0217 15:47:35.339494 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-scripts\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.339751 master-0 kubenswrapper[26425]: I0217 15:47:35.339702 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-credential-keys\") pod \"keystone-7f77fccc4f-8svgt\" (UID: 
\"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.339833 master-0 kubenswrapper[26425]: I0217 15:47:35.339797 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-config-data\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.441795 master-0 kubenswrapper[26425]: I0217 15:47:35.441735 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8snm\" (UniqueName: \"kubernetes.io/projected/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-kube-api-access-j8snm\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.442036 master-0 kubenswrapper[26425]: I0217 15:47:35.441824 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-public-tls-certs\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.442208 master-0 kubenswrapper[26425]: I0217 15:47:35.442170 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-combined-ca-bundle\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.442287 master-0 kubenswrapper[26425]: I0217 15:47:35.442266 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-internal-tls-certs\") pod 
\"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.442590 master-0 kubenswrapper[26425]: I0217 15:47:35.442564 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-fernet-keys\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.442655 master-0 kubenswrapper[26425]: I0217 15:47:35.442619 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-scripts\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.443594 master-0 kubenswrapper[26425]: I0217 15:47:35.442768 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-credential-keys\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.443594 master-0 kubenswrapper[26425]: I0217 15:47:35.442818 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-config-data\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.446466 master-0 kubenswrapper[26425]: I0217 15:47:35.446225 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-scripts\") pod \"keystone-7f77fccc4f-8svgt\" (UID: 
\"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.448225 master-0 kubenswrapper[26425]: I0217 15:47:35.448172 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-config-data\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.451528 master-0 kubenswrapper[26425]: I0217 15:47:35.450350 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-public-tls-certs\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.451528 master-0 kubenswrapper[26425]: I0217 15:47:35.450533 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-fernet-keys\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.451528 master-0 kubenswrapper[26425]: I0217 15:47:35.450604 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-credential-keys\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.451528 master-0 kubenswrapper[26425]: I0217 15:47:35.450805 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-internal-tls-certs\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " 
pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.460149 master-0 kubenswrapper[26425]: I0217 15:47:35.460086 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-combined-ca-bundle\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.461057 master-0 kubenswrapper[26425]: I0217 15:47:35.461011 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8snm\" (UniqueName: \"kubernetes.io/projected/14f9fe2f-cc1d-4846-81e6-c8d9d2dac345-kube-api-access-j8snm\") pod \"keystone-7f77fccc4f-8svgt\" (UID: \"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345\") " pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:35.498957 master-0 kubenswrapper[26425]: I0217 15:47:35.498840 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:35.500612 master-0 kubenswrapper[26425]: I0217 15:47:35.500588 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:35.534062 master-0 kubenswrapper[26425]: I0217 15:47:35.533997 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:35.544398 master-0 kubenswrapper[26425]: I0217 15:47:35.544229 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:35.575710 master-0 kubenswrapper[26425]: I0217 15:47:35.575618 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:36.006807 master-0 kubenswrapper[26425]: I0217 15:47:36.006725 26425 generic.go:334] "Generic (PLEG): container finished" podID="87f5e945-543a-4858-b5f8-7e33a1a22459" containerID="58831b6af58318199993d4aab760283e53af16aeeed02ff18022ebff88e51d30" exitCode=0 Feb 17 15:47:36.009149 master-0 kubenswrapper[26425]: I0217 15:47:36.007896 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-8zl8z" event={"ID":"87f5e945-543a-4858-b5f8-7e33a1a22459","Type":"ContainerDied","Data":"58831b6af58318199993d4aab760283e53af16aeeed02ff18022ebff88e51d30"} Feb 17 15:47:36.009149 master-0 kubenswrapper[26425]: I0217 15:47:36.008078 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:36.009149 master-0 kubenswrapper[26425]: I0217 15:47:36.008097 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:47:36.112640 master-0 kubenswrapper[26425]: I0217 15:47:36.112489 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f77fccc4f-8svgt"] Feb 17 15:47:36.469571 master-0 kubenswrapper[26425]: I0217 15:47:36.469501 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:36.588483 master-0 kubenswrapper[26425]: I0217 15:47:36.582562 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-db-sync-config-data\") pod \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " Feb 17 15:47:36.588483 master-0 kubenswrapper[26425]: I0217 15:47:36.582642 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-config-data\") pod \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " Feb 17 15:47:36.588483 master-0 kubenswrapper[26425]: I0217 15:47:36.582688 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-scripts\") pod \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " Feb 17 15:47:36.588483 master-0 kubenswrapper[26425]: I0217 15:47:36.582734 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzlq9\" (UniqueName: \"kubernetes.io/projected/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-kube-api-access-vzlq9\") pod \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " Feb 17 15:47:36.588483 master-0 kubenswrapper[26425]: I0217 15:47:36.582793 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-etc-machine-id\") pod \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " Feb 17 15:47:36.588483 master-0 kubenswrapper[26425]: I0217 15:47:36.582857 26425 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-combined-ca-bundle\") pod \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\" (UID: \"92cdc0bf-17bd-4554-811c-89cf8bc1a52c\") " Feb 17 15:47:36.597479 master-0 kubenswrapper[26425]: I0217 15:47:36.590811 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-kube-api-access-vzlq9" (OuterVolumeSpecName: "kube-api-access-vzlq9") pod "92cdc0bf-17bd-4554-811c-89cf8bc1a52c" (UID: "92cdc0bf-17bd-4554-811c-89cf8bc1a52c"). InnerVolumeSpecName "kube-api-access-vzlq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:36.597479 master-0 kubenswrapper[26425]: I0217 15:47:36.594544 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "92cdc0bf-17bd-4554-811c-89cf8bc1a52c" (UID: "92cdc0bf-17bd-4554-811c-89cf8bc1a52c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:36.619547 master-0 kubenswrapper[26425]: I0217 15:47:36.616732 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-scripts" (OuterVolumeSpecName: "scripts") pod "92cdc0bf-17bd-4554-811c-89cf8bc1a52c" (UID: "92cdc0bf-17bd-4554-811c-89cf8bc1a52c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:36.634478 master-0 kubenswrapper[26425]: I0217 15:47:36.628599 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "92cdc0bf-17bd-4554-811c-89cf8bc1a52c" (UID: "92cdc0bf-17bd-4554-811c-89cf8bc1a52c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:36.662478 master-0 kubenswrapper[26425]: I0217 15:47:36.661627 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92cdc0bf-17bd-4554-811c-89cf8bc1a52c" (UID: "92cdc0bf-17bd-4554-811c-89cf8bc1a52c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:36.688482 master-0 kubenswrapper[26425]: I0217 15:47:36.687916 26425 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:36.688482 master-0 kubenswrapper[26425]: I0217 15:47:36.687975 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:36.688482 master-0 kubenswrapper[26425]: I0217 15:47:36.687985 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzlq9\" (UniqueName: \"kubernetes.io/projected/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-kube-api-access-vzlq9\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:36.688482 master-0 kubenswrapper[26425]: I0217 15:47:36.687995 26425 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:36.688482 master-0 kubenswrapper[26425]: I0217 15:47:36.688012 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:36.692475 master-0 kubenswrapper[26425]: I0217 15:47:36.691123 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-config-data" (OuterVolumeSpecName: "config-data") pod "92cdc0bf-17bd-4554-811c-89cf8bc1a52c" (UID: "92cdc0bf-17bd-4554-811c-89cf8bc1a52c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:36.790288 master-0 kubenswrapper[26425]: I0217 15:47:36.790236 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cdc0bf-17bd-4554-811c-89cf8bc1a52c-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:37.020184 master-0 kubenswrapper[26425]: I0217 15:47:37.020038 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-db-sync-smx72" event={"ID":"92cdc0bf-17bd-4554-811c-89cf8bc1a52c","Type":"ContainerDied","Data":"418863b461328ba9361190919253798b5b06eac0e815837f9299ecf9d9141b2f"} Feb 17 15:47:37.020184 master-0 kubenswrapper[26425]: I0217 15:47:37.020092 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="418863b461328ba9361190919253798b5b06eac0e815837f9299ecf9d9141b2f" Feb 17 15:47:37.020184 master-0 kubenswrapper[26425]: I0217 15:47:37.020060 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-db-sync-smx72" Feb 17 15:47:37.033729 master-0 kubenswrapper[26425]: I0217 15:47:37.033652 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f77fccc4f-8svgt" event={"ID":"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345","Type":"ContainerStarted","Data":"a6077235409d9d3dcf485849caa3f67b0530b14a06104c8aa5f7836469d54250"} Feb 17 15:47:37.033729 master-0 kubenswrapper[26425]: I0217 15:47:37.033713 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f77fccc4f-8svgt" event={"ID":"14f9fe2f-cc1d-4846-81e6-c8d9d2dac345","Type":"ContainerStarted","Data":"feb41798c609c3369e2233a5a175d5dda7a07ccec49a5b6ccced2049a08248c2"} Feb 17 15:47:37.034094 master-0 kubenswrapper[26425]: I0217 15:47:37.034060 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7f77fccc4f-8svgt" Feb 17 15:47:37.042326 master-0 kubenswrapper[26425]: I0217 15:47:37.042265 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-8zl8z" event={"ID":"87f5e945-543a-4858-b5f8-7e33a1a22459","Type":"ContainerStarted","Data":"e11f1900c268d0c56f6662e9d2994680bba2a762c92975b43117920ae0e0c212"} Feb 17 15:47:37.070740 master-0 kubenswrapper[26425]: I0217 15:47:37.066395 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7f77fccc4f-8svgt" podStartSLOduration=2.066364821 podStartE2EDuration="2.066364821s" podCreationTimestamp="2026-02-17 15:47:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:37.053973895 +0000 UTC m=+1918.945697763" watchObservedRunningTime="2026-02-17 15:47:37.066364821 +0000 UTC m=+1918.958088659" Feb 17 15:47:37.081156 master-0 kubenswrapper[26425]: I0217 15:47:37.081049 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-8zl8z" 
podStartSLOduration=12.982058106 podStartE2EDuration="22.081032563s" podCreationTimestamp="2026-02-17 15:47:15 +0000 UTC" firstStartedPulling="2026-02-17 15:47:25.615106262 +0000 UTC m=+1907.506830080" lastFinishedPulling="2026-02-17 15:47:34.714080709 +0000 UTC m=+1916.605804537" observedRunningTime="2026-02-17 15:47:37.074701571 +0000 UTC m=+1918.966425429" watchObservedRunningTime="2026-02-17 15:47:37.081032563 +0000 UTC m=+1918.972756381" Feb 17 15:47:37.746971 master-0 kubenswrapper[26425]: I0217 15:47:37.733134 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-04ef3-scheduler-0"] Feb 17 15:47:37.746971 master-0 kubenswrapper[26425]: E0217 15:47:37.733741 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92cdc0bf-17bd-4554-811c-89cf8bc1a52c" containerName="cinder-04ef3-db-sync" Feb 17 15:47:37.746971 master-0 kubenswrapper[26425]: I0217 15:47:37.733765 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="92cdc0bf-17bd-4554-811c-89cf8bc1a52c" containerName="cinder-04ef3-db-sync" Feb 17 15:47:37.746971 master-0 kubenswrapper[26425]: I0217 15:47:37.734066 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="92cdc0bf-17bd-4554-811c-89cf8bc1a52c" containerName="cinder-04ef3-db-sync" Feb 17 15:47:37.746971 master-0 kubenswrapper[26425]: I0217 15:47:37.735611 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.747501 master-0 kubenswrapper[26425]: I0217 15:47:37.747321 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-04ef3-scripts" Feb 17 15:47:37.752232 master-0 kubenswrapper[26425]: I0217 15:47:37.747556 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-04ef3-scheduler-config-data" Feb 17 15:47:37.752232 master-0 kubenswrapper[26425]: I0217 15:47:37.747592 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-04ef3-config-data" Feb 17 15:47:37.782429 master-0 kubenswrapper[26425]: I0217 15:47:37.775662 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-scheduler-0"] Feb 17 15:47:37.815027 master-0 kubenswrapper[26425]: I0217 15:47:37.814975 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-04ef3-backup-0"] Feb 17 15:47:37.821477 master-0 kubenswrapper[26425]: I0217 15:47:37.819738 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.821477 master-0 kubenswrapper[26425]: I0217 15:47:37.820989 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd2jv\" (UniqueName: \"kubernetes.io/projected/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-kube-api-access-wd2jv\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.821477 master-0 kubenswrapper[26425]: I0217 15:47:37.821144 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-combined-ca-bundle\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.821477 master-0 kubenswrapper[26425]: I0217 15:47:37.821180 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-scripts\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.821477 master-0 kubenswrapper[26425]: I0217 15:47:37.821276 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-etc-machine-id\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.821477 master-0 kubenswrapper[26425]: I0217 15:47:37.821321 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-config-data-custom\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.821477 master-0 kubenswrapper[26425]: I0217 15:47:37.821364 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-config-data\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.833480 master-0 kubenswrapper[26425]: I0217 15:47:37.828629 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-04ef3-backup-config-data" Feb 17 15:47:37.886525 master-0 kubenswrapper[26425]: I0217 15:47:37.883790 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-04ef3-volume-lvm-iscsi-0"] Feb 17 15:47:37.886525 master-0 kubenswrapper[26425]: I0217 15:47:37.885679 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:37.890237 master-0 kubenswrapper[26425]: I0217 15:47:37.888995 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-04ef3-volume-lvm-iscsi-config-data" Feb 17 15:47:37.910601 master-0 kubenswrapper[26425]: I0217 15:47:37.910529 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-backup-0"] Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.923802 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-sys\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.923875 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-config-data-custom\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.923897 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-run\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.923921 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd2jv\" (UniqueName: \"kubernetes.io/projected/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-kube-api-access-wd2jv\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " 
pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.923989 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-lib-cinder\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924018 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-combined-ca-bundle\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924144 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-lib-modules\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924317 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-config-data\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924342 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-combined-ca-bundle\") pod \"cinder-04ef3-scheduler-0\" (UID: 
\"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924424 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-scripts\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924488 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-locks-brick\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924510 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-nvme\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924535 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-etc-machine-id\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924558 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz5sk\" (UniqueName: \"kubernetes.io/projected/889ad0c0-9053-4c32-8dbf-17e35278ca01-kube-api-access-fz5sk\") pod \"cinder-04ef3-backup-0\" 
(UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924580 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-config-data-custom\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924600 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-scripts\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924626 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-config-data\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924645 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-machine-id\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924679 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-iscsi\") pod \"cinder-04ef3-backup-0\" (UID: 
\"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924706 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-dev\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.925837 master-0 kubenswrapper[26425]: I0217 15:47:37.924722 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-locks-cinder\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:37.932098 master-0 kubenswrapper[26425]: I0217 15:47:37.928941 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-config-data-custom\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.932098 master-0 kubenswrapper[26425]: I0217 15:47:37.928996 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-etc-machine-id\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.934522 master-0 kubenswrapper[26425]: I0217 15:47:37.933487 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-combined-ca-bundle\") pod \"cinder-04ef3-scheduler-0\" (UID: 
\"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.934522 master-0 kubenswrapper[26425]: I0217 15:47:37.934070 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-scripts\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.940009 master-0 kubenswrapper[26425]: I0217 15:47:37.938532 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-volume-lvm-iscsi-0"] Feb 17 15:47:37.943068 master-0 kubenswrapper[26425]: I0217 15:47:37.943030 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-config-data\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.949789 master-0 kubenswrapper[26425]: I0217 15:47:37.949604 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd2jv\" (UniqueName: \"kubernetes.io/projected/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-kube-api-access-wd2jv\") pod \"cinder-04ef3-scheduler-0\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:37.952498 master-0 kubenswrapper[26425]: I0217 15:47:37.952413 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dd74dd7c9-jfb4s"] Feb 17 15:47:37.960717 master-0 kubenswrapper[26425]: I0217 15:47:37.960659 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" Feb 17 15:47:38.001836 master-0 kubenswrapper[26425]: I0217 15:47:37.988849 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dd74dd7c9-jfb4s"] Feb 17 15:47:38.026728 master-0 kubenswrapper[26425]: I0217 15:47:38.026658 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-dev\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.026748 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-machine-id\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.026796 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-lib-cinder\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.026842 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-iscsi\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.026877 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dev\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-dev\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.026901 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-locks-cinder\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.026952 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sv84\" (UniqueName: \"kubernetes.io/projected/67a705f4-efff-4bbb-8609-7c418e5d83f6-kube-api-access-9sv84\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.026978 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-sys\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.027005 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-sys\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.027031 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-config-data-custom\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.027053 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-run\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.027079 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-nvme\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.027103 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-config-data\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.027124 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-lib-cinder\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.027156 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-combined-ca-bundle\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.027190 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-iscsi\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.027212 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-lib-modules\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.027258 master-0 kubenswrapper[26425]: I0217 15:47:38.027247 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-combined-ca-bundle\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.028057 master-0 kubenswrapper[26425]: I0217 15:47:38.027279 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-locks-brick\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.028057 master-0 kubenswrapper[26425]: I0217 15:47:38.027302 26425 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-lib-modules\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.028057 master-0 kubenswrapper[26425]: I0217 15:47:38.027329 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-config-data-custom\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.028057 master-0 kubenswrapper[26425]: I0217 15:47:38.027347 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-config-data\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.028057 master-0 kubenswrapper[26425]: I0217 15:47:38.027363 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-scripts\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.028057 master-0 kubenswrapper[26425]: I0217 15:47:38.027380 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-machine-id\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 
15:47:38.028057 master-0 kubenswrapper[26425]: I0217 15:47:38.027399 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-run\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.028057 master-0 kubenswrapper[26425]: I0217 15:47:38.027431 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-locks-brick\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.028057 master-0 kubenswrapper[26425]: I0217 15:47:38.027449 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-nvme\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.028057 master-0 kubenswrapper[26425]: I0217 15:47:38.027486 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-locks-cinder\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.028057 master-0 kubenswrapper[26425]: I0217 15:47:38.027528 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz5sk\" (UniqueName: \"kubernetes.io/projected/889ad0c0-9053-4c32-8dbf-17e35278ca01-kube-api-access-fz5sk\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " 
pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.028057 master-0 kubenswrapper[26425]: I0217 15:47:38.027562 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-scripts\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.029079 master-0 kubenswrapper[26425]: I0217 15:47:38.029025 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-run\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.029157 master-0 kubenswrapper[26425]: I0217 15:47:38.029132 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-sys\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.029209 master-0 kubenswrapper[26425]: I0217 15:47:38.029186 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-iscsi\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.029258 master-0 kubenswrapper[26425]: I0217 15:47:38.029216 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-dev\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.029499 master-0 kubenswrapper[26425]: I0217 15:47:38.029430 26425 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-locks-cinder\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.029638 master-0 kubenswrapper[26425]: I0217 15:47:38.029602 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-locks-brick\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.029638 master-0 kubenswrapper[26425]: I0217 15:47:38.029629 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-nvme\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.029740 master-0 kubenswrapper[26425]: I0217 15:47:38.029661 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-lib-modules\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.029740 master-0 kubenswrapper[26425]: I0217 15:47:38.029687 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-machine-id\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.029828 master-0 kubenswrapper[26425]: I0217 15:47:38.029756 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-lib-cinder\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.031174 master-0 kubenswrapper[26425]: I0217 15:47:38.031143 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-scripts\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.033219 master-0 kubenswrapper[26425]: I0217 15:47:38.033176 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-config-data\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.034187 master-0 kubenswrapper[26425]: I0217 15:47:38.034134 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-config-data-custom\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.040628 master-0 kubenswrapper[26425]: I0217 15:47:38.038081 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-combined-ca-bundle\") pod \"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.079836 master-0 kubenswrapper[26425]: I0217 15:47:38.076035 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz5sk\" (UniqueName: \"kubernetes.io/projected/889ad0c0-9053-4c32-8dbf-17e35278ca01-kube-api-access-fz5sk\") pod 
\"cinder-04ef3-backup-0\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:38.093985 master-0 kubenswrapper[26425]: I0217 15:47:38.090788 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-04ef3-api-0"] Feb 17 15:47:38.093985 master-0 kubenswrapper[26425]: I0217 15:47:38.092521 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:38.103361 master-0 kubenswrapper[26425]: I0217 15:47:38.103286 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-api-0"] Feb 17 15:47:38.105465 master-0 kubenswrapper[26425]: I0217 15:47:38.105377 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:47:38.105465 master-0 kubenswrapper[26425]: I0217 15:47:38.105428 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:47:38.118066 master-0 kubenswrapper[26425]: I0217 15:47:38.117861 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-04ef3-api-config-data" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.128995 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-sys\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129083 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-nvme\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129162 26425 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-sys\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129225 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-config-data\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129325 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-nvme\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129367 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-iscsi\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129393 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-config\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129419 26425 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-iscsi\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129517 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-combined-ca-bundle\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129569 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5v8g\" (UniqueName: \"kubernetes.io/projected/b37674d2-6ddd-4640-9db2-3fded1c3c652-kube-api-access-j5v8g\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129605 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-locks-brick\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129638 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-lib-modules\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: 
I0217 15:47:38.129675 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-config-data-custom\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129705 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-scripts\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129735 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-machine-id\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129759 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-run\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.129824 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-locks-cinder\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 
kubenswrapper[26425]: I0217 15:47:38.130255 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-machine-id\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.130298 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-ovsdbserver-sb\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.130351 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-dev\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.130411 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-locks-brick\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.130421 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-dns-svc\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" Feb 17 
15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.130485 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-locks-cinder\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.130540 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-dev\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.130544 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-lib-cinder\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.130570 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-lib-cinder\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.130346 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-run\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 
kubenswrapper[26425]: I0217 15:47:38.130603 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-lib-modules\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.130657 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-ovsdbserver-nb\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.130691 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-dns-swift-storage-0\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" Feb 17 15:47:38.131672 master-0 kubenswrapper[26425]: I0217 15:47:38.130756 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sv84\" (UniqueName: \"kubernetes.io/projected/67a705f4-efff-4bbb-8609-7c418e5d83f6-kube-api-access-9sv84\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:38.132867 master-0 kubenswrapper[26425]: I0217 15:47:38.132769 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-config-data-custom\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " 
pod="openstack/cinder-04ef3-volume-lvm-iscsi-0"
Feb 17 15:47:38.134635 master-0 kubenswrapper[26425]: I0217 15:47:38.133384 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-config-data\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0"
Feb 17 15:47:38.135765 master-0 kubenswrapper[26425]: I0217 15:47:38.135725 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ef3-scheduler-0"
Feb 17 15:47:38.138501 master-0 kubenswrapper[26425]: I0217 15:47:38.138447 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-scripts\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0"
Feb 17 15:47:38.149143 master-0 kubenswrapper[26425]: I0217 15:47:38.149095 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-combined-ca-bundle\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0"
Feb 17 15:47:38.154764 master-0 kubenswrapper[26425]: I0217 15:47:38.154713 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sv84\" (UniqueName: \"kubernetes.io/projected/67a705f4-efff-4bbb-8609-7c418e5d83f6-kube-api-access-9sv84\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0"
Feb 17 15:47:38.159704 master-0 kubenswrapper[26425]: I0217 15:47:38.159646 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ef3-backup-0"
Feb 17 15:47:38.232899 master-0 kubenswrapper[26425]: I0217 15:47:38.232835 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-combined-ca-bundle\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.233104 master-0 kubenswrapper[26425]: I0217 15:47:38.232934 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-ovsdbserver-nb\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s"
Feb 17 15:47:38.233104 master-0 kubenswrapper[26425]: I0217 15:47:38.232972 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-dns-swift-storage-0\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s"
Feb 17 15:47:38.233104 master-0 kubenswrapper[26425]: I0217 15:47:38.233086 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl22f\" (UniqueName: \"kubernetes.io/projected/460894d4-6912-4bc2-b15c-434e02cea92b-kube-api-access-fl22f\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.233254 master-0 kubenswrapper[26425]: I0217 15:47:38.233126 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/460894d4-6912-4bc2-b15c-434e02cea92b-logs\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.233254 master-0 kubenswrapper[26425]: I0217 15:47:38.233169 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-config-data-custom\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.233254 master-0 kubenswrapper[26425]: I0217 15:47:38.233217 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-scripts\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.233254 master-0 kubenswrapper[26425]: I0217 15:47:38.233241 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-config\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s"
Feb 17 15:47:38.233446 master-0 kubenswrapper[26425]: I0217 15:47:38.233280 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/460894d4-6912-4bc2-b15c-434e02cea92b-etc-machine-id\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.233446 master-0 kubenswrapper[26425]: I0217 15:47:38.233366 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5v8g\" (UniqueName: \"kubernetes.io/projected/b37674d2-6ddd-4640-9db2-3fded1c3c652-kube-api-access-j5v8g\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s"
Feb 17 15:47:38.234780 master-0 kubenswrapper[26425]: I0217 15:47:38.234720 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-config\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s"
Feb 17 15:47:38.235037 master-0 kubenswrapper[26425]: I0217 15:47:38.235012 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-config-data\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.235704 master-0 kubenswrapper[26425]: I0217 15:47:38.235662 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-ovsdbserver-sb\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s"
Feb 17 15:47:38.235704 master-0 kubenswrapper[26425]: I0217 15:47:38.235676 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-ovsdbserver-nb\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s"
Feb 17 15:47:38.235963 master-0 kubenswrapper[26425]: I0217 15:47:38.235704 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-dns-swift-storage-0\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s"
Feb 17 15:47:38.235963 master-0 kubenswrapper[26425]: I0217 15:47:38.235757 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-dns-svc\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s"
Feb 17 15:47:38.238438 master-0 kubenswrapper[26425]: I0217 15:47:38.237363 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-dns-svc\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s"
Feb 17 15:47:38.238438 master-0 kubenswrapper[26425]: I0217 15:47:38.238196 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-ovsdbserver-sb\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s"
Feb 17 15:47:38.252365 master-0 kubenswrapper[26425]: I0217 15:47:38.252231 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:47:38.252700 master-0 kubenswrapper[26425]: I0217 15:47:38.252483 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:47:38.261871 master-0 kubenswrapper[26425]: I0217 15:47:38.260630 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5v8g\" (UniqueName: \"kubernetes.io/projected/b37674d2-6ddd-4640-9db2-3fded1c3c652-kube-api-access-j5v8g\") pod \"dnsmasq-dns-dd74dd7c9-jfb4s\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s"
Feb 17 15:47:38.323219 master-0 kubenswrapper[26425]: I0217 15:47:38.323138 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0"
Feb 17 15:47:38.352665 master-0 kubenswrapper[26425]: I0217 15:47:38.352590 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl22f\" (UniqueName: \"kubernetes.io/projected/460894d4-6912-4bc2-b15c-434e02cea92b-kube-api-access-fl22f\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.352794 master-0 kubenswrapper[26425]: I0217 15:47:38.352702 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/460894d4-6912-4bc2-b15c-434e02cea92b-logs\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.352834 master-0 kubenswrapper[26425]: I0217 15:47:38.352786 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-config-data-custom\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.352908 master-0 kubenswrapper[26425]: I0217 15:47:38.352877 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-scripts\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.352951 master-0 kubenswrapper[26425]: I0217 15:47:38.352912 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/460894d4-6912-4bc2-b15c-434e02cea92b-etc-machine-id\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.353011 master-0 kubenswrapper[26425]: I0217 15:47:38.352990 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-config-data\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.353183 master-0 kubenswrapper[26425]: I0217 15:47:38.353140 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-combined-ca-bundle\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.356626 master-0 kubenswrapper[26425]: I0217 15:47:38.356579 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/460894d4-6912-4bc2-b15c-434e02cea92b-logs\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.358602 master-0 kubenswrapper[26425]: I0217 15:47:38.358557 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-04ef3-api-config-data"
Feb 17 15:47:38.360376 master-0 kubenswrapper[26425]: I0217 15:47:38.360330 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s"
Feb 17 15:47:38.360966 master-0 kubenswrapper[26425]: I0217 15:47:38.360933 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/460894d4-6912-4bc2-b15c-434e02cea92b-etc-machine-id\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.363134 master-0 kubenswrapper[26425]: I0217 15:47:38.363019 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-combined-ca-bundle\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.367350 master-0 kubenswrapper[26425]: I0217 15:47:38.367224 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-scripts\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.370058 master-0 kubenswrapper[26425]: I0217 15:47:38.369949 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-config-data-custom\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.373764 master-0 kubenswrapper[26425]: I0217 15:47:38.373377 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-config-data\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.373764 master-0 kubenswrapper[26425]: I0217 15:47:38.373648 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl22f\" (UniqueName: \"kubernetes.io/projected/460894d4-6912-4bc2-b15c-434e02cea92b-kube-api-access-fl22f\") pod \"cinder-04ef3-api-0\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") " pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:38.628046 master-0 kubenswrapper[26425]: I0217 15:47:38.628006 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:39.050397 master-0 kubenswrapper[26425]: I0217 15:47:39.049528 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-scheduler-0"]
Feb 17 15:47:39.119234 master-0 kubenswrapper[26425]: I0217 15:47:39.119164 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-scheduler-0" event={"ID":"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8","Type":"ContainerStarted","Data":"0079379ddf3926f5bb5c7a21f27f4cd75ee81dfa39b7c4abbf85091cff4f5c3d"}
Feb 17 15:47:39.181020 master-0 kubenswrapper[26425]: I0217 15:47:39.180948 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-backup-0"]
Feb 17 15:47:39.342531 master-0 kubenswrapper[26425]: W0217 15:47:39.341085 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67a705f4_efff_4bbb_8609_7c418e5d83f6.slice/crio-b42b47c4bbb3bb521e57452f08a9e6d8a2d2f8f3caa8664689ed9a5030f89443 WatchSource:0}: Error finding container b42b47c4bbb3bb521e57452f08a9e6d8a2d2f8f3caa8664689ed9a5030f89443: Status 404 returned error can't find the container with id b42b47c4bbb3bb521e57452f08a9e6d8a2d2f8f3caa8664689ed9a5030f89443
Feb 17 15:47:39.349162 master-0 kubenswrapper[26425]: I0217 15:47:39.348773 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-volume-lvm-iscsi-0"]
Feb 17 15:47:39.516534 master-0 kubenswrapper[26425]: I0217 15:47:39.515964 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dd74dd7c9-jfb4s"]
Feb 17 15:47:39.516534 master-0 kubenswrapper[26425]: W0217 15:47:39.516134 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb37674d2_6ddd_4640_9db2_3fded1c3c652.slice/crio-144519c22059cccd2904370836677a6b57bcad67e30787d0b758f4ea672bb812 WatchSource:0}: Error finding container 144519c22059cccd2904370836677a6b57bcad67e30787d0b758f4ea672bb812: Status 404 returned error can't find the container with id 144519c22059cccd2904370836677a6b57bcad67e30787d0b758f4ea672bb812
Feb 17 15:47:39.528300 master-0 kubenswrapper[26425]: I0217 15:47:39.528248 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-api-0"]
Feb 17 15:47:39.529607 master-0 kubenswrapper[26425]: W0217 15:47:39.529558 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod460894d4_6912_4bc2_b15c_434e02cea92b.slice/crio-a047f59c2d2664b35777bc6b2a883c0e92be08da23a31ebc678136be4d915110 WatchSource:0}: Error finding container a047f59c2d2664b35777bc6b2a883c0e92be08da23a31ebc678136be4d915110: Status 404 returned error can't find the container with id a047f59c2d2664b35777bc6b2a883c0e92be08da23a31ebc678136be4d915110
Feb 17 15:47:40.135623 master-0 kubenswrapper[26425]: I0217 15:47:40.135157 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" event={"ID":"67a705f4-efff-4bbb-8609-7c418e5d83f6","Type":"ContainerStarted","Data":"b42b47c4bbb3bb521e57452f08a9e6d8a2d2f8f3caa8664689ed9a5030f89443"}
Feb 17 15:47:40.137553 master-0 kubenswrapper[26425]: I0217 15:47:40.137039 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-api-0" event={"ID":"460894d4-6912-4bc2-b15c-434e02cea92b","Type":"ContainerStarted","Data":"a047f59c2d2664b35777bc6b2a883c0e92be08da23a31ebc678136be4d915110"}
Feb 17 15:47:40.139615 master-0 kubenswrapper[26425]: I0217 15:47:40.139568 26425 generic.go:334] "Generic (PLEG): container finished" podID="b37674d2-6ddd-4640-9db2-3fded1c3c652" containerID="827c376661d5a215f65b58862ded0342bf5c6cd86eaded28c83ba82d09a0633e" exitCode=0
Feb 17 15:47:40.139888 master-0 kubenswrapper[26425]: I0217 15:47:40.139656 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" event={"ID":"b37674d2-6ddd-4640-9db2-3fded1c3c652","Type":"ContainerDied","Data":"827c376661d5a215f65b58862ded0342bf5c6cd86eaded28c83ba82d09a0633e"}
Feb 17 15:47:40.139888 master-0 kubenswrapper[26425]: I0217 15:47:40.139692 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" event={"ID":"b37674d2-6ddd-4640-9db2-3fded1c3c652","Type":"ContainerStarted","Data":"144519c22059cccd2904370836677a6b57bcad67e30787d0b758f4ea672bb812"}
Feb 17 15:47:40.144775 master-0 kubenswrapper[26425]: I0217 15:47:40.144714 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-backup-0" event={"ID":"889ad0c0-9053-4c32-8dbf-17e35278ca01","Type":"ContainerStarted","Data":"484cacdea4958be8eb839dfd2afb704c0a236af63e893b06193abb6982b9961a"}
Feb 17 15:47:41.181626 master-0 kubenswrapper[26425]: I0217 15:47:41.180991 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-api-0" event={"ID":"460894d4-6912-4bc2-b15c-434e02cea92b","Type":"ContainerStarted","Data":"352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa"}
Feb 17 15:47:41.197900 master-0 kubenswrapper[26425]: I0217 15:47:41.192762 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" event={"ID":"b37674d2-6ddd-4640-9db2-3fded1c3c652","Type":"ContainerStarted","Data":"41dd4b910daaf1d71dd2fd4512aecd310366eff6c61cb95c2e81ec7ec0579d28"}
Feb 17 15:47:41.197900 master-0 kubenswrapper[26425]: I0217 15:47:41.194757 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s"
Feb 17 15:47:41.197900 master-0 kubenswrapper[26425]: I0217 15:47:41.197230 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-backup-0" event={"ID":"889ad0c0-9053-4c32-8dbf-17e35278ca01","Type":"ContainerStarted","Data":"a9b147c86354c056909c51c2bdb69b06b5acfb906faf17de1b75b1e05ccd9b80"}
Feb 17 15:47:41.197900 master-0 kubenswrapper[26425]: I0217 15:47:41.197261 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-backup-0" event={"ID":"889ad0c0-9053-4c32-8dbf-17e35278ca01","Type":"ContainerStarted","Data":"ef27b9c1b644c91896468ac3dfea7f1676c96e83f1bbb751abea03a18f378c81"}
Feb 17 15:47:41.217524 master-0 kubenswrapper[26425]: I0217 15:47:41.205887 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" event={"ID":"67a705f4-efff-4bbb-8609-7c418e5d83f6","Type":"ContainerStarted","Data":"623fd3bb395489070cfbb337878cd1e15ab3d971bac2e599888eea8b86e983bb"}
Feb 17 15:47:41.217524 master-0 kubenswrapper[26425]: I0217 15:47:41.206038 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" event={"ID":"67a705f4-efff-4bbb-8609-7c418e5d83f6","Type":"ContainerStarted","Data":"8fe35b3eb634102949fd6eea1577f16c39b59a613ed4399ec622f4b722788d04"}
Feb 17 15:47:41.219366 master-0 kubenswrapper[26425]: I0217 15:47:41.219319 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-scheduler-0" event={"ID":"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8","Type":"ContainerStarted","Data":"2e7f0f441893990402b49016be8216fe42e959ddc837aff359c6db6c9e2339ee"}
Feb 17 15:47:41.249392 master-0 kubenswrapper[26425]: I0217 15:47:41.241269 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" podStartSLOduration=4.241252201 podStartE2EDuration="4.241252201s" podCreationTimestamp="2026-02-17 15:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:41.234943731 +0000 UTC m=+1923.126667569" watchObservedRunningTime="2026-02-17 15:47:41.241252201 +0000 UTC m=+1923.132976019"
Feb 17 15:47:41.281940 master-0 kubenswrapper[26425]: I0217 15:47:41.281822 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" podStartSLOduration=3.283173426 podStartE2EDuration="4.281803874s" podCreationTimestamp="2026-02-17 15:47:37 +0000 UTC" firstStartedPulling="2026-02-17 15:47:39.3438897 +0000 UTC m=+1921.235613518" lastFinishedPulling="2026-02-17 15:47:40.342520138 +0000 UTC m=+1922.234243966" observedRunningTime="2026-02-17 15:47:41.279484739 +0000 UTC m=+1923.171208567" watchObservedRunningTime="2026-02-17 15:47:41.281803874 +0000 UTC m=+1923.173527682"
Feb 17 15:47:41.328407 master-0 kubenswrapper[26425]: I0217 15:47:41.328209 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-04ef3-backup-0" podStartSLOduration=3.165856632 podStartE2EDuration="4.328187186s" podCreationTimestamp="2026-02-17 15:47:37 +0000 UTC" firstStartedPulling="2026-02-17 15:47:39.189415376 +0000 UTC m=+1921.081139194" lastFinishedPulling="2026-02-17 15:47:40.35174593 +0000 UTC m=+1922.243469748" observedRunningTime="2026-02-17 15:47:41.317656844 +0000 UTC m=+1923.209380672" watchObservedRunningTime="2026-02-17 15:47:41.328187186 +0000 UTC m=+1923.219911004"
Feb 17 15:47:41.470589 master-0 kubenswrapper[26425]: I0217 15:47:41.470420 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-04ef3-api-0"]
Feb 17 15:47:42.243478 master-0 kubenswrapper[26425]: I0217 15:47:42.242703 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-scheduler-0" event={"ID":"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8","Type":"ContainerStarted","Data":"c2ff56abb190b003afcc5432438522be8742becacce667d544d9c257a5c0220b"}
Feb 17 15:47:42.252477 master-0 kubenswrapper[26425]: I0217 15:47:42.252241 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-api-0" event={"ID":"460894d4-6912-4bc2-b15c-434e02cea92b","Type":"ContainerStarted","Data":"92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba"}
Feb 17 15:47:42.252867 master-0 kubenswrapper[26425]: I0217 15:47:42.252829 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:42.278800 master-0 kubenswrapper[26425]: I0217 15:47:42.278696 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-04ef3-scheduler-0" podStartSLOduration=4.473778877 podStartE2EDuration="5.27866324s" podCreationTimestamp="2026-02-17 15:47:37 +0000 UTC" firstStartedPulling="2026-02-17 15:47:39.045549535 +0000 UTC m=+1920.937273353" lastFinishedPulling="2026-02-17 15:47:39.850433898 +0000 UTC m=+1921.742157716" observedRunningTime="2026-02-17 15:47:42.277140084 +0000 UTC m=+1924.168863912" watchObservedRunningTime="2026-02-17 15:47:42.27866324 +0000 UTC m=+1924.170387058"
Feb 17 15:47:42.341816 master-0 kubenswrapper[26425]: I0217 15:47:42.341693 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-04ef3-api-0" podStartSLOduration=4.341670021 podStartE2EDuration="4.341670021s" podCreationTimestamp="2026-02-17 15:47:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:42.30033412 +0000 UTC m=+1924.192057948" watchObservedRunningTime="2026-02-17 15:47:42.341670021 +0000 UTC m=+1924.233393839"
Feb 17 15:47:43.138486 master-0 kubenswrapper[26425]: I0217 15:47:43.135878 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-04ef3-scheduler-0"
Feb 17 15:47:43.160483 master-0 kubenswrapper[26425]: I0217 15:47:43.160300 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-04ef3-backup-0"
Feb 17 15:47:43.268479 master-0 kubenswrapper[26425]: I0217 15:47:43.268050 26425 generic.go:334] "Generic (PLEG): container finished" podID="0f2e8e8e-7b87-4127-b977-62f0c1f29717" containerID="f43308a817f8761f5f0118d50e70bd080cb1118c64446507e6a98ff0d7fe6314" exitCode=0
Feb 17 15:47:43.268479 master-0 kubenswrapper[26425]: I0217 15:47:43.268129 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kr2xk" event={"ID":"0f2e8e8e-7b87-4127-b977-62f0c1f29717","Type":"ContainerDied","Data":"f43308a817f8761f5f0118d50e70bd080cb1118c64446507e6a98ff0d7fe6314"}
Feb 17 15:47:43.269095 master-0 kubenswrapper[26425]: I0217 15:47:43.268604 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-04ef3-api-0" podUID="460894d4-6912-4bc2-b15c-434e02cea92b" containerName="cinder-04ef3-api-log" containerID="cri-o://352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa" gracePeriod=30
Feb 17 15:47:43.269095 master-0 kubenswrapper[26425]: I0217 15:47:43.268667 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-04ef3-api-0" podUID="460894d4-6912-4bc2-b15c-434e02cea92b" containerName="cinder-api" containerID="cri-o://92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba" gracePeriod=30
Feb 17 15:47:43.325494 master-0 kubenswrapper[26425]: I0217 15:47:43.324633 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0"
Feb 17 15:47:43.974550 master-0 kubenswrapper[26425]: I0217 15:47:43.974506 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:44.046524 master-0 kubenswrapper[26425]: I0217 15:47:44.045090 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/460894d4-6912-4bc2-b15c-434e02cea92b-logs\") pod \"460894d4-6912-4bc2-b15c-434e02cea92b\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") "
Feb 17 15:47:44.046524 master-0 kubenswrapper[26425]: I0217 15:47:44.045201 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl22f\" (UniqueName: \"kubernetes.io/projected/460894d4-6912-4bc2-b15c-434e02cea92b-kube-api-access-fl22f\") pod \"460894d4-6912-4bc2-b15c-434e02cea92b\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") "
Feb 17 15:47:44.046524 master-0 kubenswrapper[26425]: I0217 15:47:44.045347 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-config-data\") pod \"460894d4-6912-4bc2-b15c-434e02cea92b\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") "
Feb 17 15:47:44.046524 master-0 kubenswrapper[26425]: I0217 15:47:44.045383 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-combined-ca-bundle\") pod \"460894d4-6912-4bc2-b15c-434e02cea92b\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") "
Feb 17 15:47:44.046524 master-0 kubenswrapper[26425]: I0217 15:47:44.045474 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-scripts\") pod \"460894d4-6912-4bc2-b15c-434e02cea92b\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") "
Feb 17 15:47:44.046524 master-0 kubenswrapper[26425]: I0217 15:47:44.045510 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-config-data-custom\") pod \"460894d4-6912-4bc2-b15c-434e02cea92b\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") "
Feb 17 15:47:44.046524 master-0 kubenswrapper[26425]: I0217 15:47:44.045611 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/460894d4-6912-4bc2-b15c-434e02cea92b-etc-machine-id\") pod \"460894d4-6912-4bc2-b15c-434e02cea92b\" (UID: \"460894d4-6912-4bc2-b15c-434e02cea92b\") "
Feb 17 15:47:44.046524 master-0 kubenswrapper[26425]: I0217 15:47:44.046435 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/460894d4-6912-4bc2-b15c-434e02cea92b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "460894d4-6912-4bc2-b15c-434e02cea92b" (UID: "460894d4-6912-4bc2-b15c-434e02cea92b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:47:44.047331 master-0 kubenswrapper[26425]: I0217 15:47:44.047043 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/460894d4-6912-4bc2-b15c-434e02cea92b-logs" (OuterVolumeSpecName: "logs") pod "460894d4-6912-4bc2-b15c-434e02cea92b" (UID: "460894d4-6912-4bc2-b15c-434e02cea92b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:47:44.058395 master-0 kubenswrapper[26425]: I0217 15:47:44.058078 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/460894d4-6912-4bc2-b15c-434e02cea92b-kube-api-access-fl22f" (OuterVolumeSpecName: "kube-api-access-fl22f") pod "460894d4-6912-4bc2-b15c-434e02cea92b" (UID: "460894d4-6912-4bc2-b15c-434e02cea92b"). InnerVolumeSpecName "kube-api-access-fl22f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:47:44.061795 master-0 kubenswrapper[26425]: I0217 15:47:44.061715 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-scripts" (OuterVolumeSpecName: "scripts") pod "460894d4-6912-4bc2-b15c-434e02cea92b" (UID: "460894d4-6912-4bc2-b15c-434e02cea92b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:47:44.063827 master-0 kubenswrapper[26425]: I0217 15:47:44.063756 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "460894d4-6912-4bc2-b15c-434e02cea92b" (UID: "460894d4-6912-4bc2-b15c-434e02cea92b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:47:44.083679 master-0 kubenswrapper[26425]: I0217 15:47:44.083605 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "460894d4-6912-4bc2-b15c-434e02cea92b" (UID: "460894d4-6912-4bc2-b15c-434e02cea92b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:47:44.110535 master-0 kubenswrapper[26425]: I0217 15:47:44.110470 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-config-data" (OuterVolumeSpecName: "config-data") pod "460894d4-6912-4bc2-b15c-434e02cea92b" (UID: "460894d4-6912-4bc2-b15c-434e02cea92b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:47:44.148449 master-0 kubenswrapper[26425]: I0217 15:47:44.148330 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl22f\" (UniqueName: \"kubernetes.io/projected/460894d4-6912-4bc2-b15c-434e02cea92b-kube-api-access-fl22f\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:44.148449 master-0 kubenswrapper[26425]: I0217 15:47:44.148379 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-config-data\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:44.148449 master-0 kubenswrapper[26425]: I0217 15:47:44.148388 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:44.148449 master-0 kubenswrapper[26425]: I0217 15:47:44.148397 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-scripts\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:44.148449 master-0 kubenswrapper[26425]: I0217 15:47:44.148405 26425 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/460894d4-6912-4bc2-b15c-434e02cea92b-config-data-custom\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:44.148449 master-0 kubenswrapper[26425]: I0217 15:47:44.148414 26425 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/460894d4-6912-4bc2-b15c-434e02cea92b-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:44.148449 master-0 kubenswrapper[26425]: I0217 15:47:44.148422 26425 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/460894d4-6912-4bc2-b15c-434e02cea92b-logs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:44.291360 master-0 kubenswrapper[26425]: I0217 15:47:44.290227 26425 generic.go:334] "Generic (PLEG): container finished" podID="460894d4-6912-4bc2-b15c-434e02cea92b" containerID="92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba" exitCode=0
Feb 17 15:47:44.291360 master-0 kubenswrapper[26425]: I0217 15:47:44.290280 26425 generic.go:334] "Generic (PLEG): container finished" podID="460894d4-6912-4bc2-b15c-434e02cea92b" containerID="352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa" exitCode=143
Feb 17 15:47:44.293974 master-0 kubenswrapper[26425]: I0217 15:47:44.291865 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ef3-api-0"
Feb 17 15:47:44.306565 master-0 kubenswrapper[26425]: I0217 15:47:44.304900 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-api-0" event={"ID":"460894d4-6912-4bc2-b15c-434e02cea92b","Type":"ContainerDied","Data":"92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba"}
Feb 17 15:47:44.306565 master-0 kubenswrapper[26425]: I0217 15:47:44.305073 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-api-0" event={"ID":"460894d4-6912-4bc2-b15c-434e02cea92b","Type":"ContainerDied","Data":"352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa"}
Feb 17 15:47:44.306565 master-0 kubenswrapper[26425]: I0217 15:47:44.305100 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-api-0" event={"ID":"460894d4-6912-4bc2-b15c-434e02cea92b","Type":"ContainerDied","Data":"a047f59c2d2664b35777bc6b2a883c0e92be08da23a31ebc678136be4d915110"}
Feb 17 15:47:44.306565 master-0 kubenswrapper[26425]: I0217 15:47:44.305165 26425 scope.go:117] "RemoveContainer" containerID="92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba"
Feb 17 15:47:44.343821 master-0 kubenswrapper[26425]: I0217 15:47:44.343754 26425 scope.go:117] "RemoveContainer" containerID="352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa"
Feb 17 15:47:44.377897 master-0 kubenswrapper[26425]: I0217 15:47:44.377834 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-04ef3-api-0"]
Feb 17 15:47:44.390816 master-0 kubenswrapper[26425]: I0217 15:47:44.390762 26425 scope.go:117] "RemoveContainer" containerID="92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba"
Feb 17 15:47:44.391685 master-0 kubenswrapper[26425]: E0217 15:47:44.391627 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container
\"92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba\": container with ID starting with 92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba not found: ID does not exist" containerID="92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba" Feb 17 15:47:44.391808 master-0 kubenswrapper[26425]: I0217 15:47:44.391691 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba"} err="failed to get container status \"92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba\": rpc error: code = NotFound desc = could not find container \"92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba\": container with ID starting with 92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba not found: ID does not exist" Feb 17 15:47:44.391808 master-0 kubenswrapper[26425]: I0217 15:47:44.391721 26425 scope.go:117] "RemoveContainer" containerID="352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa" Feb 17 15:47:44.392289 master-0 kubenswrapper[26425]: E0217 15:47:44.392235 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa\": container with ID starting with 352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa not found: ID does not exist" containerID="352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa" Feb 17 15:47:44.392368 master-0 kubenswrapper[26425]: I0217 15:47:44.392291 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa"} err="failed to get container status \"352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa\": rpc error: code = NotFound desc = could not find container 
\"352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa\": container with ID starting with 352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa not found: ID does not exist" Feb 17 15:47:44.392368 master-0 kubenswrapper[26425]: I0217 15:47:44.392319 26425 scope.go:117] "RemoveContainer" containerID="92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba" Feb 17 15:47:44.392840 master-0 kubenswrapper[26425]: I0217 15:47:44.392809 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba"} err="failed to get container status \"92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba\": rpc error: code = NotFound desc = could not find container \"92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba\": container with ID starting with 92508215d77625231c9c7cc07ce5e9b3a166b433fc027a00e126860b5e64fcba not found: ID does not exist" Feb 17 15:47:44.392840 master-0 kubenswrapper[26425]: I0217 15:47:44.392833 26425 scope.go:117] "RemoveContainer" containerID="352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa" Feb 17 15:47:44.393082 master-0 kubenswrapper[26425]: I0217 15:47:44.393051 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa"} err="failed to get container status \"352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa\": rpc error: code = NotFound desc = could not find container \"352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa\": container with ID starting with 352742a3a514c4eceb92535d992bc029f12d66718c80f1b5b291f039c4f10caa not found: ID does not exist" Feb 17 15:47:44.427704 master-0 kubenswrapper[26425]: I0217 15:47:44.427569 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-04ef3-api-0"] Feb 17 15:47:44.427704 master-0 
kubenswrapper[26425]: I0217 15:47:44.427627 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-04ef3-api-0"] Feb 17 15:47:44.436906 master-0 kubenswrapper[26425]: E0217 15:47:44.428070 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460894d4-6912-4bc2-b15c-434e02cea92b" containerName="cinder-api" Feb 17 15:47:44.436906 master-0 kubenswrapper[26425]: I0217 15:47:44.428099 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="460894d4-6912-4bc2-b15c-434e02cea92b" containerName="cinder-api" Feb 17 15:47:44.436906 master-0 kubenswrapper[26425]: E0217 15:47:44.428130 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460894d4-6912-4bc2-b15c-434e02cea92b" containerName="cinder-04ef3-api-log" Feb 17 15:47:44.436906 master-0 kubenswrapper[26425]: I0217 15:47:44.428139 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="460894d4-6912-4bc2-b15c-434e02cea92b" containerName="cinder-04ef3-api-log" Feb 17 15:47:44.436906 master-0 kubenswrapper[26425]: I0217 15:47:44.430560 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="460894d4-6912-4bc2-b15c-434e02cea92b" containerName="cinder-api" Feb 17 15:47:44.436906 master-0 kubenswrapper[26425]: I0217 15:47:44.430592 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="460894d4-6912-4bc2-b15c-434e02cea92b" containerName="cinder-04ef3-api-log" Feb 17 15:47:44.436906 master-0 kubenswrapper[26425]: I0217 15:47:44.432660 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.438541 master-0 kubenswrapper[26425]: I0217 15:47:44.438505 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-api-0"] Feb 17 15:47:44.439289 master-0 kubenswrapper[26425]: I0217 15:47:44.439270 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-04ef3-api-config-data" Feb 17 15:47:44.439716 master-0 kubenswrapper[26425]: I0217 15:47:44.439698 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 17 15:47:44.439943 master-0 kubenswrapper[26425]: I0217 15:47:44.439891 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 17 15:47:44.563353 master-0 kubenswrapper[26425]: I0217 15:47:44.563301 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9msmf\" (UniqueName: \"kubernetes.io/projected/bd01535f-6696-4e20-b27d-621ef9c2ed63-kube-api-access-9msmf\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.563701 master-0 kubenswrapper[26425]: I0217 15:47:44.563681 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-public-tls-certs\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.563847 master-0 kubenswrapper[26425]: I0217 15:47:44.563826 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-config-data\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 
15:47:44.563981 master-0 kubenswrapper[26425]: I0217 15:47:44.563963 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd01535f-6696-4e20-b27d-621ef9c2ed63-logs\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.564170 master-0 kubenswrapper[26425]: I0217 15:47:44.564151 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-config-data-custom\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.564370 master-0 kubenswrapper[26425]: I0217 15:47:44.564351 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-scripts\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.564636 master-0 kubenswrapper[26425]: I0217 15:47:44.564613 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd01535f-6696-4e20-b27d-621ef9c2ed63-etc-machine-id\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.564766 master-0 kubenswrapper[26425]: I0217 15:47:44.564747 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-combined-ca-bundle\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.564874 
master-0 kubenswrapper[26425]: I0217 15:47:44.564856 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-internal-tls-certs\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.667696 master-0 kubenswrapper[26425]: I0217 15:47:44.667637 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9msmf\" (UniqueName: \"kubernetes.io/projected/bd01535f-6696-4e20-b27d-621ef9c2ed63-kube-api-access-9msmf\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.667911 master-0 kubenswrapper[26425]: I0217 15:47:44.667712 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-public-tls-certs\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.667911 master-0 kubenswrapper[26425]: I0217 15:47:44.667738 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-config-data\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.667911 master-0 kubenswrapper[26425]: I0217 15:47:44.667773 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd01535f-6696-4e20-b27d-621ef9c2ed63-logs\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.667911 master-0 kubenswrapper[26425]: I0217 15:47:44.667839 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-config-data-custom\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.668049 master-0 kubenswrapper[26425]: I0217 15:47:44.667913 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-scripts\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.668049 master-0 kubenswrapper[26425]: I0217 15:47:44.667961 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd01535f-6696-4e20-b27d-621ef9c2ed63-etc-machine-id\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.668049 master-0 kubenswrapper[26425]: I0217 15:47:44.667985 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-combined-ca-bundle\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.668049 master-0 kubenswrapper[26425]: I0217 15:47:44.668011 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-internal-tls-certs\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.668806 master-0 kubenswrapper[26425]: I0217 15:47:44.668788 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/bd01535f-6696-4e20-b27d-621ef9c2ed63-logs\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.671425 master-0 kubenswrapper[26425]: I0217 15:47:44.671393 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-internal-tls-certs\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.672074 master-0 kubenswrapper[26425]: I0217 15:47:44.672006 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd01535f-6696-4e20-b27d-621ef9c2ed63-etc-machine-id\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.681549 master-0 kubenswrapper[26425]: I0217 15:47:44.674946 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-config-data-custom\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.681549 master-0 kubenswrapper[26425]: I0217 15:47:44.676916 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-public-tls-certs\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.681549 master-0 kubenswrapper[26425]: I0217 15:47:44.680396 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-config-data\") pod \"cinder-04ef3-api-0\" (UID: 
\"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.681549 master-0 kubenswrapper[26425]: I0217 15:47:44.680697 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-combined-ca-bundle\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.681549 master-0 kubenswrapper[26425]: I0217 15:47:44.681226 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd01535f-6696-4e20-b27d-621ef9c2ed63-scripts\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.686230 master-0 kubenswrapper[26425]: I0217 15:47:44.686162 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9msmf\" (UniqueName: \"kubernetes.io/projected/bd01535f-6696-4e20-b27d-621ef9c2ed63-kube-api-access-9msmf\") pod \"cinder-04ef3-api-0\" (UID: \"bd01535f-6696-4e20-b27d-621ef9c2ed63\") " pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.770374 master-0 kubenswrapper[26425]: I0217 15:47:44.770219 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:44.923834 master-0 kubenswrapper[26425]: I0217 15:47:44.919834 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kr2xk" Feb 17 15:47:44.975601 master-0 kubenswrapper[26425]: I0217 15:47:44.975533 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f2e8e8e-7b87-4127-b977-62f0c1f29717-config\") pod \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\" (UID: \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\") " Feb 17 15:47:44.975871 master-0 kubenswrapper[26425]: I0217 15:47:44.975715 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2e8e8e-7b87-4127-b977-62f0c1f29717-combined-ca-bundle\") pod \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\" (UID: \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\") " Feb 17 15:47:44.975871 master-0 kubenswrapper[26425]: I0217 15:47:44.975791 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5qd7\" (UniqueName: \"kubernetes.io/projected/0f2e8e8e-7b87-4127-b977-62f0c1f29717-kube-api-access-n5qd7\") pod \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\" (UID: \"0f2e8e8e-7b87-4127-b977-62f0c1f29717\") " Feb 17 15:47:44.979844 master-0 kubenswrapper[26425]: I0217 15:47:44.979807 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2e8e8e-7b87-4127-b977-62f0c1f29717-kube-api-access-n5qd7" (OuterVolumeSpecName: "kube-api-access-n5qd7") pod "0f2e8e8e-7b87-4127-b977-62f0c1f29717" (UID: "0f2e8e8e-7b87-4127-b977-62f0c1f29717"). InnerVolumeSpecName "kube-api-access-n5qd7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:45.005057 master-0 kubenswrapper[26425]: I0217 15:47:45.004950 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2e8e8e-7b87-4127-b977-62f0c1f29717-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f2e8e8e-7b87-4127-b977-62f0c1f29717" (UID: "0f2e8e8e-7b87-4127-b977-62f0c1f29717"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:45.011700 master-0 kubenswrapper[26425]: I0217 15:47:45.011623 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2e8e8e-7b87-4127-b977-62f0c1f29717-config" (OuterVolumeSpecName: "config") pod "0f2e8e8e-7b87-4127-b977-62f0c1f29717" (UID: "0f2e8e8e-7b87-4127-b977-62f0c1f29717"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:45.085869 master-0 kubenswrapper[26425]: I0217 15:47:45.085798 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f2e8e8e-7b87-4127-b977-62f0c1f29717-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:45.085869 master-0 kubenswrapper[26425]: I0217 15:47:45.085854 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2e8e8e-7b87-4127-b977-62f0c1f29717-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:45.085869 master-0 kubenswrapper[26425]: I0217 15:47:45.085870 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5qd7\" (UniqueName: \"kubernetes.io/projected/0f2e8e8e-7b87-4127-b977-62f0c1f29717-kube-api-access-n5qd7\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:45.266451 master-0 kubenswrapper[26425]: I0217 15:47:45.266409 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-api-0"] Feb 17 15:47:45.270890 
master-0 kubenswrapper[26425]: W0217 15:47:45.270838 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd01535f_6696_4e20_b27d_621ef9c2ed63.slice/crio-256924d0888b7ae4960d61263ead2409344f7cda0ad12db82cd1189b4c339780 WatchSource:0}: Error finding container 256924d0888b7ae4960d61263ead2409344f7cda0ad12db82cd1189b4c339780: Status 404 returned error can't find the container with id 256924d0888b7ae4960d61263ead2409344f7cda0ad12db82cd1189b4c339780 Feb 17 15:47:45.323050 master-0 kubenswrapper[26425]: I0217 15:47:45.322953 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kr2xk" event={"ID":"0f2e8e8e-7b87-4127-b977-62f0c1f29717","Type":"ContainerDied","Data":"222265af07a8b6113048150b1d3cad7185d108b805029d3a8ccc3e5518b36b8a"} Feb 17 15:47:45.323050 master-0 kubenswrapper[26425]: I0217 15:47:45.322997 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="222265af07a8b6113048150b1d3cad7185d108b805029d3a8ccc3e5518b36b8a" Feb 17 15:47:45.323050 master-0 kubenswrapper[26425]: I0217 15:47:45.323005 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kr2xk" Feb 17 15:47:45.324335 master-0 kubenswrapper[26425]: I0217 15:47:45.324294 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-api-0" event={"ID":"bd01535f-6696-4e20-b27d-621ef9c2ed63","Type":"ContainerStarted","Data":"256924d0888b7ae4960d61263ead2409344f7cda0ad12db82cd1189b4c339780"} Feb 17 15:47:45.615019 master-0 kubenswrapper[26425]: I0217 15:47:45.610948 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dd74dd7c9-jfb4s"] Feb 17 15:47:45.615019 master-0 kubenswrapper[26425]: I0217 15:47:45.611169 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" podUID="b37674d2-6ddd-4640-9db2-3fded1c3c652" containerName="dnsmasq-dns" containerID="cri-o://41dd4b910daaf1d71dd2fd4512aecd310366eff6c61cb95c2e81ec7ec0579d28" gracePeriod=10 Feb 17 15:47:45.615019 master-0 kubenswrapper[26425]: I0217 15:47:45.612631 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" Feb 17 15:47:45.655839 master-0 kubenswrapper[26425]: I0217 15:47:45.654041 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c54fb858c-f69kf"] Feb 17 15:47:45.655839 master-0 kubenswrapper[26425]: E0217 15:47:45.654628 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2e8e8e-7b87-4127-b977-62f0c1f29717" containerName="neutron-db-sync" Feb 17 15:47:45.655839 master-0 kubenswrapper[26425]: I0217 15:47:45.654642 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2e8e8e-7b87-4127-b977-62f0c1f29717" containerName="neutron-db-sync" Feb 17 15:47:45.655839 master-0 kubenswrapper[26425]: I0217 15:47:45.654870 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f2e8e8e-7b87-4127-b977-62f0c1f29717" containerName="neutron-db-sync" Feb 17 15:47:45.656579 master-0 kubenswrapper[26425]: 
I0217 15:47:45.656163 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.680915 master-0 kubenswrapper[26425]: I0217 15:47:45.676142 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c54fb858c-f69kf"] Feb 17 15:47:45.714468 master-0 kubenswrapper[26425]: I0217 15:47:45.713512 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-ovsdbserver-sb\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.714468 master-0 kubenswrapper[26425]: I0217 15:47:45.713555 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-dns-swift-storage-0\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.714468 master-0 kubenswrapper[26425]: I0217 15:47:45.713969 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-ovsdbserver-nb\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.714468 master-0 kubenswrapper[26425]: I0217 15:47:45.714033 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgptn\" (UniqueName: \"kubernetes.io/projected/9c0c18df-1767-4810-ad4b-2b954d38e60f-kube-api-access-wgptn\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " 
pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.714468 master-0 kubenswrapper[26425]: I0217 15:47:45.714074 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-dns-svc\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.714468 master-0 kubenswrapper[26425]: I0217 15:47:45.714131 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-config\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.790680 master-0 kubenswrapper[26425]: I0217 15:47:45.790628 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c5cd8d-bjbtl"] Feb 17 15:47:45.793402 master-0 kubenswrapper[26425]: I0217 15:47:45.792934 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:45.796764 master-0 kubenswrapper[26425]: I0217 15:47:45.796645 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 15:47:45.796895 master-0 kubenswrapper[26425]: I0217 15:47:45.796790 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 17 15:47:45.797321 master-0 kubenswrapper[26425]: I0217 15:47:45.797264 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 15:47:45.812681 master-0 kubenswrapper[26425]: I0217 15:47:45.812629 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c5cd8d-bjbtl"] Feb 17 15:47:45.818263 master-0 kubenswrapper[26425]: I0217 15:47:45.817974 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgptn\" (UniqueName: \"kubernetes.io/projected/9c0c18df-1767-4810-ad4b-2b954d38e60f-kube-api-access-wgptn\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.818263 master-0 kubenswrapper[26425]: I0217 15:47:45.818067 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-dns-svc\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.818263 master-0 kubenswrapper[26425]: I0217 15:47:45.818138 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-config\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.818263 master-0 
kubenswrapper[26425]: I0217 15:47:45.818260 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-ovsdbserver-sb\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.818951 master-0 kubenswrapper[26425]: I0217 15:47:45.818288 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-dns-swift-storage-0\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.818951 master-0 kubenswrapper[26425]: I0217 15:47:45.818356 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-ovsdbserver-nb\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.819319 master-0 kubenswrapper[26425]: I0217 15:47:45.819298 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-ovsdbserver-nb\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.820241 master-0 kubenswrapper[26425]: I0217 15:47:45.820209 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-dns-svc\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.821145 master-0 
kubenswrapper[26425]: I0217 15:47:45.821058 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-config\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.821362 master-0 kubenswrapper[26425]: I0217 15:47:45.821330 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-dns-swift-storage-0\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.821494 master-0 kubenswrapper[26425]: I0217 15:47:45.821443 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-ovsdbserver-sb\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.840531 master-0 kubenswrapper[26425]: I0217 15:47:45.840006 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgptn\" (UniqueName: \"kubernetes.io/projected/9c0c18df-1767-4810-ad4b-2b954d38e60f-kube-api-access-wgptn\") pod \"dnsmasq-dns-c54fb858c-f69kf\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") " pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:45.921242 master-0 kubenswrapper[26425]: I0217 15:47:45.921160 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-config\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:45.921508 master-0 kubenswrapper[26425]: I0217 
15:47:45.921248 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t65hq\" (UniqueName: \"kubernetes.io/projected/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-kube-api-access-t65hq\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:45.921508 master-0 kubenswrapper[26425]: I0217 15:47:45.921330 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-httpd-config\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:45.922053 master-0 kubenswrapper[26425]: I0217 15:47:45.921547 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-combined-ca-bundle\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:45.922053 master-0 kubenswrapper[26425]: I0217 15:47:45.921582 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-ovndb-tls-certs\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:46.025476 master-0 kubenswrapper[26425]: I0217 15:47:46.023653 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-combined-ca-bundle\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:46.025476 
master-0 kubenswrapper[26425]: I0217 15:47:46.023753 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-ovndb-tls-certs\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:46.025476 master-0 kubenswrapper[26425]: I0217 15:47:46.023962 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-config\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:46.025476 master-0 kubenswrapper[26425]: I0217 15:47:46.024047 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t65hq\" (UniqueName: \"kubernetes.io/projected/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-kube-api-access-t65hq\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:46.025476 master-0 kubenswrapper[26425]: I0217 15:47:46.024128 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-httpd-config\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:46.034545 master-0 kubenswrapper[26425]: I0217 15:47:46.027757 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-combined-ca-bundle\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:46.034545 master-0 kubenswrapper[26425]: I0217 15:47:46.029228 26425 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-httpd-config\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:46.034545 master-0 kubenswrapper[26425]: I0217 15:47:46.029534 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-ovndb-tls-certs\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:46.034545 master-0 kubenswrapper[26425]: I0217 15:47:46.029907 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-config\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:46.034545 master-0 kubenswrapper[26425]: I0217 15:47:46.033941 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:46.062536 master-0 kubenswrapper[26425]: I0217 15:47:46.052446 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t65hq\" (UniqueName: \"kubernetes.io/projected/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-kube-api-access-t65hq\") pod \"neutron-5c5cd8d-bjbtl\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") " pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:46.129350 master-0 kubenswrapper[26425]: I0217 15:47:46.118560 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:46.295142 master-0 kubenswrapper[26425]: I0217 15:47:46.294981 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" Feb 17 15:47:46.357546 master-0 kubenswrapper[26425]: I0217 15:47:46.350428 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-dns-svc\") pod \"b37674d2-6ddd-4640-9db2-3fded1c3c652\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " Feb 17 15:47:46.357546 master-0 kubenswrapper[26425]: I0217 15:47:46.350599 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5v8g\" (UniqueName: \"kubernetes.io/projected/b37674d2-6ddd-4640-9db2-3fded1c3c652-kube-api-access-j5v8g\") pod \"b37674d2-6ddd-4640-9db2-3fded1c3c652\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " Feb 17 15:47:46.357546 master-0 kubenswrapper[26425]: I0217 15:47:46.350620 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-ovsdbserver-sb\") pod \"b37674d2-6ddd-4640-9db2-3fded1c3c652\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " Feb 17 15:47:46.357546 master-0 kubenswrapper[26425]: I0217 15:47:46.350744 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-config\") pod \"b37674d2-6ddd-4640-9db2-3fded1c3c652\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " Feb 17 15:47:46.358289 master-0 kubenswrapper[26425]: I0217 15:47:46.358048 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37674d2-6ddd-4640-9db2-3fded1c3c652-kube-api-access-j5v8g" (OuterVolumeSpecName: "kube-api-access-j5v8g") pod "b37674d2-6ddd-4640-9db2-3fded1c3c652" (UID: "b37674d2-6ddd-4640-9db2-3fded1c3c652"). InnerVolumeSpecName "kube-api-access-j5v8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:46.445655 master-0 kubenswrapper[26425]: I0217 15:47:46.445598 26425 generic.go:334] "Generic (PLEG): container finished" podID="b37674d2-6ddd-4640-9db2-3fded1c3c652" containerID="41dd4b910daaf1d71dd2fd4512aecd310366eff6c61cb95c2e81ec7ec0579d28" exitCode=0 Feb 17 15:47:46.445907 master-0 kubenswrapper[26425]: I0217 15:47:46.445711 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" Feb 17 15:47:46.452519 master-0 kubenswrapper[26425]: I0217 15:47:46.452411 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-ovsdbserver-nb\") pod \"b37674d2-6ddd-4640-9db2-3fded1c3c652\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " Feb 17 15:47:46.452519 master-0 kubenswrapper[26425]: I0217 15:47:46.452521 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-dns-swift-storage-0\") pod \"b37674d2-6ddd-4640-9db2-3fded1c3c652\" (UID: \"b37674d2-6ddd-4640-9db2-3fded1c3c652\") " Feb 17 15:47:46.453224 master-0 kubenswrapper[26425]: I0217 15:47:46.453190 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5v8g\" (UniqueName: \"kubernetes.io/projected/b37674d2-6ddd-4640-9db2-3fded1c3c652-kube-api-access-j5v8g\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:46.463669 master-0 kubenswrapper[26425]: I0217 15:47:46.463599 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="460894d4-6912-4bc2-b15c-434e02cea92b" path="/var/lib/kubelet/pods/460894d4-6912-4bc2-b15c-434e02cea92b/volumes" Feb 17 15:47:46.537147 master-0 kubenswrapper[26425]: I0217 15:47:46.537090 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b37674d2-6ddd-4640-9db2-3fded1c3c652" (UID: "b37674d2-6ddd-4640-9db2-3fded1c3c652"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:46.541843 master-0 kubenswrapper[26425]: I0217 15:47:46.540825 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-config" (OuterVolumeSpecName: "config") pod "b37674d2-6ddd-4640-9db2-3fded1c3c652" (UID: "b37674d2-6ddd-4640-9db2-3fded1c3c652"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:46.581344 master-0 kubenswrapper[26425]: I0217 15:47:46.581212 26425 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:46.581344 master-0 kubenswrapper[26425]: I0217 15:47:46.581266 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:46.619276 master-0 kubenswrapper[26425]: I0217 15:47:46.619039 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b37674d2-6ddd-4640-9db2-3fded1c3c652" (UID: "b37674d2-6ddd-4640-9db2-3fded1c3c652"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:46.632229 master-0 kubenswrapper[26425]: I0217 15:47:46.632161 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b37674d2-6ddd-4640-9db2-3fded1c3c652" (UID: "b37674d2-6ddd-4640-9db2-3fded1c3c652"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:46.673647 master-0 kubenswrapper[26425]: I0217 15:47:46.671732 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b37674d2-6ddd-4640-9db2-3fded1c3c652" (UID: "b37674d2-6ddd-4640-9db2-3fded1c3c652"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:47:46.683249 master-0 kubenswrapper[26425]: I0217 15:47:46.683183 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:46.683249 master-0 kubenswrapper[26425]: I0217 15:47:46.683225 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:46.683249 master-0 kubenswrapper[26425]: I0217 15:47:46.683236 26425 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b37674d2-6ddd-4640-9db2-3fded1c3c652-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:46.693430 master-0 kubenswrapper[26425]: I0217 15:47:46.693366 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" event={"ID":"b37674d2-6ddd-4640-9db2-3fded1c3c652","Type":"ContainerDied","Data":"41dd4b910daaf1d71dd2fd4512aecd310366eff6c61cb95c2e81ec7ec0579d28"} Feb 17 15:47:46.693430 master-0 kubenswrapper[26425]: I0217 15:47:46.693431 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd74dd7c9-jfb4s" event={"ID":"b37674d2-6ddd-4640-9db2-3fded1c3c652","Type":"ContainerDied","Data":"144519c22059cccd2904370836677a6b57bcad67e30787d0b758f4ea672bb812"} Feb 17 15:47:46.693683 master-0 kubenswrapper[26425]: I0217 15:47:46.693448 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-api-0" event={"ID":"bd01535f-6696-4e20-b27d-621ef9c2ed63","Type":"ContainerStarted","Data":"802f8f44bc44f6cba5b8646e599484386976d5fab68240394d4679f075da93ba"} Feb 17 15:47:46.693683 master-0 kubenswrapper[26425]: I0217 15:47:46.693490 26425 scope.go:117] "RemoveContainer" containerID="41dd4b910daaf1d71dd2fd4512aecd310366eff6c61cb95c2e81ec7ec0579d28" Feb 17 15:47:46.731442 master-0 kubenswrapper[26425]: I0217 15:47:46.731398 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c54fb858c-f69kf"] Feb 17 15:47:46.766653 master-0 kubenswrapper[26425]: I0217 15:47:46.766349 26425 scope.go:117] "RemoveContainer" containerID="827c376661d5a215f65b58862ded0342bf5c6cd86eaded28c83ba82d09a0633e" Feb 17 15:47:46.818361 master-0 kubenswrapper[26425]: I0217 15:47:46.818199 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dd74dd7c9-jfb4s"] Feb 17 15:47:46.832654 master-0 kubenswrapper[26425]: I0217 15:47:46.832586 26425 scope.go:117] "RemoveContainer" containerID="41dd4b910daaf1d71dd2fd4512aecd310366eff6c61cb95c2e81ec7ec0579d28" Feb 17 15:47:46.833319 master-0 kubenswrapper[26425]: E0217 15:47:46.833261 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"41dd4b910daaf1d71dd2fd4512aecd310366eff6c61cb95c2e81ec7ec0579d28\": container with ID starting with 41dd4b910daaf1d71dd2fd4512aecd310366eff6c61cb95c2e81ec7ec0579d28 not found: ID does not exist" containerID="41dd4b910daaf1d71dd2fd4512aecd310366eff6c61cb95c2e81ec7ec0579d28" Feb 17 15:47:46.833397 master-0 kubenswrapper[26425]: I0217 15:47:46.833319 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41dd4b910daaf1d71dd2fd4512aecd310366eff6c61cb95c2e81ec7ec0579d28"} err="failed to get container status \"41dd4b910daaf1d71dd2fd4512aecd310366eff6c61cb95c2e81ec7ec0579d28\": rpc error: code = NotFound desc = could not find container \"41dd4b910daaf1d71dd2fd4512aecd310366eff6c61cb95c2e81ec7ec0579d28\": container with ID starting with 41dd4b910daaf1d71dd2fd4512aecd310366eff6c61cb95c2e81ec7ec0579d28 not found: ID does not exist" Feb 17 15:47:46.833397 master-0 kubenswrapper[26425]: I0217 15:47:46.833349 26425 scope.go:117] "RemoveContainer" containerID="827c376661d5a215f65b58862ded0342bf5c6cd86eaded28c83ba82d09a0633e" Feb 17 15:47:46.833795 master-0 kubenswrapper[26425]: E0217 15:47:46.833758 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"827c376661d5a215f65b58862ded0342bf5c6cd86eaded28c83ba82d09a0633e\": container with ID starting with 827c376661d5a215f65b58862ded0342bf5c6cd86eaded28c83ba82d09a0633e not found: ID does not exist" containerID="827c376661d5a215f65b58862ded0342bf5c6cd86eaded28c83ba82d09a0633e" Feb 17 15:47:46.833870 master-0 kubenswrapper[26425]: I0217 15:47:46.833789 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"827c376661d5a215f65b58862ded0342bf5c6cd86eaded28c83ba82d09a0633e"} err="failed to get container status \"827c376661d5a215f65b58862ded0342bf5c6cd86eaded28c83ba82d09a0633e\": rpc error: code = NotFound desc = could not find container 
\"827c376661d5a215f65b58862ded0342bf5c6cd86eaded28c83ba82d09a0633e\": container with ID starting with 827c376661d5a215f65b58862ded0342bf5c6cd86eaded28c83ba82d09a0633e not found: ID does not exist" Feb 17 15:47:46.837703 master-0 kubenswrapper[26425]: I0217 15:47:46.837643 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dd74dd7c9-jfb4s"] Feb 17 15:47:47.022898 master-0 kubenswrapper[26425]: I0217 15:47:47.022853 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c5cd8d-bjbtl"] Feb 17 15:47:47.472219 master-0 kubenswrapper[26425]: I0217 15:47:47.472173 26425 generic.go:334] "Generic (PLEG): container finished" podID="9c0c18df-1767-4810-ad4b-2b954d38e60f" containerID="17a3cfde13e3ca4b8b118165c8321317a9b2a82de7a1181514529f9adf0bd483" exitCode=0 Feb 17 15:47:47.472659 master-0 kubenswrapper[26425]: I0217 15:47:47.472229 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c54fb858c-f69kf" event={"ID":"9c0c18df-1767-4810-ad4b-2b954d38e60f","Type":"ContainerDied","Data":"17a3cfde13e3ca4b8b118165c8321317a9b2a82de7a1181514529f9adf0bd483"} Feb 17 15:47:47.472659 master-0 kubenswrapper[26425]: I0217 15:47:47.472258 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c54fb858c-f69kf" event={"ID":"9c0c18df-1767-4810-ad4b-2b954d38e60f","Type":"ContainerStarted","Data":"d401a20319eb4547778c06586daa723d7a92f941faa216db6235df042cd6a0e4"} Feb 17 15:47:47.475507 master-0 kubenswrapper[26425]: I0217 15:47:47.475447 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5cd8d-bjbtl" event={"ID":"3d5a2ac6-930f-43d0-873f-3bd2cc9df572","Type":"ContainerStarted","Data":"fdb8892881461652575531c8f135056b00711a9f7fe6e90bd40559e27cc55139"} Feb 17 15:47:47.475507 master-0 kubenswrapper[26425]: I0217 15:47:47.475507 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5cd8d-bjbtl" 
event={"ID":"3d5a2ac6-930f-43d0-873f-3bd2cc9df572","Type":"ContainerStarted","Data":"08828ce8fed9df20e0faccc333455c987dc3296b24dd920b10dd53ed903a736b"} Feb 17 15:47:47.485509 master-0 kubenswrapper[26425]: I0217 15:47:47.485439 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-api-0" event={"ID":"bd01535f-6696-4e20-b27d-621ef9c2ed63","Type":"ContainerStarted","Data":"6447ca14eeaf58a9bd208b7112844f243cfcf6e28323f9974d85c8d5a53aeaed"} Feb 17 15:47:47.485778 master-0 kubenswrapper[26425]: I0217 15:47:47.485743 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:47.542936 master-0 kubenswrapper[26425]: I0217 15:47:47.542859 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-04ef3-api-0" podStartSLOduration=3.542839734 podStartE2EDuration="3.542839734s" podCreationTimestamp="2026-02-17 15:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:47.518018838 +0000 UTC m=+1929.409742656" watchObservedRunningTime="2026-02-17 15:47:47.542839734 +0000 UTC m=+1929.434563562" Feb 17 15:47:48.124200 master-0 kubenswrapper[26425]: I0217 15:47:48.124047 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c6d47966f-zhq5k"] Feb 17 15:47:48.124545 master-0 kubenswrapper[26425]: E0217 15:47:48.124514 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37674d2-6ddd-4640-9db2-3fded1c3c652" containerName="init" Feb 17 15:47:48.124545 master-0 kubenswrapper[26425]: I0217 15:47:48.124529 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37674d2-6ddd-4640-9db2-3fded1c3c652" containerName="init" Feb 17 15:47:48.124680 master-0 kubenswrapper[26425]: E0217 15:47:48.124561 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37674d2-6ddd-4640-9db2-3fded1c3c652" 
containerName="dnsmasq-dns" Feb 17 15:47:48.124680 master-0 kubenswrapper[26425]: I0217 15:47:48.124569 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37674d2-6ddd-4640-9db2-3fded1c3c652" containerName="dnsmasq-dns" Feb 17 15:47:48.124805 master-0 kubenswrapper[26425]: I0217 15:47:48.124777 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37674d2-6ddd-4640-9db2-3fded1c3c652" containerName="dnsmasq-dns" Feb 17 15:47:48.127039 master-0 kubenswrapper[26425]: I0217 15:47:48.126078 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.130527 master-0 kubenswrapper[26425]: I0217 15:47:48.130484 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 17 15:47:48.133436 master-0 kubenswrapper[26425]: I0217 15:47:48.131244 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 17 15:47:48.180491 master-0 kubenswrapper[26425]: I0217 15:47:48.178550 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c6d47966f-zhq5k"] Feb 17 15:47:48.226081 master-0 kubenswrapper[26425]: I0217 15:47:48.225655 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-internal-tls-certs\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.226081 master-0 kubenswrapper[26425]: I0217 15:47:48.225724 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-combined-ca-bundle\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " 
pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.226081 master-0 kubenswrapper[26425]: I0217 15:47:48.225757 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-httpd-config\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.226081 master-0 kubenswrapper[26425]: I0217 15:47:48.225832 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-public-tls-certs\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.226081 master-0 kubenswrapper[26425]: I0217 15:47:48.225916 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdmg9\" (UniqueName: \"kubernetes.io/projected/1e3ee715-3789-41be-9f2c-4de7d1342965-kube-api-access-vdmg9\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.226081 master-0 kubenswrapper[26425]: I0217 15:47:48.225943 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-config\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.226081 master-0 kubenswrapper[26425]: I0217 15:47:48.225977 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-ovndb-tls-certs\") pod 
\"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.328238 master-0 kubenswrapper[26425]: I0217 15:47:48.327777 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-ovndb-tls-certs\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.328238 master-0 kubenswrapper[26425]: I0217 15:47:48.327854 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-internal-tls-certs\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.328238 master-0 kubenswrapper[26425]: I0217 15:47:48.327900 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-combined-ca-bundle\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.328238 master-0 kubenswrapper[26425]: I0217 15:47:48.327929 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-httpd-config\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.328238 master-0 kubenswrapper[26425]: I0217 15:47:48.328000 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-public-tls-certs\") pod 
\"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.328238 master-0 kubenswrapper[26425]: I0217 15:47:48.328088 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdmg9\" (UniqueName: \"kubernetes.io/projected/1e3ee715-3789-41be-9f2c-4de7d1342965-kube-api-access-vdmg9\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.328238 master-0 kubenswrapper[26425]: I0217 15:47:48.328116 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-config\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.333089 master-0 kubenswrapper[26425]: I0217 15:47:48.332791 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-combined-ca-bundle\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.335373 master-0 kubenswrapper[26425]: I0217 15:47:48.335329 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-config\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.336603 master-0 kubenswrapper[26425]: I0217 15:47:48.336579 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-ovndb-tls-certs\") pod \"neutron-7c6d47966f-zhq5k\" (UID: 
\"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.339428 master-0 kubenswrapper[26425]: I0217 15:47:48.339370 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-internal-tls-certs\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.340071 master-0 kubenswrapper[26425]: I0217 15:47:48.340040 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-httpd-config\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.357921 master-0 kubenswrapper[26425]: I0217 15:47:48.357792 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e3ee715-3789-41be-9f2c-4de7d1342965-public-tls-certs\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.361294 master-0 kubenswrapper[26425]: I0217 15:47:48.361254 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdmg9\" (UniqueName: \"kubernetes.io/projected/1e3ee715-3789-41be-9f2c-4de7d1342965-kube-api-access-vdmg9\") pod \"neutron-7c6d47966f-zhq5k\" (UID: \"1e3ee715-3789-41be-9f2c-4de7d1342965\") " pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.433611 master-0 kubenswrapper[26425]: I0217 15:47:48.433549 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37674d2-6ddd-4640-9db2-3fded1c3c652" path="/var/lib/kubelet/pods/b37674d2-6ddd-4640-9db2-3fded1c3c652/volumes" Feb 17 15:47:48.472395 master-0 kubenswrapper[26425]: I0217 15:47:48.472170 26425 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:48.476077 master-0 kubenswrapper[26425]: I0217 15:47:48.476019 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:48.483352 master-0 kubenswrapper[26425]: I0217 15:47:48.483292 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:48.536783 master-0 kubenswrapper[26425]: I0217 15:47:48.536704 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c54fb858c-f69kf" event={"ID":"9c0c18df-1767-4810-ad4b-2b954d38e60f","Type":"ContainerStarted","Data":"08570b0bb72d0354c2b7168c6cfb3cce5c6760ce82ca47228797273c2f9275a6"} Feb 17 15:47:48.537246 master-0 kubenswrapper[26425]: I0217 15:47:48.537198 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:48.549343 master-0 kubenswrapper[26425]: I0217 15:47:48.549238 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5cd8d-bjbtl" event={"ID":"3d5a2ac6-930f-43d0-873f-3bd2cc9df572","Type":"ContainerStarted","Data":"e5d754197dae0faa508e306cc685012aea92994c0e4e9de5c979da8062894785"} Feb 17 15:47:48.549608 master-0 kubenswrapper[26425]: I0217 15:47:48.549400 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:47:48.576171 master-0 kubenswrapper[26425]: I0217 15:47:48.572584 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-04ef3-backup-0"] Feb 17 15:47:48.576171 master-0 kubenswrapper[26425]: I0217 15:47:48.572884 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-04ef3-backup-0" podUID="889ad0c0-9053-4c32-8dbf-17e35278ca01" containerName="cinder-backup" 
containerID="cri-o://ef27b9c1b644c91896468ac3dfea7f1676c96e83f1bbb751abea03a18f378c81" gracePeriod=30 Feb 17 15:47:48.576171 master-0 kubenswrapper[26425]: I0217 15:47:48.573021 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-04ef3-backup-0" podUID="889ad0c0-9053-4c32-8dbf-17e35278ca01" containerName="probe" containerID="cri-o://a9b147c86354c056909c51c2bdb69b06b5acfb906faf17de1b75b1e05ccd9b80" gracePeriod=30 Feb 17 15:47:48.637887 master-0 kubenswrapper[26425]: I0217 15:47:48.632006 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-04ef3-scheduler-0"] Feb 17 15:47:48.637887 master-0 kubenswrapper[26425]: I0217 15:47:48.632597 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-04ef3-scheduler-0" podUID="a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" containerName="cinder-scheduler" containerID="cri-o://2e7f0f441893990402b49016be8216fe42e959ddc837aff359c6db6c9e2339ee" gracePeriod=30 Feb 17 15:47:48.637887 master-0 kubenswrapper[26425]: I0217 15:47:48.632634 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-04ef3-scheduler-0" podUID="a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" containerName="probe" containerID="cri-o://c2ff56abb190b003afcc5432438522be8742becacce667d544d9c257a5c0220b" gracePeriod=30 Feb 17 15:47:48.655762 master-0 kubenswrapper[26425]: I0217 15:47:48.655710 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:48.687934 master-0 kubenswrapper[26425]: I0217 15:47:48.681890 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c54fb858c-f69kf" podStartSLOduration=3.681863259 podStartE2EDuration="3.681863259s" podCreationTimestamp="2026-02-17 15:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 15:47:48.580430217 +0000 UTC m=+1930.472154035" watchObservedRunningTime="2026-02-17 15:47:48.681863259 +0000 UTC m=+1930.573587077" Feb 17 15:47:48.743909 master-0 kubenswrapper[26425]: I0217 15:47:48.743826 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c5cd8d-bjbtl" podStartSLOduration=3.743802625 podStartE2EDuration="3.743802625s" podCreationTimestamp="2026-02-17 15:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:48.617787423 +0000 UTC m=+1930.509511261" watchObservedRunningTime="2026-02-17 15:47:48.743802625 +0000 UTC m=+1930.635526443" Feb 17 15:47:48.791271 master-0 kubenswrapper[26425]: I0217 15:47:48.791201 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-04ef3-volume-lvm-iscsi-0"] Feb 17 15:47:49.174589 master-0 kubenswrapper[26425]: W0217 15:47:49.174531 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e3ee715_3789_41be_9f2c_4de7d1342965.slice/crio-a557421dd09f9467b8934962c70673974124b930fae77d302e3c5b75327fa58d WatchSource:0}: Error finding container a557421dd09f9467b8934962c70673974124b930fae77d302e3c5b75327fa58d: Status 404 returned error can't find the container with id a557421dd09f9467b8934962c70673974124b930fae77d302e3c5b75327fa58d Feb 17 15:47:49.181223 master-0 kubenswrapper[26425]: I0217 15:47:49.181167 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c6d47966f-zhq5k"] Feb 17 15:47:49.557519 master-0 kubenswrapper[26425]: I0217 15:47:49.557434 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6d47966f-zhq5k" event={"ID":"1e3ee715-3789-41be-9f2c-4de7d1342965","Type":"ContainerStarted","Data":"615bc08f7d59b456d5d427f2004731db8ef6ea7cded06019871a0b3f8592feb4"} Feb 17 15:47:49.557519 
master-0 kubenswrapper[26425]: I0217 15:47:49.557525 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6d47966f-zhq5k" event={"ID":"1e3ee715-3789-41be-9f2c-4de7d1342965","Type":"ContainerStarted","Data":"a557421dd09f9467b8934962c70673974124b930fae77d302e3c5b75327fa58d"} Feb 17 15:47:49.563397 master-0 kubenswrapper[26425]: I0217 15:47:49.563336 26425 generic.go:334] "Generic (PLEG): container finished" podID="889ad0c0-9053-4c32-8dbf-17e35278ca01" containerID="a9b147c86354c056909c51c2bdb69b06b5acfb906faf17de1b75b1e05ccd9b80" exitCode=0 Feb 17 15:47:49.563525 master-0 kubenswrapper[26425]: I0217 15:47:49.563480 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-backup-0" event={"ID":"889ad0c0-9053-4c32-8dbf-17e35278ca01","Type":"ContainerDied","Data":"a9b147c86354c056909c51c2bdb69b06b5acfb906faf17de1b75b1e05ccd9b80"} Feb 17 15:47:49.567471 master-0 kubenswrapper[26425]: I0217 15:47:49.566551 26425 generic.go:334] "Generic (PLEG): container finished" podID="a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" containerID="c2ff56abb190b003afcc5432438522be8742becacce667d544d9c257a5c0220b" exitCode=0 Feb 17 15:47:49.567471 master-0 kubenswrapper[26425]: I0217 15:47:49.566587 26425 generic.go:334] "Generic (PLEG): container finished" podID="a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" containerID="2e7f0f441893990402b49016be8216fe42e959ddc837aff359c6db6c9e2339ee" exitCode=0 Feb 17 15:47:49.567471 master-0 kubenswrapper[26425]: I0217 15:47:49.567374 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-scheduler-0" event={"ID":"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8","Type":"ContainerDied","Data":"c2ff56abb190b003afcc5432438522be8742becacce667d544d9c257a5c0220b"} Feb 17 15:47:49.567471 master-0 kubenswrapper[26425]: I0217 15:47:49.567448 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-scheduler-0" 
event={"ID":"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8","Type":"ContainerDied","Data":"2e7f0f441893990402b49016be8216fe42e959ddc837aff359c6db6c9e2339ee"} Feb 17 15:47:49.568167 master-0 kubenswrapper[26425]: I0217 15:47:49.567768 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" podUID="67a705f4-efff-4bbb-8609-7c418e5d83f6" containerName="cinder-volume" containerID="cri-o://8fe35b3eb634102949fd6eea1577f16c39b59a613ed4399ec622f4b722788d04" gracePeriod=30 Feb 17 15:47:49.568272 master-0 kubenswrapper[26425]: I0217 15:47:49.568117 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" podUID="67a705f4-efff-4bbb-8609-7c418e5d83f6" containerName="probe" containerID="cri-o://623fd3bb395489070cfbb337878cd1e15ab3d971bac2e599888eea8b86e983bb" gracePeriod=30 Feb 17 15:47:49.917697 master-0 kubenswrapper[26425]: I0217 15:47:49.916591 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:50.014075 master-0 kubenswrapper[26425]: I0217 15:47:50.013694 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd2jv\" (UniqueName: \"kubernetes.io/projected/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-kube-api-access-wd2jv\") pod \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " Feb 17 15:47:50.014075 master-0 kubenswrapper[26425]: I0217 15:47:50.013833 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-combined-ca-bundle\") pod \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " Feb 17 15:47:50.014075 master-0 kubenswrapper[26425]: I0217 15:47:50.014051 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-scripts\") pod \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " Feb 17 15:47:50.014342 master-0 kubenswrapper[26425]: I0217 15:47:50.014110 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-etc-machine-id\") pod \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " Feb 17 15:47:50.014342 master-0 kubenswrapper[26425]: I0217 15:47:50.014192 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-config-data-custom\") pod \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " Feb 17 15:47:50.014342 master-0 kubenswrapper[26425]: I0217 15:47:50.014236 26425 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-config-data\") pod \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\" (UID: \"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8\") " Feb 17 15:47:50.016633 master-0 kubenswrapper[26425]: I0217 15:47:50.015473 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" (UID: "a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:50.016633 master-0 kubenswrapper[26425]: I0217 15:47:50.016161 26425 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.019712 master-0 kubenswrapper[26425]: I0217 15:47:50.019571 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-scripts" (OuterVolumeSpecName: "scripts") pod "a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" (UID: "a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:50.020364 master-0 kubenswrapper[26425]: I0217 15:47:50.020286 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" (UID: "a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:50.022019 master-0 kubenswrapper[26425]: I0217 15:47:50.021959 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-kube-api-access-wd2jv" (OuterVolumeSpecName: "kube-api-access-wd2jv") pod "a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" (UID: "a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8"). InnerVolumeSpecName "kube-api-access-wd2jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:50.120517 master-0 kubenswrapper[26425]: I0217 15:47:50.118049 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.120517 master-0 kubenswrapper[26425]: I0217 15:47:50.118108 26425 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.120517 master-0 kubenswrapper[26425]: I0217 15:47:50.118119 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd2jv\" (UniqueName: \"kubernetes.io/projected/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-kube-api-access-wd2jv\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.164493 master-0 kubenswrapper[26425]: I0217 15:47:50.162833 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" (UID: "a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:50.221503 master-0 kubenswrapper[26425]: I0217 15:47:50.219940 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.307542 master-0 kubenswrapper[26425]: I0217 15:47:50.307475 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-config-data" (OuterVolumeSpecName: "config-data") pod "a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" (UID: "a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:50.308420 master-0 kubenswrapper[26425]: I0217 15:47:50.308370 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.321873 master-0 kubenswrapper[26425]: I0217 15:47:50.321624 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.446666 master-0 kubenswrapper[26425]: I0217 15:47:50.446618 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-sys\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.446818 master-0 kubenswrapper[26425]: I0217 15:47:50.446687 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-dev\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.446818 master-0 
kubenswrapper[26425]: I0217 15:47:50.446791 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-locks-cinder\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.446910 master-0 kubenswrapper[26425]: I0217 15:47:50.446893 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-config-data\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.446950 master-0 kubenswrapper[26425]: I0217 15:47:50.446917 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-run\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.446984 master-0 kubenswrapper[26425]: I0217 15:47:50.446934 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-locks-brick\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.447014 master-0 kubenswrapper[26425]: I0217 15:47:50.447005 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-scripts\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.447050 master-0 kubenswrapper[26425]: I0217 15:47:50.447025 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-nvme\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.447050 master-0 kubenswrapper[26425]: I0217 15:47:50.447045 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-combined-ca-bundle\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.447113 master-0 kubenswrapper[26425]: I0217 15:47:50.447072 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz5sk\" (UniqueName: \"kubernetes.io/projected/889ad0c0-9053-4c32-8dbf-17e35278ca01-kube-api-access-fz5sk\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.447113 master-0 kubenswrapper[26425]: I0217 15:47:50.447088 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-machine-id\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.447174 master-0 kubenswrapper[26425]: I0217 15:47:50.447114 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-lib-modules\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.447174 master-0 kubenswrapper[26425]: I0217 15:47:50.447168 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-iscsi\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: 
\"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.447239 master-0 kubenswrapper[26425]: I0217 15:47:50.447196 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-config-data-custom\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.447272 master-0 kubenswrapper[26425]: I0217 15:47:50.447238 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-lib-cinder\") pod \"889ad0c0-9053-4c32-8dbf-17e35278ca01\" (UID: \"889ad0c0-9053-4c32-8dbf-17e35278ca01\") " Feb 17 15:47:50.448152 master-0 kubenswrapper[26425]: I0217 15:47:50.448126 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:50.448206 master-0 kubenswrapper[26425]: I0217 15:47:50.448174 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:50.448617 master-0 kubenswrapper[26425]: I0217 15:47:50.448567 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:50.448709 master-0 kubenswrapper[26425]: I0217 15:47:50.448668 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:50.448762 master-0 kubenswrapper[26425]: I0217 15:47:50.448729 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:50.448796 master-0 kubenswrapper[26425]: I0217 15:47:50.448767 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-run" (OuterVolumeSpecName: "run") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:50.448999 master-0 kubenswrapper[26425]: I0217 15:47:50.448754 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:50.448999 master-0 kubenswrapper[26425]: I0217 15:47:50.448796 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-sys" (OuterVolumeSpecName: "sys") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:50.448999 master-0 kubenswrapper[26425]: I0217 15:47:50.448853 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-dev" (OuterVolumeSpecName: "dev") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:50.449115 master-0 kubenswrapper[26425]: I0217 15:47:50.448940 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:50.451172 master-0 kubenswrapper[26425]: I0217 15:47:50.451131 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-scripts" (OuterVolumeSpecName: "scripts") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:50.453064 master-0 kubenswrapper[26425]: I0217 15:47:50.452980 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/889ad0c0-9053-4c32-8dbf-17e35278ca01-kube-api-access-fz5sk" (OuterVolumeSpecName: "kube-api-access-fz5sk") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "kube-api-access-fz5sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:50.464722 master-0 kubenswrapper[26425]: I0217 15:47:50.464656 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:50.502499 master-0 kubenswrapper[26425]: I0217 15:47:50.502420 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:50.550828 master-0 kubenswrapper[26425]: I0217 15:47:50.550701 26425 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.550828 master-0 kubenswrapper[26425]: I0217 15:47:50.550736 26425 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-run\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.550828 master-0 kubenswrapper[26425]: I0217 15:47:50.550746 26425 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.550828 master-0 kubenswrapper[26425]: I0217 15:47:50.550756 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.550828 master-0 kubenswrapper[26425]: I0217 15:47:50.550765 26425 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.550828 master-0 kubenswrapper[26425]: I0217 15:47:50.550774 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.550828 master-0 kubenswrapper[26425]: I0217 15:47:50.550783 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz5sk\" (UniqueName: \"kubernetes.io/projected/889ad0c0-9053-4c32-8dbf-17e35278ca01-kube-api-access-fz5sk\") 
on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.550828 master-0 kubenswrapper[26425]: I0217 15:47:50.550792 26425 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.550828 master-0 kubenswrapper[26425]: I0217 15:47:50.550800 26425 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-lib-modules\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.550828 master-0 kubenswrapper[26425]: I0217 15:47:50.550815 26425 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.550828 master-0 kubenswrapper[26425]: I0217 15:47:50.550822 26425 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.550828 master-0 kubenswrapper[26425]: I0217 15:47:50.550831 26425 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.550828 master-0 kubenswrapper[26425]: I0217 15:47:50.550841 26425 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-sys\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.551917 master-0 kubenswrapper[26425]: I0217 15:47:50.550850 26425 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/889ad0c0-9053-4c32-8dbf-17e35278ca01-dev\") on node \"master-0\" DevicePath \"\"" Feb 17 
15:47:50.602583 master-0 kubenswrapper[26425]: I0217 15:47:50.598899 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-config-data" (OuterVolumeSpecName: "config-data") pod "889ad0c0-9053-4c32-8dbf-17e35278ca01" (UID: "889ad0c0-9053-4c32-8dbf-17e35278ca01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:50.605487 master-0 kubenswrapper[26425]: I0217 15:47:50.605423 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:50.611542 master-0 kubenswrapper[26425]: I0217 15:47:50.611494 26425 generic.go:334] "Generic (PLEG): container finished" podID="889ad0c0-9053-4c32-8dbf-17e35278ca01" containerID="ef27b9c1b644c91896468ac3dfea7f1676c96e83f1bbb751abea03a18f378c81" exitCode=0 Feb 17 15:47:50.611656 master-0 kubenswrapper[26425]: I0217 15:47:50.611581 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.619532 master-0 kubenswrapper[26425]: I0217 15:47:50.619433 26425 generic.go:334] "Generic (PLEG): container finished" podID="67a705f4-efff-4bbb-8609-7c418e5d83f6" containerID="623fd3bb395489070cfbb337878cd1e15ab3d971bac2e599888eea8b86e983bb" exitCode=0 Feb 17 15:47:50.619532 master-0 kubenswrapper[26425]: I0217 15:47:50.619506 26425 generic.go:334] "Generic (PLEG): container finished" podID="67a705f4-efff-4bbb-8609-7c418e5d83f6" containerID="8fe35b3eb634102949fd6eea1577f16c39b59a613ed4399ec622f4b722788d04" exitCode=0 Feb 17 15:47:50.653025 master-0 kubenswrapper[26425]: I0217 15:47:50.652971 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/889ad0c0-9053-4c32-8dbf-17e35278ca01-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:50.673717 master-0 kubenswrapper[26425]: I0217 15:47:50.673657 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:47:50.674231 master-0 kubenswrapper[26425]: I0217 15:47:50.673774 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-scheduler-0" event={"ID":"a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8","Type":"ContainerDied","Data":"0079379ddf3926f5bb5c7a21f27f4cd75ee81dfa39b7c4abbf85091cff4f5c3d"} Feb 17 15:47:50.674346 master-0 kubenswrapper[26425]: I0217 15:47:50.674312 26425 scope.go:117] "RemoveContainer" containerID="c2ff56abb190b003afcc5432438522be8742becacce667d544d9c257a5c0220b" Feb 17 15:47:50.674809 master-0 kubenswrapper[26425]: I0217 15:47:50.674727 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6d47966f-zhq5k" event={"ID":"1e3ee715-3789-41be-9f2c-4de7d1342965","Type":"ContainerStarted","Data":"5f0ff2c3b5eb822cf1805826fcf226cd23e7c4019dc925dc5753adb2f38e93da"} Feb 17 15:47:50.674884 master-0 kubenswrapper[26425]: I0217 15:47:50.674813 26425 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-backup-0" event={"ID":"889ad0c0-9053-4c32-8dbf-17e35278ca01","Type":"ContainerDied","Data":"ef27b9c1b644c91896468ac3dfea7f1676c96e83f1bbb751abea03a18f378c81"} Feb 17 15:47:50.674953 master-0 kubenswrapper[26425]: I0217 15:47:50.674885 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-backup-0" event={"ID":"889ad0c0-9053-4c32-8dbf-17e35278ca01","Type":"ContainerDied","Data":"484cacdea4958be8eb839dfd2afb704c0a236af63e893b06193abb6982b9961a"} Feb 17 15:47:50.674953 master-0 kubenswrapper[26425]: I0217 15:47:50.674912 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" event={"ID":"67a705f4-efff-4bbb-8609-7c418e5d83f6","Type":"ContainerDied","Data":"623fd3bb395489070cfbb337878cd1e15ab3d971bac2e599888eea8b86e983bb"} Feb 17 15:47:50.675036 master-0 kubenswrapper[26425]: I0217 15:47:50.674980 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" event={"ID":"67a705f4-efff-4bbb-8609-7c418e5d83f6","Type":"ContainerDied","Data":"8fe35b3eb634102949fd6eea1577f16c39b59a613ed4399ec622f4b722788d04"} Feb 17 15:47:50.700300 master-0 kubenswrapper[26425]: I0217 15:47:50.700232 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c6d47966f-zhq5k" podStartSLOduration=2.700214402 podStartE2EDuration="2.700214402s" podCreationTimestamp="2026-02-17 15:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:50.648839 +0000 UTC m=+1932.540562828" watchObservedRunningTime="2026-02-17 15:47:50.700214402 +0000 UTC m=+1932.591938220" Feb 17 15:47:50.711701 master-0 kubenswrapper[26425]: I0217 15:47:50.711620 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-04ef3-backup-0"] Feb 17 15:47:50.737547 master-0 
kubenswrapper[26425]: I0217 15:47:50.737357 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-04ef3-backup-0"] Feb 17 15:47:50.756752 master-0 kubenswrapper[26425]: I0217 15:47:50.739812 26425 scope.go:117] "RemoveContainer" containerID="2e7f0f441893990402b49016be8216fe42e959ddc837aff359c6db6c9e2339ee" Feb 17 15:47:50.790805 master-0 kubenswrapper[26425]: I0217 15:47:50.790675 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-04ef3-scheduler-0"] Feb 17 15:47:50.816194 master-0 kubenswrapper[26425]: I0217 15:47:50.815867 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-04ef3-backup-0"] Feb 17 15:47:50.823331 master-0 kubenswrapper[26425]: E0217 15:47:50.816430 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="889ad0c0-9053-4c32-8dbf-17e35278ca01" containerName="cinder-backup" Feb 17 15:47:50.823331 master-0 kubenswrapper[26425]: I0217 15:47:50.816451 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="889ad0c0-9053-4c32-8dbf-17e35278ca01" containerName="cinder-backup" Feb 17 15:47:50.823331 master-0 kubenswrapper[26425]: E0217 15:47:50.816506 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" containerName="cinder-scheduler" Feb 17 15:47:50.823331 master-0 kubenswrapper[26425]: I0217 15:47:50.816513 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" containerName="cinder-scheduler" Feb 17 15:47:50.823331 master-0 kubenswrapper[26425]: E0217 15:47:50.816543 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="889ad0c0-9053-4c32-8dbf-17e35278ca01" containerName="probe" Feb 17 15:47:50.823331 master-0 kubenswrapper[26425]: I0217 15:47:50.816550 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="889ad0c0-9053-4c32-8dbf-17e35278ca01" containerName="probe" Feb 17 15:47:50.823331 master-0 kubenswrapper[26425]: E0217 
15:47:50.816562 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" containerName="probe" Feb 17 15:47:50.823331 master-0 kubenswrapper[26425]: I0217 15:47:50.816569 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" containerName="probe" Feb 17 15:47:50.823331 master-0 kubenswrapper[26425]: I0217 15:47:50.816780 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" containerName="cinder-scheduler" Feb 17 15:47:50.823331 master-0 kubenswrapper[26425]: I0217 15:47:50.816801 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" containerName="probe" Feb 17 15:47:50.823331 master-0 kubenswrapper[26425]: I0217 15:47:50.816814 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="889ad0c0-9053-4c32-8dbf-17e35278ca01" containerName="cinder-backup" Feb 17 15:47:50.823331 master-0 kubenswrapper[26425]: I0217 15:47:50.816832 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="889ad0c0-9053-4c32-8dbf-17e35278ca01" containerName="probe" Feb 17 15:47:50.823331 master-0 kubenswrapper[26425]: I0217 15:47:50.820120 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.836063 master-0 kubenswrapper[26425]: I0217 15:47:50.835988 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-04ef3-backup-config-data" Feb 17 15:47:50.859067 master-0 kubenswrapper[26425]: I0217 15:47:50.857587 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-04ef3-scheduler-0"] Feb 17 15:47:50.875486 master-0 kubenswrapper[26425]: I0217 15:47:50.863673 26425 scope.go:117] "RemoveContainer" containerID="a9b147c86354c056909c51c2bdb69b06b5acfb906faf17de1b75b1e05ccd9b80" Feb 17 15:47:50.875486 master-0 kubenswrapper[26425]: I0217 15:47:50.870891 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-backup-0"] Feb 17 15:47:50.880579 master-0 kubenswrapper[26425]: I0217 15:47:50.880524 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-04ef3-scheduler-0"] Feb 17 15:47:50.883299 master-0 kubenswrapper[26425]: I0217 15:47:50.883267 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:50.888227 master-0 kubenswrapper[26425]: I0217 15:47:50.888201 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-scheduler-0"] Feb 17 15:47:50.894401 master-0 kubenswrapper[26425]: I0217 15:47:50.894361 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-04ef3-scheduler-config-data" Feb 17 15:47:50.905082 master-0 kubenswrapper[26425]: I0217 15:47:50.905046 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:50.926786 master-0 kubenswrapper[26425]: I0217 15:47:50.926729 26425 scope.go:117] "RemoveContainer" containerID="ef27b9c1b644c91896468ac3dfea7f1676c96e83f1bbb751abea03a18f378c81" Feb 17 15:47:50.961790 master-0 kubenswrapper[26425]: I0217 15:47:50.961636 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-etc-iscsi\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.961790 master-0 kubenswrapper[26425]: I0217 15:47:50.961710 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a92c319-f3b7-42b0-a51e-e986684d811b-scripts\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.961790 master-0 kubenswrapper[26425]: I0217 15:47:50.961752 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5293ec-1a53-45ee-aa5d-24508500f0e5-combined-ca-bundle\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:50.962094 master-0 kubenswrapper[26425]: I0217 15:47:50.961952 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pznp\" (UniqueName: \"kubernetes.io/projected/db5293ec-1a53-45ee-aa5d-24508500f0e5-kube-api-access-8pznp\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:50.962094 master-0 kubenswrapper[26425]: I0217 15:47:50.961998 26425 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-var-locks-cinder\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.962094 master-0 kubenswrapper[26425]: I0217 15:47:50.962032 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db5293ec-1a53-45ee-aa5d-24508500f0e5-etc-machine-id\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:50.962233 master-0 kubenswrapper[26425]: I0217 15:47:50.962101 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a92c319-f3b7-42b0-a51e-e986684d811b-config-data\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.962233 master-0 kubenswrapper[26425]: I0217 15:47:50.962128 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db5293ec-1a53-45ee-aa5d-24508500f0e5-scripts\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:50.962434 master-0 kubenswrapper[26425]: I0217 15:47:50.962313 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-sys\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.962512 master-0 kubenswrapper[26425]: I0217 
15:47:50.962443 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-var-lib-cinder\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.962512 master-0 kubenswrapper[26425]: I0217 15:47:50.962485 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-dev\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.962585 master-0 kubenswrapper[26425]: I0217 15:47:50.962525 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-var-locks-brick\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.962680 master-0 kubenswrapper[26425]: I0217 15:47:50.962619 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-etc-machine-id\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.962680 master-0 kubenswrapper[26425]: I0217 15:47:50.962642 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db5293ec-1a53-45ee-aa5d-24508500f0e5-config-data-custom\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:50.962842 
master-0 kubenswrapper[26425]: I0217 15:47:50.962805 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5293ec-1a53-45ee-aa5d-24508500f0e5-config-data\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:50.962907 master-0 kubenswrapper[26425]: I0217 15:47:50.962887 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-etc-nvme\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.963035 master-0 kubenswrapper[26425]: I0217 15:47:50.962974 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a92c319-f3b7-42b0-a51e-e986684d811b-config-data-custom\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.963035 master-0 kubenswrapper[26425]: I0217 15:47:50.963034 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqfct\" (UniqueName: \"kubernetes.io/projected/4a92c319-f3b7-42b0-a51e-e986684d811b-kube-api-access-nqfct\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.963132 master-0 kubenswrapper[26425]: I0217 15:47:50.963066 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-run\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" 
Feb 17 15:47:50.963181 master-0 kubenswrapper[26425]: I0217 15:47:50.963155 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-lib-modules\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:50.963236 master-0 kubenswrapper[26425]: I0217 15:47:50.963221 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92c319-f3b7-42b0-a51e-e986684d811b-combined-ca-bundle\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.034573 master-0 kubenswrapper[26425]: I0217 15:47:51.034531 26425 scope.go:117] "RemoveContainer" containerID="a9b147c86354c056909c51c2bdb69b06b5acfb906faf17de1b75b1e05ccd9b80" Feb 17 15:47:51.035090 master-0 kubenswrapper[26425]: E0217 15:47:51.035042 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9b147c86354c056909c51c2bdb69b06b5acfb906faf17de1b75b1e05ccd9b80\": container with ID starting with a9b147c86354c056909c51c2bdb69b06b5acfb906faf17de1b75b1e05ccd9b80 not found: ID does not exist" containerID="a9b147c86354c056909c51c2bdb69b06b5acfb906faf17de1b75b1e05ccd9b80" Feb 17 15:47:51.035147 master-0 kubenswrapper[26425]: I0217 15:47:51.035103 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9b147c86354c056909c51c2bdb69b06b5acfb906faf17de1b75b1e05ccd9b80"} err="failed to get container status \"a9b147c86354c056909c51c2bdb69b06b5acfb906faf17de1b75b1e05ccd9b80\": rpc error: code = NotFound desc = could not find container \"a9b147c86354c056909c51c2bdb69b06b5acfb906faf17de1b75b1e05ccd9b80\": container with ID starting with 
a9b147c86354c056909c51c2bdb69b06b5acfb906faf17de1b75b1e05ccd9b80 not found: ID does not exist" Feb 17 15:47:51.035147 master-0 kubenswrapper[26425]: I0217 15:47:51.035139 26425 scope.go:117] "RemoveContainer" containerID="ef27b9c1b644c91896468ac3dfea7f1676c96e83f1bbb751abea03a18f378c81" Feb 17 15:47:51.035867 master-0 kubenswrapper[26425]: E0217 15:47:51.035793 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef27b9c1b644c91896468ac3dfea7f1676c96e83f1bbb751abea03a18f378c81\": container with ID starting with ef27b9c1b644c91896468ac3dfea7f1676c96e83f1bbb751abea03a18f378c81 not found: ID does not exist" containerID="ef27b9c1b644c91896468ac3dfea7f1676c96e83f1bbb751abea03a18f378c81" Feb 17 15:47:51.035934 master-0 kubenswrapper[26425]: I0217 15:47:51.035876 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef27b9c1b644c91896468ac3dfea7f1676c96e83f1bbb751abea03a18f378c81"} err="failed to get container status \"ef27b9c1b644c91896468ac3dfea7f1676c96e83f1bbb751abea03a18f378c81\": rpc error: code = NotFound desc = could not find container \"ef27b9c1b644c91896468ac3dfea7f1676c96e83f1bbb751abea03a18f378c81\": container with ID starting with ef27b9c1b644c91896468ac3dfea7f1676c96e83f1bbb751abea03a18f378c81 not found: ID does not exist" Feb 17 15:47:51.064313 master-0 kubenswrapper[26425]: I0217 15:47:51.064259 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:51.064419 master-0 kubenswrapper[26425]: I0217 15:47:51.064141 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-lib-modules\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.064639 master-0 kubenswrapper[26425]: I0217 15:47:51.064604 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-config-data-custom\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.064771 master-0 kubenswrapper[26425]: I0217 15:47:51.064668 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sv84\" (UniqueName: \"kubernetes.io/projected/67a705f4-efff-4bbb-8609-7c418e5d83f6-kube-api-access-9sv84\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.064821 master-0 kubenswrapper[26425]: I0217 15:47:51.064771 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-lib-cinder\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.064866 master-0 kubenswrapper[26425]: I0217 15:47:51.064818 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-locks-cinder\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.064866 master-0 kubenswrapper[26425]: I0217 15:47:51.064851 26425 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-config-data\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.064976 master-0 kubenswrapper[26425]: I0217 15:47:51.064896 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-machine-id\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.065235 master-0 kubenswrapper[26425]: I0217 15:47:51.065201 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-scripts\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.065318 master-0 kubenswrapper[26425]: I0217 15:47:51.065299 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-run\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.065355 master-0 kubenswrapper[26425]: I0217 15:47:51.065331 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-nvme\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.065389 master-0 kubenswrapper[26425]: I0217 15:47:51.065375 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-dev\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: 
\"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.065443 master-0 kubenswrapper[26425]: I0217 15:47:51.065425 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-iscsi\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.065516 master-0 kubenswrapper[26425]: I0217 15:47:51.065486 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-combined-ca-bundle\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.065554 master-0 kubenswrapper[26425]: I0217 15:47:51.065518 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-sys\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.065593 master-0 kubenswrapper[26425]: I0217 15:47:51.065564 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-locks-brick\") pod \"67a705f4-efff-4bbb-8609-7c418e5d83f6\" (UID: \"67a705f4-efff-4bbb-8609-7c418e5d83f6\") " Feb 17 15:47:51.065935 master-0 kubenswrapper[26425]: I0217 15:47:51.065900 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92c319-f3b7-42b0-a51e-e986684d811b-combined-ca-bundle\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.065982 master-0 kubenswrapper[26425]: I0217 15:47:51.065958 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-etc-iscsi\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.066017 master-0 kubenswrapper[26425]: I0217 15:47:51.065984 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a92c319-f3b7-42b0-a51e-e986684d811b-scripts\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.066050 master-0 kubenswrapper[26425]: I0217 15:47:51.066014 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5293ec-1a53-45ee-aa5d-24508500f0e5-combined-ca-bundle\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:51.066050 master-0 kubenswrapper[26425]: I0217 15:47:51.066044 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-var-locks-cinder\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.066117 master-0 kubenswrapper[26425]: I0217 15:47:51.066040 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:51.066174 master-0 kubenswrapper[26425]: I0217 15:47:51.066138 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-dev" (OuterVolumeSpecName: "dev") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:51.066247 master-0 kubenswrapper[26425]: I0217 15:47:51.066186 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:51.066507 master-0 kubenswrapper[26425]: I0217 15:47:51.066473 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:51.066557 master-0 kubenswrapper[26425]: I0217 15:47:51.066532 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-sys" (OuterVolumeSpecName: "sys") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:51.066610 master-0 kubenswrapper[26425]: I0217 15:47:51.066552 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-etc-iscsi\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.067106 master-0 kubenswrapper[26425]: I0217 15:47:51.067060 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:51.067106 master-0 kubenswrapper[26425]: I0217 15:47:51.067101 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:51.067217 master-0 kubenswrapper[26425]: I0217 15:47:51.067164 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-var-locks-cinder\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.067217 master-0 kubenswrapper[26425]: I0217 15:47:51.067198 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:51.067345 master-0 kubenswrapper[26425]: I0217 15:47:51.067273 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-run" (OuterVolumeSpecName: "run") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:47:51.067402 master-0 kubenswrapper[26425]: I0217 15:47:51.067297 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pznp\" (UniqueName: \"kubernetes.io/projected/db5293ec-1a53-45ee-aa5d-24508500f0e5-kube-api-access-8pznp\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:51.067579 master-0 kubenswrapper[26425]: I0217 15:47:51.067509 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db5293ec-1a53-45ee-aa5d-24508500f0e5-etc-machine-id\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:51.067694 master-0 kubenswrapper[26425]: I0217 15:47:51.067655 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a92c319-f3b7-42b0-a51e-e986684d811b-config-data\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.067756 master-0 kubenswrapper[26425]: I0217 15:47:51.067716 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db5293ec-1a53-45ee-aa5d-24508500f0e5-scripts\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:51.067799 master-0 kubenswrapper[26425]: I0217 15:47:51.067781 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-sys\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 
15:47:51.067904 master-0 kubenswrapper[26425]: I0217 15:47:51.067865 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db5293ec-1a53-45ee-aa5d-24508500f0e5-etc-machine-id\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:51.068001 master-0 kubenswrapper[26425]: I0217 15:47:51.067963 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-var-lib-cinder\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.068039 master-0 kubenswrapper[26425]: I0217 15:47:51.068022 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-dev\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.068114 master-0 kubenswrapper[26425]: I0217 15:47:51.068094 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-var-locks-brick\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.068278 master-0 kubenswrapper[26425]: I0217 15:47:51.068246 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-etc-machine-id\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.068337 master-0 kubenswrapper[26425]: I0217 15:47:51.068287 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db5293ec-1a53-45ee-aa5d-24508500f0e5-config-data-custom\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:51.068386 master-0 kubenswrapper[26425]: I0217 15:47:51.068342 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5293ec-1a53-45ee-aa5d-24508500f0e5-config-data\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:51.068429 master-0 kubenswrapper[26425]: I0217 15:47:51.068389 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-etc-nvme\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.068550 master-0 kubenswrapper[26425]: I0217 15:47:51.068521 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a92c319-f3b7-42b0-a51e-e986684d811b-config-data-custom\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.068662 master-0 kubenswrapper[26425]: I0217 15:47:51.068628 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfct\" (UniqueName: \"kubernetes.io/projected/4a92c319-f3b7-42b0-a51e-e986684d811b-kube-api-access-nqfct\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.068739 master-0 kubenswrapper[26425]: I0217 15:47:51.068707 26425 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-run\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.068881 master-0 kubenswrapper[26425]: I0217 15:47:51.068854 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-etc-machine-id\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.068974 master-0 kubenswrapper[26425]: I0217 15:47:51.068858 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-lib-modules\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.068974 master-0 kubenswrapper[26425]: I0217 15:47:51.068913 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-dev\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.069046 master-0 kubenswrapper[26425]: I0217 15:47:51.068973 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-etc-nvme\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.069046 master-0 kubenswrapper[26425]: I0217 15:47:51.069016 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-lib-modules\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.069202 master-0 kubenswrapper[26425]: I0217 15:47:51.069101 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-var-locks-brick\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.069263 master-0 kubenswrapper[26425]: I0217 15:47:51.069201 26425 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-run\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.069624 master-0 kubenswrapper[26425]: I0217 15:47:51.069340 26425 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.069624 master-0 kubenswrapper[26425]: I0217 15:47:51.069377 26425 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-dev\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.069624 master-0 kubenswrapper[26425]: I0217 15:47:51.069391 26425 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.069624 master-0 kubenswrapper[26425]: I0217 15:47:51.069407 26425 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-sys\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.069624 master-0 kubenswrapper[26425]: I0217 
15:47:51.069419 26425 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.069624 master-0 kubenswrapper[26425]: I0217 15:47:51.069434 26425 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-lib-modules\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.069624 master-0 kubenswrapper[26425]: I0217 15:47:51.069450 26425 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.069624 master-0 kubenswrapper[26425]: I0217 15:47:51.069479 26425 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.069624 master-0 kubenswrapper[26425]: I0217 15:47:51.069495 26425 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67a705f4-efff-4bbb-8609-7c418e5d83f6-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.069624 master-0 kubenswrapper[26425]: I0217 15:47:51.069548 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-run\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.069624 master-0 kubenswrapper[26425]: I0217 15:47:51.069594 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-sys\") pod 
\"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.069624 master-0 kubenswrapper[26425]: I0217 15:47:51.069608 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4a92c319-f3b7-42b0-a51e-e986684d811b-var-lib-cinder\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.072149 master-0 kubenswrapper[26425]: I0217 15:47:51.072105 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92c319-f3b7-42b0-a51e-e986684d811b-combined-ca-bundle\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.072319 master-0 kubenswrapper[26425]: I0217 15:47:51.072276 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a92c319-f3b7-42b0-a51e-e986684d811b-scripts\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.072617 master-0 kubenswrapper[26425]: I0217 15:47:51.072590 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a92c319-f3b7-42b0-a51e-e986684d811b-config-data\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.073203 master-0 kubenswrapper[26425]: I0217 15:47:51.073134 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db5293ec-1a53-45ee-aa5d-24508500f0e5-scripts\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 
15:47:51.080182 master-0 kubenswrapper[26425]: I0217 15:47:51.079628 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-scripts" (OuterVolumeSpecName: "scripts") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:51.081645 master-0 kubenswrapper[26425]: I0217 15:47:51.080651 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5293ec-1a53-45ee-aa5d-24508500f0e5-config-data\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:51.083484 master-0 kubenswrapper[26425]: I0217 15:47:51.082952 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5293ec-1a53-45ee-aa5d-24508500f0e5-combined-ca-bundle\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:51.085066 master-0 kubenswrapper[26425]: I0217 15:47:51.085029 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a92c319-f3b7-42b0-a51e-e986684d811b-config-data-custom\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.088554 master-0 kubenswrapper[26425]: I0217 15:47:51.088486 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a705f4-efff-4bbb-8609-7c418e5d83f6-kube-api-access-9sv84" (OuterVolumeSpecName: "kube-api-access-9sv84") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "kube-api-access-9sv84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:51.088637 master-0 kubenswrapper[26425]: I0217 15:47:51.088564 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pznp\" (UniqueName: \"kubernetes.io/projected/db5293ec-1a53-45ee-aa5d-24508500f0e5-kube-api-access-8pznp\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:51.088760 master-0 kubenswrapper[26425]: I0217 15:47:51.088715 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db5293ec-1a53-45ee-aa5d-24508500f0e5-config-data-custom\") pod \"cinder-04ef3-scheduler-0\" (UID: \"db5293ec-1a53-45ee-aa5d-24508500f0e5\") " pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:51.089519 master-0 kubenswrapper[26425]: I0217 15:47:51.089404 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:51.089863 master-0 kubenswrapper[26425]: I0217 15:47:51.089822 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqfct\" (UniqueName: \"kubernetes.io/projected/4a92c319-f3b7-42b0-a51e-e986684d811b-kube-api-access-nqfct\") pod \"cinder-04ef3-backup-0\" (UID: \"4a92c319-f3b7-42b0-a51e-e986684d811b\") " pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.146345 master-0 kubenswrapper[26425]: I0217 15:47:51.146289 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:51.151650 master-0 kubenswrapper[26425]: I0217 15:47:51.149625 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:51.172375 master-0 kubenswrapper[26425]: I0217 15:47:51.172297 26425 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.172375 master-0 kubenswrapper[26425]: I0217 15:47:51.172366 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sv84\" (UniqueName: \"kubernetes.io/projected/67a705f4-efff-4bbb-8609-7c418e5d83f6-kube-api-access-9sv84\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.172639 master-0 kubenswrapper[26425]: I0217 15:47:51.172440 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.172639 master-0 kubenswrapper[26425]: I0217 15:47:51.172479 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.206998 master-0 kubenswrapper[26425]: I0217 15:47:51.206874 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:51.264504 master-0 kubenswrapper[26425]: I0217 15:47:51.261240 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-config-data" (OuterVolumeSpecName: "config-data") pod "67a705f4-efff-4bbb-8609-7c418e5d83f6" (UID: "67a705f4-efff-4bbb-8609-7c418e5d83f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:51.275675 master-0 kubenswrapper[26425]: I0217 15:47:51.275005 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a705f4-efff-4bbb-8609-7c418e5d83f6-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:51.642493 master-0 kubenswrapper[26425]: I0217 15:47:51.637808 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" event={"ID":"67a705f4-efff-4bbb-8609-7c418e5d83f6","Type":"ContainerDied","Data":"b42b47c4bbb3bb521e57452f08a9e6d8a2d2f8f3caa8664689ed9a5030f89443"} Feb 17 15:47:51.642493 master-0 kubenswrapper[26425]: I0217 15:47:51.637883 26425 scope.go:117] "RemoveContainer" containerID="623fd3bb395489070cfbb337878cd1e15ab3d971bac2e599888eea8b86e983bb" Feb 17 15:47:51.642493 master-0 kubenswrapper[26425]: I0217 15:47:51.638120 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.678117 master-0 kubenswrapper[26425]: I0217 15:47:51.678053 26425 scope.go:117] "RemoveContainer" containerID="8fe35b3eb634102949fd6eea1577f16c39b59a613ed4399ec622f4b722788d04" Feb 17 15:47:51.699835 master-0 kubenswrapper[26425]: I0217 15:47:51.699753 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-04ef3-volume-lvm-iscsi-0"] Feb 17 15:47:51.771914 master-0 kubenswrapper[26425]: I0217 15:47:51.771859 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-04ef3-volume-lvm-iscsi-0"] Feb 17 15:47:51.789546 master-0 kubenswrapper[26425]: I0217 15:47:51.789345 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-04ef3-volume-lvm-iscsi-0"] Feb 17 15:47:51.790116 master-0 kubenswrapper[26425]: E0217 15:47:51.790084 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a705f4-efff-4bbb-8609-7c418e5d83f6" containerName="cinder-volume" Feb 17 15:47:51.790116 master-0 kubenswrapper[26425]: I0217 15:47:51.790110 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a705f4-efff-4bbb-8609-7c418e5d83f6" containerName="cinder-volume" Feb 17 15:47:51.790200 master-0 kubenswrapper[26425]: E0217 15:47:51.790159 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a705f4-efff-4bbb-8609-7c418e5d83f6" containerName="probe" Feb 17 15:47:51.790200 master-0 kubenswrapper[26425]: I0217 15:47:51.790169 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a705f4-efff-4bbb-8609-7c418e5d83f6" containerName="probe" Feb 17 15:47:51.790489 master-0 kubenswrapper[26425]: I0217 15:47:51.790420 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a705f4-efff-4bbb-8609-7c418e5d83f6" containerName="cinder-volume" Feb 17 15:47:51.790668 master-0 kubenswrapper[26425]: I0217 15:47:51.790634 26425 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="67a705f4-efff-4bbb-8609-7c418e5d83f6" containerName="probe" Feb 17 15:47:51.792215 master-0 kubenswrapper[26425]: I0217 15:47:51.792174 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.794574 master-0 kubenswrapper[26425]: I0217 15:47:51.794425 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-04ef3-volume-lvm-iscsi-config-data" Feb 17 15:47:51.803008 master-0 kubenswrapper[26425]: I0217 15:47:51.802960 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-volume-lvm-iscsi-0"] Feb 17 15:47:51.822720 master-0 kubenswrapper[26425]: I0217 15:47:51.822673 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-backup-0"] Feb 17 15:47:51.865213 master-0 kubenswrapper[26425]: I0217 15:47:51.865051 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-scheduler-0"] Feb 17 15:47:51.870327 master-0 kubenswrapper[26425]: W0217 15:47:51.870273 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb5293ec_1a53_45ee_aa5d_24508500f0e5.slice/crio-6d26ef6014ba32c3d5825f89f604ad6f3406c3b772ae68b91537fb7caf546019 WatchSource:0}: Error finding container 6d26ef6014ba32c3d5825f89f604ad6f3406c3b772ae68b91537fb7caf546019: Status 404 returned error can't find the container with id 6d26ef6014ba32c3d5825f89f604ad6f3406c3b772ae68b91537fb7caf546019 Feb 17 15:47:51.922059 master-0 kubenswrapper[26425]: I0217 15:47:51.922001 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f035e02-4652-439c-8d8c-8d60789de477-config-data-custom\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 
15:47:51.922266 master-0 kubenswrapper[26425]: I0217 15:47:51.922076 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-etc-iscsi\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.922266 master-0 kubenswrapper[26425]: I0217 15:47:51.922097 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj5xs\" (UniqueName: \"kubernetes.io/projected/2f035e02-4652-439c-8d8c-8d60789de477-kube-api-access-kj5xs\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.922266 master-0 kubenswrapper[26425]: I0217 15:47:51.922119 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-run\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.922266 master-0 kubenswrapper[26425]: I0217 15:47:51.922164 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-var-locks-cinder\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.922266 master-0 kubenswrapper[26425]: I0217 15:47:51.922215 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-etc-nvme\") pod 
\"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.922266 master-0 kubenswrapper[26425]: I0217 15:47:51.922239 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-var-locks-brick\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.922266 master-0 kubenswrapper[26425]: I0217 15:47:51.922265 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-lib-modules\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.922591 master-0 kubenswrapper[26425]: I0217 15:47:51.922335 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-sys\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.922591 master-0 kubenswrapper[26425]: I0217 15:47:51.922538 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f035e02-4652-439c-8d8c-8d60789de477-config-data\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.922591 master-0 kubenswrapper[26425]: I0217 15:47:51.922586 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f035e02-4652-439c-8d8c-8d60789de477-combined-ca-bundle\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.922722 master-0 kubenswrapper[26425]: I0217 15:47:51.922700 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-var-lib-cinder\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.922833 master-0 kubenswrapper[26425]: I0217 15:47:51.922806 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-etc-machine-id\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.922903 master-0 kubenswrapper[26425]: I0217 15:47:51.922885 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-dev\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:51.923233 master-0 kubenswrapper[26425]: I0217 15:47:51.923184 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f035e02-4652-439c-8d8c-8d60789de477-scripts\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.026001 master-0 kubenswrapper[26425]: I0217 
15:47:52.025928 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f035e02-4652-439c-8d8c-8d60789de477-config-data\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.026177 master-0 kubenswrapper[26425]: I0217 15:47:52.026008 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f035e02-4652-439c-8d8c-8d60789de477-combined-ca-bundle\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.026177 master-0 kubenswrapper[26425]: I0217 15:47:52.026073 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-var-lib-cinder\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.026352 master-0 kubenswrapper[26425]: I0217 15:47:52.026301 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-etc-machine-id\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.026434 master-0 kubenswrapper[26425]: I0217 15:47:52.026405 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-var-lib-cinder\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.026434 
master-0 kubenswrapper[26425]: I0217 15:47:52.026415 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-dev\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.026650 master-0 kubenswrapper[26425]: I0217 15:47:52.026596 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-etc-machine-id\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.026650 master-0 kubenswrapper[26425]: I0217 15:47:52.026628 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f035e02-4652-439c-8d8c-8d60789de477-scripts\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.026749 master-0 kubenswrapper[26425]: I0217 15:47:52.026701 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f035e02-4652-439c-8d8c-8d60789de477-config-data-custom\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.026802 master-0 kubenswrapper[26425]: I0217 15:47:52.026776 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-etc-iscsi\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.026849 
master-0 kubenswrapper[26425]: I0217 15:47:52.026801 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj5xs\" (UniqueName: \"kubernetes.io/projected/2f035e02-4652-439c-8d8c-8d60789de477-kube-api-access-kj5xs\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.026849 master-0 kubenswrapper[26425]: I0217 15:47:52.026828 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-run\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.026935 master-0 kubenswrapper[26425]: I0217 15:47:52.026896 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-dev\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.026982 master-0 kubenswrapper[26425]: I0217 15:47:52.026932 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-var-locks-cinder\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.027024 master-0 kubenswrapper[26425]: I0217 15:47:52.027001 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-etc-nvme\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.027067 
master-0 kubenswrapper[26425]: I0217 15:47:52.027042 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-var-locks-brick\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.027109 master-0 kubenswrapper[26425]: I0217 15:47:52.027082 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-lib-modules\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.027109 master-0 kubenswrapper[26425]: I0217 15:47:52.027098 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-sys\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.027289 master-0 kubenswrapper[26425]: I0217 15:47:52.026933 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-etc-iscsi\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.027289 master-0 kubenswrapper[26425]: I0217 15:47:52.027265 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-etc-nvme\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.027856 master-0 
kubenswrapper[26425]: I0217 15:47:52.027799 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-var-locks-brick\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.027856 master-0 kubenswrapper[26425]: I0217 15:47:52.027853 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-sys\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.028417 master-0 kubenswrapper[26425]: I0217 15:47:52.028366 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-var-locks-cinder\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.028417 master-0 kubenswrapper[26425]: I0217 15:47:52.028380 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-lib-modules\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.028417 master-0 kubenswrapper[26425]: I0217 15:47:52.028393 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2f035e02-4652-439c-8d8c-8d60789de477-run\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.030883 master-0 kubenswrapper[26425]: I0217 
15:47:52.030838 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f035e02-4652-439c-8d8c-8d60789de477-config-data-custom\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.035943 master-0 kubenswrapper[26425]: I0217 15:47:52.035876 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f035e02-4652-439c-8d8c-8d60789de477-combined-ca-bundle\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.041978 master-0 kubenswrapper[26425]: I0217 15:47:52.041917 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f035e02-4652-439c-8d8c-8d60789de477-scripts\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.045323 master-0 kubenswrapper[26425]: I0217 15:47:52.045107 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f035e02-4652-439c-8d8c-8d60789de477-config-data\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.053452 master-0 kubenswrapper[26425]: I0217 15:47:52.053404 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj5xs\" (UniqueName: \"kubernetes.io/projected/2f035e02-4652-439c-8d8c-8d60789de477-kube-api-access-kj5xs\") pod \"cinder-04ef3-volume-lvm-iscsi-0\" (UID: \"2f035e02-4652-439c-8d8c-8d60789de477\") " pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.292181 master-0 kubenswrapper[26425]: 
I0217 15:47:52.292112 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:52.409788 master-0 kubenswrapper[26425]: I0217 15:47:52.409737 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67a705f4-efff-4bbb-8609-7c418e5d83f6" path="/var/lib/kubelet/pods/67a705f4-efff-4bbb-8609-7c418e5d83f6/volumes" Feb 17 15:47:52.410369 master-0 kubenswrapper[26425]: I0217 15:47:52.410343 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="889ad0c0-9053-4c32-8dbf-17e35278ca01" path="/var/lib/kubelet/pods/889ad0c0-9053-4c32-8dbf-17e35278ca01/volumes" Feb 17 15:47:52.411063 master-0 kubenswrapper[26425]: I0217 15:47:52.411035 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8" path="/var/lib/kubelet/pods/a210aa73-48bf-4bb7-a5b7-53c0cd59b1f8/volumes" Feb 17 15:47:52.676171 master-0 kubenswrapper[26425]: I0217 15:47:52.676107 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-scheduler-0" event={"ID":"db5293ec-1a53-45ee-aa5d-24508500f0e5","Type":"ContainerStarted","Data":"d6d3d3322d0ada0e6c969144a984c883dbcd855565e29f9ea58cd30fe804154a"} Feb 17 15:47:52.676171 master-0 kubenswrapper[26425]: I0217 15:47:52.676166 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-scheduler-0" event={"ID":"db5293ec-1a53-45ee-aa5d-24508500f0e5","Type":"ContainerStarted","Data":"6d26ef6014ba32c3d5825f89f604ad6f3406c3b772ae68b91537fb7caf546019"} Feb 17 15:47:52.679375 master-0 kubenswrapper[26425]: I0217 15:47:52.679312 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-backup-0" event={"ID":"4a92c319-f3b7-42b0-a51e-e986684d811b","Type":"ContainerStarted","Data":"e9e5c841b1f8eb384d0d781d3569cf6c1105ec59e2c07a9777c4250405429229"} Feb 17 15:47:52.679433 master-0 kubenswrapper[26425]: I0217 15:47:52.679373 26425 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-backup-0" event={"ID":"4a92c319-f3b7-42b0-a51e-e986684d811b","Type":"ContainerStarted","Data":"e9692fb30d8163f0bd7b2d80f62ec065d3878a3451c0a23a8703fd043e62d811"} Feb 17 15:47:52.679433 master-0 kubenswrapper[26425]: I0217 15:47:52.679389 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-backup-0" event={"ID":"4a92c319-f3b7-42b0-a51e-e986684d811b","Type":"ContainerStarted","Data":"ed7bccaa85ff77a2e79ff9e7cca382340c8fb0a413ca9f5ef1fbe147f22e91b6"} Feb 17 15:47:52.705541 master-0 kubenswrapper[26425]: I0217 15:47:52.705477 26425 generic.go:334] "Generic (PLEG): container finished" podID="87f5e945-543a-4858-b5f8-7e33a1a22459" containerID="e11f1900c268d0c56f6662e9d2994680bba2a762c92975b43117920ae0e0c212" exitCode=0 Feb 17 15:47:52.706299 master-0 kubenswrapper[26425]: I0217 15:47:52.706250 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-8zl8z" event={"ID":"87f5e945-543a-4858-b5f8-7e33a1a22459","Type":"ContainerDied","Data":"e11f1900c268d0c56f6662e9d2994680bba2a762c92975b43117920ae0e0c212"} Feb 17 15:47:52.775555 master-0 kubenswrapper[26425]: I0217 15:47:52.766334 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-04ef3-backup-0" podStartSLOduration=2.766310871 podStartE2EDuration="2.766310871s" podCreationTimestamp="2026-02-17 15:47:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:52.733232137 +0000 UTC m=+1934.624955965" watchObservedRunningTime="2026-02-17 15:47:52.766310871 +0000 UTC m=+1934.658034689" Feb 17 15:47:52.854597 master-0 kubenswrapper[26425]: I0217 15:47:52.853719 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ef3-volume-lvm-iscsi-0"] Feb 17 15:47:53.727793 master-0 kubenswrapper[26425]: I0217 15:47:53.727711 26425 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" event={"ID":"2f035e02-4652-439c-8d8c-8d60789de477","Type":"ContainerStarted","Data":"09bd705695221e4b1b6b3c09bd9f277be3adc837522b26299eab5ca5481f8c82"} Feb 17 15:47:53.727793 master-0 kubenswrapper[26425]: I0217 15:47:53.727788 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" event={"ID":"2f035e02-4652-439c-8d8c-8d60789de477","Type":"ContainerStarted","Data":"7bcd861def6325be6e83dd301e945d1fa8b3281da454979765209b812f91b326"} Feb 17 15:47:53.729045 master-0 kubenswrapper[26425]: I0217 15:47:53.727805 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" event={"ID":"2f035e02-4652-439c-8d8c-8d60789de477","Type":"ContainerStarted","Data":"a62fbd3be97f662bde7b95475f928288c48f15e873da6440b714644a9ea72a22"} Feb 17 15:47:53.732807 master-0 kubenswrapper[26425]: I0217 15:47:53.732749 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ef3-scheduler-0" event={"ID":"db5293ec-1a53-45ee-aa5d-24508500f0e5","Type":"ContainerStarted","Data":"3bc177cb5783964fd3fad9e82731e7175b9cfa8126242d436d31e00b9aa1e32f"} Feb 17 15:47:53.831644 master-0 kubenswrapper[26425]: I0217 15:47:53.830410 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" podStartSLOduration=2.830382099 podStartE2EDuration="2.830382099s" podCreationTimestamp="2026-02-17 15:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:53.763409673 +0000 UTC m=+1935.655133501" watchObservedRunningTime="2026-02-17 15:47:53.830382099 +0000 UTC m=+1935.722105927" Feb 17 15:47:53.991936 master-0 kubenswrapper[26425]: I0217 15:47:53.991306 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-04ef3-scheduler-0" podStartSLOduration=3.991286487 podStartE2EDuration="3.991286487s" podCreationTimestamp="2026-02-17 15:47:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:47:53.97389147 +0000 UTC m=+1935.865615288" watchObservedRunningTime="2026-02-17 15:47:53.991286487 +0000 UTC m=+1935.883010305" Feb 17 15:47:54.309516 master-0 kubenswrapper[26425]: I0217 15:47:54.309401 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:54.436067 master-0 kubenswrapper[26425]: I0217 15:47:54.435961 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtkpf\" (UniqueName: \"kubernetes.io/projected/87f5e945-543a-4858-b5f8-7e33a1a22459-kube-api-access-vtkpf\") pod \"87f5e945-543a-4858-b5f8-7e33a1a22459\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " Feb 17 15:47:54.436322 master-0 kubenswrapper[26425]: I0217 15:47:54.436102 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-scripts\") pod \"87f5e945-543a-4858-b5f8-7e33a1a22459\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " Feb 17 15:47:54.436322 master-0 kubenswrapper[26425]: I0217 15:47:54.436221 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-combined-ca-bundle\") pod \"87f5e945-543a-4858-b5f8-7e33a1a22459\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " Feb 17 15:47:54.436322 master-0 kubenswrapper[26425]: I0217 15:47:54.436279 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/87f5e945-543a-4858-b5f8-7e33a1a22459-etc-podinfo\") pod \"87f5e945-543a-4858-b5f8-7e33a1a22459\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " Feb 17 15:47:54.436322 master-0 kubenswrapper[26425]: I0217 15:47:54.436308 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/87f5e945-543a-4858-b5f8-7e33a1a22459-config-data-merged\") pod \"87f5e945-543a-4858-b5f8-7e33a1a22459\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " Feb 17 15:47:54.436528 master-0 kubenswrapper[26425]: I0217 15:47:54.436384 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-config-data\") pod \"87f5e945-543a-4858-b5f8-7e33a1a22459\" (UID: \"87f5e945-543a-4858-b5f8-7e33a1a22459\") " Feb 17 15:47:54.436851 master-0 kubenswrapper[26425]: I0217 15:47:54.436805 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f5e945-543a-4858-b5f8-7e33a1a22459-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "87f5e945-543a-4858-b5f8-7e33a1a22459" (UID: "87f5e945-543a-4858-b5f8-7e33a1a22459"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:47:54.438407 master-0 kubenswrapper[26425]: I0217 15:47:54.438324 26425 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/87f5e945-543a-4858-b5f8-7e33a1a22459-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:54.440546 master-0 kubenswrapper[26425]: I0217 15:47:54.440370 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/87f5e945-543a-4858-b5f8-7e33a1a22459-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "87f5e945-543a-4858-b5f8-7e33a1a22459" (UID: "87f5e945-543a-4858-b5f8-7e33a1a22459"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 15:47:54.441398 master-0 kubenswrapper[26425]: I0217 15:47:54.441366 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f5e945-543a-4858-b5f8-7e33a1a22459-kube-api-access-vtkpf" (OuterVolumeSpecName: "kube-api-access-vtkpf") pod "87f5e945-543a-4858-b5f8-7e33a1a22459" (UID: "87f5e945-543a-4858-b5f8-7e33a1a22459"). InnerVolumeSpecName "kube-api-access-vtkpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:47:54.443966 master-0 kubenswrapper[26425]: I0217 15:47:54.443943 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-scripts" (OuterVolumeSpecName: "scripts") pod "87f5e945-543a-4858-b5f8-7e33a1a22459" (UID: "87f5e945-543a-4858-b5f8-7e33a1a22459"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:54.479264 master-0 kubenswrapper[26425]: I0217 15:47:54.479194 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-config-data" (OuterVolumeSpecName: "config-data") pod "87f5e945-543a-4858-b5f8-7e33a1a22459" (UID: "87f5e945-543a-4858-b5f8-7e33a1a22459"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:54.519477 master-0 kubenswrapper[26425]: I0217 15:47:54.516324 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87f5e945-543a-4858-b5f8-7e33a1a22459" (UID: "87f5e945-543a-4858-b5f8-7e33a1a22459"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:47:54.541955 master-0 kubenswrapper[26425]: I0217 15:47:54.541591 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtkpf\" (UniqueName: \"kubernetes.io/projected/87f5e945-543a-4858-b5f8-7e33a1a22459-kube-api-access-vtkpf\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:54.541955 master-0 kubenswrapper[26425]: I0217 15:47:54.541637 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:54.541955 master-0 kubenswrapper[26425]: I0217 15:47:54.541653 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:54.541955 master-0 kubenswrapper[26425]: I0217 15:47:54.541668 26425 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/87f5e945-543a-4858-b5f8-7e33a1a22459-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:54.541955 master-0 kubenswrapper[26425]: I0217 15:47:54.541683 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f5e945-543a-4858-b5f8-7e33a1a22459-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:47:54.762201 master-0 kubenswrapper[26425]: I0217 15:47:54.761902 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-8zl8z" Feb 17 15:47:54.762201 master-0 kubenswrapper[26425]: I0217 15:47:54.761917 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-8zl8z" event={"ID":"87f5e945-543a-4858-b5f8-7e33a1a22459","Type":"ContainerDied","Data":"90d8738b2409ab6bad217db6066d6538d7f1a0eb069408c1ec66d86f3b3fc2b0"} Feb 17 15:47:54.762201 master-0 kubenswrapper[26425]: I0217 15:47:54.762037 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90d8738b2409ab6bad217db6066d6538d7f1a0eb069408c1ec66d86f3b3fc2b0" Feb 17 15:47:55.672543 master-0 kubenswrapper[26425]: I0217 15:47:55.671406 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-vmh7f"] Feb 17 15:47:55.714273 master-0 kubenswrapper[26425]: E0217 15:47:55.713250 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f5e945-543a-4858-b5f8-7e33a1a22459" containerName="init" Feb 17 15:47:55.714273 master-0 kubenswrapper[26425]: I0217 15:47:55.713304 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f5e945-543a-4858-b5f8-7e33a1a22459" containerName="init" Feb 17 15:47:55.714273 master-0 kubenswrapper[26425]: E0217 15:47:55.713329 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f5e945-543a-4858-b5f8-7e33a1a22459" containerName="ironic-db-sync" Feb 17 15:47:55.714273 master-0 kubenswrapper[26425]: 
I0217 15:47:55.713337 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f5e945-543a-4858-b5f8-7e33a1a22459" containerName="ironic-db-sync" Feb 17 15:47:55.731778 master-0 kubenswrapper[26425]: I0217 15:47:55.731656 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f5e945-543a-4858-b5f8-7e33a1a22459" containerName="ironic-db-sync" Feb 17 15:47:55.750599 master-0 kubenswrapper[26425]: I0217 15:47:55.748363 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-vmh7f"] Feb 17 15:47:55.750599 master-0 kubenswrapper[26425]: I0217 15:47:55.748529 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-vmh7f" Feb 17 15:47:55.890483 master-0 kubenswrapper[26425]: I0217 15:47:55.890054 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dzrn\" (UniqueName: \"kubernetes.io/projected/80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e-kube-api-access-2dzrn\") pod \"ironic-inspector-db-create-vmh7f\" (UID: \"80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e\") " pod="openstack/ironic-inspector-db-create-vmh7f" Feb 17 15:47:55.890483 master-0 kubenswrapper[26425]: I0217 15:47:55.890423 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e-operator-scripts\") pod \"ironic-inspector-db-create-vmh7f\" (UID: \"80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e\") " pod="openstack/ironic-inspector-db-create-vmh7f" Feb 17 15:47:55.996341 master-0 kubenswrapper[26425]: I0217 15:47:55.993830 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e-operator-scripts\") pod \"ironic-inspector-db-create-vmh7f\" (UID: \"80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e\") " 
pod="openstack/ironic-inspector-db-create-vmh7f" Feb 17 15:47:55.996341 master-0 kubenswrapper[26425]: I0217 15:47:55.993933 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dzrn\" (UniqueName: \"kubernetes.io/projected/80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e-kube-api-access-2dzrn\") pod \"ironic-inspector-db-create-vmh7f\" (UID: \"80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e\") " pod="openstack/ironic-inspector-db-create-vmh7f" Feb 17 15:47:55.996341 master-0 kubenswrapper[26425]: I0217 15:47:55.994936 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e-operator-scripts\") pod \"ironic-inspector-db-create-vmh7f\" (UID: \"80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e\") " pod="openstack/ironic-inspector-db-create-vmh7f" Feb 17 15:47:56.014480 master-0 kubenswrapper[26425]: I0217 15:47:56.013100 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c54fb858c-f69kf"] Feb 17 15:47:56.014480 master-0 kubenswrapper[26425]: I0217 15:47:56.013422 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c54fb858c-f69kf" podUID="9c0c18df-1767-4810-ad4b-2b954d38e60f" containerName="dnsmasq-dns" containerID="cri-o://08570b0bb72d0354c2b7168c6cfb3cce5c6760ce82ca47228797273c2f9275a6" gracePeriod=10 Feb 17 15:47:56.018494 master-0 kubenswrapper[26425]: I0217 15:47:56.015804 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c54fb858c-f69kf" Feb 17 15:47:56.039201 master-0 kubenswrapper[26425]: I0217 15:47:56.037786 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-c54fb858c-f69kf" podUID="9c0c18df-1767-4810-ad4b-2b954d38e60f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.230:5353: connect: connection refused" Feb 17 15:47:56.147477 master-0 
kubenswrapper[26425]: I0217 15:47:56.147324 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-04ef3-backup-0" Feb 17 15:47:56.210391 master-0 kubenswrapper[26425]: I0217 15:47:56.210306 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-04ef3-scheduler-0" Feb 17 15:47:56.564572 master-0 kubenswrapper[26425]: I0217 15:47:56.560475 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-016b-account-create-update-v8zdc"] Feb 17 15:47:56.564572 master-0 kubenswrapper[26425]: I0217 15:47:56.562012 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" Feb 17 15:47:56.564572 master-0 kubenswrapper[26425]: I0217 15:47:56.563189 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dzrn\" (UniqueName: \"kubernetes.io/projected/80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e-kube-api-access-2dzrn\") pod \"ironic-inspector-db-create-vmh7f\" (UID: \"80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e\") " pod="openstack/ironic-inspector-db-create-vmh7f" Feb 17 15:47:56.591851 master-0 kubenswrapper[26425]: I0217 15:47:56.588589 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Feb 17 15:47:56.611381 master-0 kubenswrapper[26425]: I0217 15:47:56.611328 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca551755-0560-44aa-b5f9-3e9bfc9984af-operator-scripts\") pod \"ironic-inspector-016b-account-create-update-v8zdc\" (UID: \"ca551755-0560-44aa-b5f9-3e9bfc9984af\") " pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" Feb 17 15:47:56.611725 master-0 kubenswrapper[26425]: I0217 15:47:56.611706 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-sfq7h\" (UniqueName: \"kubernetes.io/projected/ca551755-0560-44aa-b5f9-3e9bfc9984af-kube-api-access-sfq7h\") pod \"ironic-inspector-016b-account-create-update-v8zdc\" (UID: \"ca551755-0560-44aa-b5f9-3e9bfc9984af\") " pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" Feb 17 15:47:56.613038 master-0 kubenswrapper[26425]: I0217 15:47:56.613017 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b9c77ddfc-d9zgc"] Feb 17 15:47:56.617774 master-0 kubenswrapper[26425]: I0217 15:47:56.617742 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.633877 master-0 kubenswrapper[26425]: I0217 15:47:56.633821 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-016b-account-create-update-v8zdc"] Feb 17 15:47:56.643534 master-0 kubenswrapper[26425]: I0217 15:47:56.643501 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c77ddfc-d9zgc"] Feb 17 15:47:56.678626 master-0 kubenswrapper[26425]: I0217 15:47:56.678578 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-vmh7f" Feb 17 15:47:56.714585 master-0 kubenswrapper[26425]: I0217 15:47:56.714526 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.714771 master-0 kubenswrapper[26425]: I0217 15:47:56.714676 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.715062 master-0 kubenswrapper[26425]: I0217 15:47:56.714798 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m49dz\" (UniqueName: \"kubernetes.io/projected/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-kube-api-access-m49dz\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.715062 master-0 kubenswrapper[26425]: I0217 15:47:56.714955 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca551755-0560-44aa-b5f9-3e9bfc9984af-operator-scripts\") pod \"ironic-inspector-016b-account-create-update-v8zdc\" (UID: \"ca551755-0560-44aa-b5f9-3e9bfc9984af\") " pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" Feb 17 15:47:56.715347 master-0 kubenswrapper[26425]: I0217 15:47:56.715115 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.715347 master-0 kubenswrapper[26425]: I0217 15:47:56.715209 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-88dd96889-vwkh6"] Feb 17 15:47:56.716386 master-0 kubenswrapper[26425]: I0217 15:47:56.715389 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-dns-svc\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.716386 master-0 kubenswrapper[26425]: I0217 15:47:56.715429 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfq7h\" (UniqueName: \"kubernetes.io/projected/ca551755-0560-44aa-b5f9-3e9bfc9984af-kube-api-access-sfq7h\") pod \"ironic-inspector-016b-account-create-update-v8zdc\" (UID: \"ca551755-0560-44aa-b5f9-3e9bfc9984af\") " pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" Feb 17 15:47:56.716386 master-0 kubenswrapper[26425]: I0217 15:47:56.715575 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-config\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.716386 master-0 kubenswrapper[26425]: I0217 15:47:56.715819 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca551755-0560-44aa-b5f9-3e9bfc9984af-operator-scripts\") pod 
\"ironic-inspector-016b-account-create-update-v8zdc\" (UID: \"ca551755-0560-44aa-b5f9-3e9bfc9984af\") " pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" Feb 17 15:47:56.717213 master-0 kubenswrapper[26425]: I0217 15:47:56.717179 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" Feb 17 15:47:56.719844 master-0 kubenswrapper[26425]: I0217 15:47:56.719825 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Feb 17 15:47:56.819209 master-0 kubenswrapper[26425]: I0217 15:47:56.817880 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-dns-svc\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.819209 master-0 kubenswrapper[26425]: I0217 15:47:56.817983 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtrrk\" (UniqueName: \"kubernetes.io/projected/ea8f52d0-e4bb-4457-b7f7-33133e152096-kube-api-access-dtrrk\") pod \"ironic-neutron-agent-88dd96889-vwkh6\" (UID: \"ea8f52d0-e4bb-4457-b7f7-33133e152096\") " pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" Feb 17 15:47:56.819209 master-0 kubenswrapper[26425]: I0217 15:47:56.818034 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8f52d0-e4bb-4457-b7f7-33133e152096-combined-ca-bundle\") pod \"ironic-neutron-agent-88dd96889-vwkh6\" (UID: \"ea8f52d0-e4bb-4457-b7f7-33133e152096\") " pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" Feb 17 15:47:56.819209 master-0 kubenswrapper[26425]: I0217 15:47:56.818087 26425 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-config\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.819209 master-0 kubenswrapper[26425]: I0217 15:47:56.818156 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.819209 master-0 kubenswrapper[26425]: I0217 15:47:56.818253 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.819209 master-0 kubenswrapper[26425]: I0217 15:47:56.818336 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ea8f52d0-e4bb-4457-b7f7-33133e152096-config\") pod \"ironic-neutron-agent-88dd96889-vwkh6\" (UID: \"ea8f52d0-e4bb-4457-b7f7-33133e152096\") " pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" Feb 17 15:47:56.819209 master-0 kubenswrapper[26425]: I0217 15:47:56.818436 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m49dz\" (UniqueName: \"kubernetes.io/projected/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-kube-api-access-m49dz\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.819209 master-0 kubenswrapper[26425]: I0217 15:47:56.818555 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.820996 master-0 kubenswrapper[26425]: I0217 15:47:56.820956 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.821956 master-0 kubenswrapper[26425]: I0217 15:47:56.821934 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-dns-svc\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.823132 master-0 kubenswrapper[26425]: I0217 15:47:56.823094 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-config\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.824409 master-0 kubenswrapper[26425]: I0217 15:47:56.824378 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.825861 master-0 kubenswrapper[26425]: I0217 15:47:56.825837 26425 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:56.886484 master-0 kubenswrapper[26425]: I0217 15:47:56.885667 26425 generic.go:334] "Generic (PLEG): container finished" podID="9c0c18df-1767-4810-ad4b-2b954d38e60f" containerID="08570b0bb72d0354c2b7168c6cfb3cce5c6760ce82ca47228797273c2f9275a6" exitCode=0 Feb 17 15:47:56.886484 master-0 kubenswrapper[26425]: I0217 15:47:56.885714 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c54fb858c-f69kf" event={"ID":"9c0c18df-1767-4810-ad4b-2b954d38e60f","Type":"ContainerDied","Data":"08570b0bb72d0354c2b7168c6cfb3cce5c6760ce82ca47228797273c2f9275a6"} Feb 17 15:47:56.926876 master-0 kubenswrapper[26425]: I0217 15:47:56.921497 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ea8f52d0-e4bb-4457-b7f7-33133e152096-config\") pod \"ironic-neutron-agent-88dd96889-vwkh6\" (UID: \"ea8f52d0-e4bb-4457-b7f7-33133e152096\") " pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" Feb 17 15:47:56.926876 master-0 kubenswrapper[26425]: I0217 15:47:56.921659 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtrrk\" (UniqueName: \"kubernetes.io/projected/ea8f52d0-e4bb-4457-b7f7-33133e152096-kube-api-access-dtrrk\") pod \"ironic-neutron-agent-88dd96889-vwkh6\" (UID: \"ea8f52d0-e4bb-4457-b7f7-33133e152096\") " pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" Feb 17 15:47:56.926876 master-0 kubenswrapper[26425]: I0217 15:47:56.921688 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8f52d0-e4bb-4457-b7f7-33133e152096-combined-ca-bundle\") pod 
\"ironic-neutron-agent-88dd96889-vwkh6\" (UID: \"ea8f52d0-e4bb-4457-b7f7-33133e152096\") " pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" Feb 17 15:47:56.944483 master-0 kubenswrapper[26425]: I0217 15:47:56.938768 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8f52d0-e4bb-4457-b7f7-33133e152096-combined-ca-bundle\") pod \"ironic-neutron-agent-88dd96889-vwkh6\" (UID: \"ea8f52d0-e4bb-4457-b7f7-33133e152096\") " pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" Feb 17 15:47:56.944483 master-0 kubenswrapper[26425]: I0217 15:47:56.941059 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ea8f52d0-e4bb-4457-b7f7-33133e152096-config\") pod \"ironic-neutron-agent-88dd96889-vwkh6\" (UID: \"ea8f52d0-e4bb-4457-b7f7-33133e152096\") " pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" Feb 17 15:47:57.015906 master-0 kubenswrapper[26425]: I0217 15:47:57.015147 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtrrk\" (UniqueName: \"kubernetes.io/projected/ea8f52d0-e4bb-4457-b7f7-33133e152096-kube-api-access-dtrrk\") pod \"ironic-neutron-agent-88dd96889-vwkh6\" (UID: \"ea8f52d0-e4bb-4457-b7f7-33133e152096\") " pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" Feb 17 15:47:57.017165 master-0 kubenswrapper[26425]: I0217 15:47:57.017046 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfq7h\" (UniqueName: \"kubernetes.io/projected/ca551755-0560-44aa-b5f9-3e9bfc9984af-kube-api-access-sfq7h\") pod \"ironic-inspector-016b-account-create-update-v8zdc\" (UID: \"ca551755-0560-44aa-b5f9-3e9bfc9984af\") " pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" Feb 17 15:47:57.027487 master-0 kubenswrapper[26425]: I0217 15:47:57.024435 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m49dz\" (UniqueName: \"kubernetes.io/projected/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-kube-api-access-m49dz\") pod \"dnsmasq-dns-6b9c77ddfc-d9zgc\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:57.049801 master-0 kubenswrapper[26425]: I0217 15:47:57.049744 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-88dd96889-vwkh6"] Feb 17 15:47:57.128156 master-0 kubenswrapper[26425]: I0217 15:47:57.128103 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" Feb 17 15:47:57.173096 master-0 kubenswrapper[26425]: I0217 15:47:57.173047 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:57.221490 master-0 kubenswrapper[26425]: I0217 15:47:57.219130 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-7b6b8d45d-l4pv4"] Feb 17 15:47:57.222520 master-0 kubenswrapper[26425]: I0217 15:47:57.222169 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.226575 master-0 kubenswrapper[26425]: I0217 15:47:57.226497 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Feb 17 15:47:57.227971 master-0 kubenswrapper[26425]: I0217 15:47:57.227014 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 17 15:47:57.227971 master-0 kubenswrapper[26425]: I0217 15:47:57.227284 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Feb 17 15:47:57.227971 master-0 kubenswrapper[26425]: I0217 15:47:57.227699 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Feb 17 15:47:57.227971 master-0 kubenswrapper[26425]: I0217 15:47:57.227961 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 15:47:57.244929 master-0 kubenswrapper[26425]: I0217 15:47:57.242624 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data-merged\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.244929 master-0 kubenswrapper[26425]: I0217 15:47:57.242782 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.244929 master-0 kubenswrapper[26425]: I0217 15:47:57.242829 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-scripts\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.248693 master-0 kubenswrapper[26425]: I0217 15:47:57.248563 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-7b6b8d45d-l4pv4"] Feb 17 15:47:57.255018 master-0 kubenswrapper[26425]: I0217 15:47:57.254969 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" Feb 17 15:47:57.271863 master-0 kubenswrapper[26425]: I0217 15:47:57.269257 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:47:57.292502 master-0 kubenswrapper[26425]: I0217 15:47:57.292451 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:47:57.379659 master-0 kubenswrapper[26425]: I0217 15:47:57.379537 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.379957 master-0 kubenswrapper[26425]: I0217 15:47:57.379937 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-scripts\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.380075 master-0 kubenswrapper[26425]: I0217 15:47:57.380055 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r8c7\" (UniqueName: 
\"kubernetes.io/projected/5fe78f22-b268-44d3-8be8-d305135ed9ca-kube-api-access-9r8c7\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.380365 master-0 kubenswrapper[26425]: I0217 15:47:57.380348 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data-merged\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.380492 master-0 kubenswrapper[26425]: I0217 15:47:57.380478 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-combined-ca-bundle\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.380580 master-0 kubenswrapper[26425]: I0217 15:47:57.380567 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data-custom\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.380694 master-0 kubenswrapper[26425]: I0217 15:47:57.380681 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fe78f22-b268-44d3-8be8-d305135ed9ca-logs\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.380837 master-0 kubenswrapper[26425]: I0217 15:47:57.380823 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5fe78f22-b268-44d3-8be8-d305135ed9ca-etc-podinfo\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.393131 master-0 kubenswrapper[26425]: I0217 15:47:57.391044 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data-merged\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.410999 master-0 kubenswrapper[26425]: I0217 15:47:57.410967 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.418914 master-0 kubenswrapper[26425]: I0217 15:47:57.415102 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-scripts\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.482238 master-0 kubenswrapper[26425]: I0217 15:47:57.481645 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-combined-ca-bundle\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.482238 master-0 kubenswrapper[26425]: I0217 15:47:57.481698 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data-custom\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.482238 master-0 kubenswrapper[26425]: I0217 15:47:57.481720 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fe78f22-b268-44d3-8be8-d305135ed9ca-logs\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.482238 master-0 kubenswrapper[26425]: I0217 15:47:57.481774 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5fe78f22-b268-44d3-8be8-d305135ed9ca-etc-podinfo\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.482238 master-0 kubenswrapper[26425]: I0217 15:47:57.481824 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r8c7\" (UniqueName: \"kubernetes.io/projected/5fe78f22-b268-44d3-8be8-d305135ed9ca-kube-api-access-9r8c7\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.485094 master-0 kubenswrapper[26425]: I0217 15:47:57.484691 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-vmh7f"] Feb 17 15:47:57.486288 master-0 kubenswrapper[26425]: I0217 15:47:57.486262 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fe78f22-b268-44d3-8be8-d305135ed9ca-logs\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.494568 master-0 kubenswrapper[26425]: I0217 15:47:57.494098 26425 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5fe78f22-b268-44d3-8be8-d305135ed9ca-etc-podinfo\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.516182 master-0 kubenswrapper[26425]: I0217 15:47:57.516114 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data-custom\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.516669 master-0 kubenswrapper[26425]: I0217 15:47:57.516650 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-combined-ca-bundle\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.540981 master-0 kubenswrapper[26425]: I0217 15:47:57.540632 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r8c7\" (UniqueName: \"kubernetes.io/projected/5fe78f22-b268-44d3-8be8-d305135ed9ca-kube-api-access-9r8c7\") pod \"ironic-7b6b8d45d-l4pv4\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.579412 master-0 kubenswrapper[26425]: I0217 15:47:57.567210 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5b57c6d9b6-frt4v" Feb 17 15:47:57.597314 master-0 kubenswrapper[26425]: I0217 15:47:57.596777 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:47:57.884396 master-0 kubenswrapper[26425]: I0217 15:47:57.882163 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Feb 17 15:47:57.887713 master-0 kubenswrapper[26425]: I0217 15:47:57.886244 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Feb 17 15:47:57.890715 master-0 kubenswrapper[26425]: I0217 15:47:57.888826 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Feb 17 15:47:57.890715 master-0 kubenswrapper[26425]: I0217 15:47:57.888952 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Feb 17 15:47:57.900399 master-0 kubenswrapper[26425]: I0217 15:47:57.900343 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 17 15:47:57.921066 master-0 kubenswrapper[26425]: I0217 15:47:57.921037 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-04ef3-api-0" Feb 17 15:47:57.924477 master-0 kubenswrapper[26425]: I0217 15:47:57.923236 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c54fb858c-f69kf" event={"ID":"9c0c18df-1767-4810-ad4b-2b954d38e60f","Type":"ContainerDied","Data":"d401a20319eb4547778c06586daa723d7a92f941faa216db6235df042cd6a0e4"} Feb 17 15:47:57.924477 master-0 kubenswrapper[26425]: I0217 15:47:57.923289 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d401a20319eb4547778c06586daa723d7a92f941faa216db6235df042cd6a0e4" Feb 17 15:47:57.926582 master-0 kubenswrapper[26425]: I0217 15:47:57.925244 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-vmh7f" 
event={"ID":"80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e","Type":"ContainerStarted","Data":"c305105b1c906f18676c5dd6928b6886d6100d3e01a06de8e3fff4077261273c"}
Feb 17 15:47:57.956711 master-0 kubenswrapper[26425]: I0217 15:47:57.956618 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c54fb858c-f69kf"
Feb 17 15:47:58.104379 master-0 kubenswrapper[26425]: I0217 15:47:58.104302 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-dns-swift-storage-0\") pod \"9c0c18df-1767-4810-ad4b-2b954d38e60f\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") "
Feb 17 15:47:58.104705 master-0 kubenswrapper[26425]: I0217 15:47:58.104668 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-dns-svc\") pod \"9c0c18df-1767-4810-ad4b-2b954d38e60f\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") "
Feb 17 15:47:58.104705 master-0 kubenswrapper[26425]: I0217 15:47:58.104709 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-config\") pod \"9c0c18df-1767-4810-ad4b-2b954d38e60f\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") "
Feb 17 15:47:58.104832 master-0 kubenswrapper[26425]: I0217 15:47:58.104754 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-ovsdbserver-sb\") pod \"9c0c18df-1767-4810-ad4b-2b954d38e60f\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") "
Feb 17 15:47:58.104889 master-0 kubenswrapper[26425]: I0217 15:47:58.104841 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-ovsdbserver-nb\") pod \"9c0c18df-1767-4810-ad4b-2b954d38e60f\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") "
Feb 17 15:47:58.104889 master-0 kubenswrapper[26425]: I0217 15:47:58.104869 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgptn\" (UniqueName: \"kubernetes.io/projected/9c0c18df-1767-4810-ad4b-2b954d38e60f-kube-api-access-wgptn\") pod \"9c0c18df-1767-4810-ad4b-2b954d38e60f\" (UID: \"9c0c18df-1767-4810-ad4b-2b954d38e60f\") "
Feb 17 15:47:58.105668 master-0 kubenswrapper[26425]: I0217 15:47:58.105244 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knrzc\" (UniqueName: \"kubernetes.io/projected/1c26c340-473b-49c9-a62f-1915fac7b655-kube-api-access-knrzc\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.105668 master-0 kubenswrapper[26425]: I0217 15:47:58.105320 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c26c340-473b-49c9-a62f-1915fac7b655-scripts\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.105668 master-0 kubenswrapper[26425]: I0217 15:47:58.105359 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1c26c340-473b-49c9-a62f-1915fac7b655-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.105668 master-0 kubenswrapper[26425]: I0217 15:47:58.105490 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c26c340-473b-49c9-a62f-1915fac7b655-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.105668 master-0 kubenswrapper[26425]: I0217 15:47:58.105537 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-61b8ec08-c1ae-4dfd-b80c-a05eee1e3066\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e93eaad8-76bb-462c-8564-667a1496705f\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.105933 master-0 kubenswrapper[26425]: I0217 15:47:58.105681 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c26c340-473b-49c9-a62f-1915fac7b655-config-data\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.105933 master-0 kubenswrapper[26425]: I0217 15:47:58.105894 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1c26c340-473b-49c9-a62f-1915fac7b655-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.106020 master-0 kubenswrapper[26425]: I0217 15:47:58.105960 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c26c340-473b-49c9-a62f-1915fac7b655-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.109034 master-0 kubenswrapper[26425]: I0217 15:47:58.108981 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c0c18df-1767-4810-ad4b-2b954d38e60f-kube-api-access-wgptn" (OuterVolumeSpecName: "kube-api-access-wgptn") pod "9c0c18df-1767-4810-ad4b-2b954d38e60f" (UID: "9c0c18df-1767-4810-ad4b-2b954d38e60f"). InnerVolumeSpecName "kube-api-access-wgptn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:47:58.173324 master-0 kubenswrapper[26425]: I0217 15:47:58.173121 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-config" (OuterVolumeSpecName: "config") pod "9c0c18df-1767-4810-ad4b-2b954d38e60f" (UID: "9c0c18df-1767-4810-ad4b-2b954d38e60f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:47:58.201089 master-0 kubenswrapper[26425]: I0217 15:47:58.201006 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9c0c18df-1767-4810-ad4b-2b954d38e60f" (UID: "9c0c18df-1767-4810-ad4b-2b954d38e60f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:47:58.208132 master-0 kubenswrapper[26425]: I0217 15:47:58.208083 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knrzc\" (UniqueName: \"kubernetes.io/projected/1c26c340-473b-49c9-a62f-1915fac7b655-kube-api-access-knrzc\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.208507 master-0 kubenswrapper[26425]: I0217 15:47:58.208485 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c26c340-473b-49c9-a62f-1915fac7b655-scripts\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.208588 master-0 kubenswrapper[26425]: I0217 15:47:58.208525 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1c26c340-473b-49c9-a62f-1915fac7b655-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.208637 master-0 kubenswrapper[26425]: I0217 15:47:58.208610 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c26c340-473b-49c9-a62f-1915fac7b655-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.208773 master-0 kubenswrapper[26425]: I0217 15:47:58.208685 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c26c340-473b-49c9-a62f-1915fac7b655-config-data\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.208825 master-0 kubenswrapper[26425]: I0217 15:47:58.208777 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1c26c340-473b-49c9-a62f-1915fac7b655-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.208825 master-0 kubenswrapper[26425]: I0217 15:47:58.208811 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c26c340-473b-49c9-a62f-1915fac7b655-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.209042 master-0 kubenswrapper[26425]: I0217 15:47:58.208921 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:58.209042 master-0 kubenswrapper[26425]: I0217 15:47:58.208938 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:58.209042 master-0 kubenswrapper[26425]: I0217 15:47:58.208952 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgptn\" (UniqueName: \"kubernetes.io/projected/9c0c18df-1767-4810-ad4b-2b954d38e60f-kube-api-access-wgptn\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:58.210964 master-0 kubenswrapper[26425]: I0217 15:47:58.210854 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1c26c340-473b-49c9-a62f-1915fac7b655-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.212498 master-0 kubenswrapper[26425]: I0217 15:47:58.212419 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c26c340-473b-49c9-a62f-1915fac7b655-scripts\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.213633 master-0 kubenswrapper[26425]: I0217 15:47:58.213582 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1c26c340-473b-49c9-a62f-1915fac7b655-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.214146 master-0 kubenswrapper[26425]: I0217 15:47:58.214119 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c26c340-473b-49c9-a62f-1915fac7b655-config-data\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.215672 master-0 kubenswrapper[26425]: I0217 15:47:58.215630 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c26c340-473b-49c9-a62f-1915fac7b655-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.218000 master-0 kubenswrapper[26425]: I0217 15:47:58.217970 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c26c340-473b-49c9-a62f-1915fac7b655-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.219781 master-0 kubenswrapper[26425]: I0217 15:47:58.219730 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9c0c18df-1767-4810-ad4b-2b954d38e60f" (UID: "9c0c18df-1767-4810-ad4b-2b954d38e60f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:47:58.225074 master-0 kubenswrapper[26425]: I0217 15:47:58.225025 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9c0c18df-1767-4810-ad4b-2b954d38e60f" (UID: "9c0c18df-1767-4810-ad4b-2b954d38e60f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:47:58.241614 master-0 kubenswrapper[26425]: I0217 15:47:58.241544 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9c0c18df-1767-4810-ad4b-2b954d38e60f" (UID: "9c0c18df-1767-4810-ad4b-2b954d38e60f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:47:58.400382 master-0 kubenswrapper[26425]: I0217 15:47:58.363030 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-61b8ec08-c1ae-4dfd-b80c-a05eee1e3066\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e93eaad8-76bb-462c-8564-667a1496705f\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.400382 master-0 kubenswrapper[26425]: I0217 15:47:58.363368 26425 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:58.400382 master-0 kubenswrapper[26425]: I0217 15:47:58.363384 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:58.400382 master-0 kubenswrapper[26425]: I0217 15:47:58.363421 26425 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c0c18df-1767-4810-ad4b-2b954d38e60f-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 17 15:47:58.400382 master-0 kubenswrapper[26425]: I0217 15:47:58.365911 26425 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 15:47:58.400382 master-0 kubenswrapper[26425]: I0217 15:47:58.365942 26425 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-61b8ec08-c1ae-4dfd-b80c-a05eee1e3066\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e93eaad8-76bb-462c-8564-667a1496705f\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/2d3b71a1ff9ae7c18337e389eebd95de07e46d25913d1f0d77154338689ab98c/globalmount\"" pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.568777 master-0 kubenswrapper[26425]: I0217 15:47:58.567181 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-016b-account-create-update-v8zdc"]
Feb 17 15:47:58.582845 master-0 kubenswrapper[26425]: I0217 15:47:58.582301 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-88dd96889-vwkh6"]
Feb 17 15:47:58.606134 master-0 kubenswrapper[26425]: I0217 15:47:58.606077 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knrzc\" (UniqueName: \"kubernetes.io/projected/1c26c340-473b-49c9-a62f-1915fac7b655-kube-api-access-knrzc\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:58.940424 master-0 kubenswrapper[26425]: I0217 15:47:58.940282 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" event={"ID":"ca551755-0560-44aa-b5f9-3e9bfc9984af","Type":"ContainerStarted","Data":"4bbcf0c69284bef768b3a8f66af41bec88edfb7f77fb63bf9e422c435357e1fd"}
Feb 17 15:47:58.943449 master-0 kubenswrapper[26425]: I0217 15:47:58.943387 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" event={"ID":"ea8f52d0-e4bb-4457-b7f7-33133e152096","Type":"ContainerStarted","Data":"e56a96daaf9577422d2a8874bcee49a764899c7e0fa9c894d828ecd3ee765121"}
Feb 17 15:47:58.945012 master-0 kubenswrapper[26425]: I0217 15:47:58.944977 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-vmh7f" event={"ID":"80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e","Type":"ContainerStarted","Data":"84df170bf27564f44d0dbc24c00f5ced3ae912d748647862b5d60374038b8fd0"}
Feb 17 15:47:58.945079 master-0 kubenswrapper[26425]: I0217 15:47:58.945031 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c54fb858c-f69kf"
Feb 17 15:47:59.821625 master-0 kubenswrapper[26425]: I0217 15:47:59.820481 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-7b6b8d45d-l4pv4"]
Feb 17 15:47:59.834922 master-0 kubenswrapper[26425]: I0217 15:47:59.834844 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c77ddfc-d9zgc"]
Feb 17 15:47:59.990154 master-0 kubenswrapper[26425]: I0217 15:47:59.990062 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-61b8ec08-c1ae-4dfd-b80c-a05eee1e3066\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e93eaad8-76bb-462c-8564-667a1496705f\") pod \"ironic-conductor-0\" (UID: \"1c26c340-473b-49c9-a62f-1915fac7b655\") " pod="openstack/ironic-conductor-0"
Feb 17 15:47:59.998446 master-0 kubenswrapper[26425]: I0217 15:47:59.998385 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" event={"ID":"ca551755-0560-44aa-b5f9-3e9bfc9984af","Type":"ContainerStarted","Data":"175440b5d051ec5ecc23fe38127fe38ac3f6e39814c6637d0a0b21a8990aa777"}
Feb 17 15:48:00.002105 master-0 kubenswrapper[26425]: I0217 15:48:00.002054 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" event={"ID":"5af5f023-f51c-448d-9df7-d4e9ec69ca7e","Type":"ContainerStarted","Data":"3e3eb754ec6736e360bb3284f6665d84bdaca1a1fe0e55b6c671b428bf38e288"}
Feb 17 15:48:00.003999 master-0 kubenswrapper[26425]: I0217 15:48:00.003955 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7b6b8d45d-l4pv4" event={"ID":"5fe78f22-b268-44d3-8be8-d305135ed9ca","Type":"ContainerStarted","Data":"8902f6502f4e47c5b2266a6cdead4e5cf322d0e34d043d65a4d827c26d38a316"}
Feb 17 15:48:00.006856 master-0 kubenswrapper[26425]: I0217 15:48:00.005762 26425 generic.go:334] "Generic (PLEG): container finished" podID="80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e" containerID="84df170bf27564f44d0dbc24c00f5ced3ae912d748647862b5d60374038b8fd0" exitCode=0
Feb 17 15:48:00.006856 master-0 kubenswrapper[26425]: I0217 15:48:00.005802 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-vmh7f" event={"ID":"80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e","Type":"ContainerDied","Data":"84df170bf27564f44d0dbc24c00f5ced3ae912d748647862b5d60374038b8fd0"}
Feb 17 15:48:00.086143 master-0 kubenswrapper[26425]: I0217 15:48:00.084129 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0"
Feb 17 15:48:00.099270 master-0 kubenswrapper[26425]: I0217 15:48:00.098766 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-create-vmh7f" podStartSLOduration=5.09860212 podStartE2EDuration="5.09860212s" podCreationTimestamp="2026-02-17 15:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:48:00.080398683 +0000 UTC m=+1941.972122511" watchObservedRunningTime="2026-02-17 15:48:00.09860212 +0000 UTC m=+1941.990325948"
Feb 17 15:48:00.180917 master-0 kubenswrapper[26425]: I0217 15:48:00.180806 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5fd74d8d4b-qd7wh"]
Feb 17 15:48:00.181616 master-0 kubenswrapper[26425]: E0217 15:48:00.181599 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0c18df-1767-4810-ad4b-2b954d38e60f" containerName="init"
Feb 17 15:48:00.181695 master-0 kubenswrapper[26425]: I0217 15:48:00.181685 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0c18df-1767-4810-ad4b-2b954d38e60f" containerName="init"
Feb 17 15:48:00.182260 master-0 kubenswrapper[26425]: E0217 15:48:00.182245 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0c18df-1767-4810-ad4b-2b954d38e60f" containerName="dnsmasq-dns"
Feb 17 15:48:00.182357 master-0 kubenswrapper[26425]: I0217 15:48:00.182346 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0c18df-1767-4810-ad4b-2b954d38e60f" containerName="dnsmasq-dns"
Feb 17 15:48:00.182726 master-0 kubenswrapper[26425]: I0217 15:48:00.182711 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c0c18df-1767-4810-ad4b-2b954d38e60f" containerName="dnsmasq-dns"
Feb 17 15:48:00.183948 master-0 kubenswrapper[26425]: I0217 15:48:00.183929 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.262578 master-0 kubenswrapper[26425]: I0217 15:48:00.262520 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5fd74d8d4b-qd7wh"]
Feb 17 15:48:00.310058 master-0 kubenswrapper[26425]: I0217 15:48:00.309818 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-combined-ca-bundle\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.310058 master-0 kubenswrapper[26425]: I0217 15:48:00.309917 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-config-data\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.310058 master-0 kubenswrapper[26425]: I0217 15:48:00.309953 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/777f4777-85b4-4189-8149-46fe87965462-logs\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.310058 master-0 kubenswrapper[26425]: I0217 15:48:00.310014 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-internal-tls-certs\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.310058 master-0 kubenswrapper[26425]: I0217 15:48:00.310045 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-scripts\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.310489 master-0 kubenswrapper[26425]: I0217 15:48:00.310078 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nh9z\" (UniqueName: \"kubernetes.io/projected/777f4777-85b4-4189-8149-46fe87965462-kube-api-access-6nh9z\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.310489 master-0 kubenswrapper[26425]: I0217 15:48:00.310134 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-public-tls-certs\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.326600 master-0 kubenswrapper[26425]: I0217 15:48:00.318808 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c54fb858c-f69kf"]
Feb 17 15:48:00.421050 master-0 kubenswrapper[26425]: I0217 15:48:00.417817 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-scripts\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.421050 master-0 kubenswrapper[26425]: I0217 15:48:00.417902 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nh9z\" (UniqueName: \"kubernetes.io/projected/777f4777-85b4-4189-8149-46fe87965462-kube-api-access-6nh9z\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.421050 master-0 kubenswrapper[26425]: I0217 15:48:00.418300 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-public-tls-certs\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.421050 master-0 kubenswrapper[26425]: I0217 15:48:00.418410 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-combined-ca-bundle\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.421050 master-0 kubenswrapper[26425]: I0217 15:48:00.419213 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-config-data\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.421050 master-0 kubenswrapper[26425]: I0217 15:48:00.419278 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/777f4777-85b4-4189-8149-46fe87965462-logs\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.421050 master-0 kubenswrapper[26425]: I0217 15:48:00.419401 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-internal-tls-certs\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.421050 master-0 kubenswrapper[26425]: I0217 15:48:00.420242 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/777f4777-85b4-4189-8149-46fe87965462-logs\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.427199 master-0 kubenswrapper[26425]: I0217 15:48:00.422715 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-combined-ca-bundle\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.427199 master-0 kubenswrapper[26425]: I0217 15:48:00.423379 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-public-tls-certs\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.427199 master-0 kubenswrapper[26425]: I0217 15:48:00.426096 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-config-data\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.427199 master-0 kubenswrapper[26425]: I0217 15:48:00.426889 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-internal-tls-certs\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.432081 master-0 kubenswrapper[26425]: I0217 15:48:00.431976 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/777f4777-85b4-4189-8149-46fe87965462-scripts\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.462617 master-0 kubenswrapper[26425]: I0217 15:48:00.460988 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c54fb858c-f69kf"]
Feb 17 15:48:00.490071 master-0 kubenswrapper[26425]: I0217 15:48:00.490024 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nh9z\" (UniqueName: \"kubernetes.io/projected/777f4777-85b4-4189-8149-46fe87965462-kube-api-access-6nh9z\") pod \"placement-5fd74d8d4b-qd7wh\" (UID: \"777f4777-85b4-4189-8149-46fe87965462\") " pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.577911 master-0 kubenswrapper[26425]: I0217 15:48:00.574835 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5fd74d8d4b-qd7wh"
Feb 17 15:48:00.585719 master-0 kubenswrapper[26425]: E0217 15:48:00.585647 26425 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5af5f023_f51c_448d_9df7_d4e9ec69ca7e.slice/crio-f13447e3e0c1f00a8eef7a0f6e5dee58961401584a0e92e2d945179fa0f56c49.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca551755_0560_44aa_b5f9_3e9bfc9984af.slice/crio-175440b5d051ec5ecc23fe38127fe38ac3f6e39814c6637d0a0b21a8990aa777.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 15:48:01.023327 master-0 kubenswrapper[26425]: I0217 15:48:01.023135 26425 generic.go:334] "Generic (PLEG): container finished" podID="ca551755-0560-44aa-b5f9-3e9bfc9984af" containerID="175440b5d051ec5ecc23fe38127fe38ac3f6e39814c6637d0a0b21a8990aa777" exitCode=0
Feb 17 15:48:01.023327 master-0 kubenswrapper[26425]: I0217 15:48:01.023262 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" event={"ID":"ca551755-0560-44aa-b5f9-3e9bfc9984af","Type":"ContainerDied","Data":"175440b5d051ec5ecc23fe38127fe38ac3f6e39814c6637d0a0b21a8990aa777"}
Feb 17 15:48:01.024821 master-0 kubenswrapper[26425]: I0217 15:48:01.024789 26425 generic.go:334] "Generic (PLEG): container finished" podID="5af5f023-f51c-448d-9df7-d4e9ec69ca7e" containerID="f13447e3e0c1f00a8eef7a0f6e5dee58961401584a0e92e2d945179fa0f56c49" exitCode=0
Feb 17 15:48:01.024922 master-0 kubenswrapper[26425]: I0217 15:48:01.024818 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" event={"ID":"5af5f023-f51c-448d-9df7-d4e9ec69ca7e","Type":"ContainerDied","Data":"f13447e3e0c1f00a8eef7a0f6e5dee58961401584a0e92e2d945179fa0f56c49"}
Feb 17 15:48:01.103356 master-0 kubenswrapper[26425]: I0217 15:48:01.102952 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"]
Feb 17 15:48:01.113196 master-0 kubenswrapper[26425]: I0217 15:48:01.113117 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" podStartSLOduration=6.113091049 podStartE2EDuration="6.113091049s" podCreationTimestamp="2026-02-17 15:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:48:01.062380444 +0000 UTC m=+1942.954104282" watchObservedRunningTime="2026-02-17 15:48:01.113091049 +0000 UTC m=+1943.004814867"
Feb 17 15:48:01.192728 master-0 kubenswrapper[26425]: I0217 15:48:01.190236 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5fd74d8d4b-qd7wh"]
Feb 17 15:48:01.499824 master-0 kubenswrapper[26425]: I0217 15:48:01.499760 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-04ef3-backup-0"
Feb 17 15:48:01.541565 master-0 kubenswrapper[26425]: W0217 15:48:01.532738 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod777f4777_85b4_4189_8149_46fe87965462.slice/crio-33f803c307582de0b1d4df521a56886664c98056f92e2d5be94e466fc240ee7d WatchSource:0}: Error finding container 33f803c307582de0b1d4df521a56886664c98056f92e2d5be94e466fc240ee7d: Status 404 returned error can't find the container with id 33f803c307582de0b1d4df521a56886664c98056f92e2d5be94e466fc240ee7d
Feb 17 15:48:01.546993 master-0 kubenswrapper[26425]: W0217 15:48:01.546943 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c26c340_473b_49c9_a62f_1915fac7b655.slice/crio-4337fbb96336776a99c63e8a5d878cc24f26e55bc76fee8f0d5acb4ceb2cbf6f WatchSource:0}: Error finding container 4337fbb96336776a99c63e8a5d878cc24f26e55bc76fee8f0d5acb4ceb2cbf6f: Status 404 returned error can't find the container with id 4337fbb96336776a99c63e8a5d878cc24f26e55bc76fee8f0d5acb4ceb2cbf6f
Feb 17 15:48:01.555477 master-0 kubenswrapper[26425]: I0217 15:48:01.553501 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-04ef3-scheduler-0"
Feb 17 15:48:01.867807 master-0 kubenswrapper[26425]: I0217 15:48:01.867172 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-vmh7f"
Feb 17 15:48:01.991238 master-0 kubenswrapper[26425]: I0217 15:48:01.991113 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dzrn\" (UniqueName: \"kubernetes.io/projected/80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e-kube-api-access-2dzrn\") pod \"80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e\" (UID: \"80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e\") "
Feb 17 15:48:01.991563 master-0 kubenswrapper[26425]: I0217 15:48:01.991537 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e-operator-scripts\") pod \"80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e\" (UID: \"80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e\") "
Feb 17 15:48:01.992739 master-0 kubenswrapper[26425]: I0217 15:48:01.992707 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e" (UID: "80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:48:02.001756 master-0 kubenswrapper[26425]: I0217 15:48:02.001698 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e-kube-api-access-2dzrn" (OuterVolumeSpecName: "kube-api-access-2dzrn") pod "80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e" (UID: "80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e"). InnerVolumeSpecName "kube-api-access-2dzrn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:48:02.047913 master-0 kubenswrapper[26425]: I0217 15:48:02.047860 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c26c340-473b-49c9-a62f-1915fac7b655","Type":"ContainerStarted","Data":"4337fbb96336776a99c63e8a5d878cc24f26e55bc76fee8f0d5acb4ceb2cbf6f"}
Feb 17 15:48:02.050833 master-0 kubenswrapper[26425]: I0217 15:48:02.050624 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-vmh7f" event={"ID":"80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e","Type":"ContainerDied","Data":"c305105b1c906f18676c5dd6928b6886d6100d3e01a06de8e3fff4077261273c"}
Feb 17 15:48:02.050833 master-0 kubenswrapper[26425]: I0217 15:48:02.050637 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-vmh7f"
Feb 17 15:48:02.050833 master-0 kubenswrapper[26425]: I0217 15:48:02.050651 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c305105b1c906f18676c5dd6928b6886d6100d3e01a06de8e3fff4077261273c"
Feb 17 15:48:02.052421 master-0 kubenswrapper[26425]: I0217 15:48:02.052381 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" event={"ID":"5af5f023-f51c-448d-9df7-d4e9ec69ca7e","Type":"ContainerStarted","Data":"6ada8af73075365b531c9b92a3e2d0ef7c549082e6f6f04db4f79bd471468556"}
Feb 17 15:48:02.052681 master-0 kubenswrapper[26425]: I0217 15:48:02.052645 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc"
Feb 17 15:48:02.059703 master-0 kubenswrapper[26425]: I0217 15:48:02.059519 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5fd74d8d4b-qd7wh" event={"ID":"777f4777-85b4-4189-8149-46fe87965462","Type":"ContainerStarted","Data":"e7136e175f9539bb95239d6ceb7fa07b2b630f5cd5d39fb169cee54740f882f7"}
Feb 17 15:48:02.059703 master-0 kubenswrapper[26425]: I0217 15:48:02.059549 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5fd74d8d4b-qd7wh" event={"ID":"777f4777-85b4-4189-8149-46fe87965462","Type":"ContainerStarted","Data":"33f803c307582de0b1d4df521a56886664c98056f92e2d5be94e466fc240ee7d"}
Feb 17 15:48:02.079306 master-0 kubenswrapper[26425]: I0217 15:48:02.079223 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" podStartSLOduration=6.079207018 podStartE2EDuration="6.079207018s" podCreationTimestamp="2026-02-17 15:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:48:02.075286774 +0000 UTC m=+1943.967010612"
watchObservedRunningTime="2026-02-17 15:48:02.079207018 +0000 UTC m=+1943.970930836" Feb 17 15:48:02.094252 master-0 kubenswrapper[26425]: I0217 15:48:02.094170 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:02.094252 master-0 kubenswrapper[26425]: I0217 15:48:02.094224 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dzrn\" (UniqueName: \"kubernetes.io/projected/80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e-kube-api-access-2dzrn\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:02.414954 master-0 kubenswrapper[26425]: I0217 15:48:02.414853 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c0c18df-1767-4810-ad4b-2b954d38e60f" path="/var/lib/kubelet/pods/9c0c18df-1767-4810-ad4b-2b954d38e60f/volumes" Feb 17 15:48:02.566117 master-0 kubenswrapper[26425]: I0217 15:48:02.565992 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-04ef3-volume-lvm-iscsi-0" Feb 17 15:48:02.644616 master-0 kubenswrapper[26425]: I0217 15:48:02.644103 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" Feb 17 15:48:02.717692 master-0 kubenswrapper[26425]: I0217 15:48:02.717627 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfq7h\" (UniqueName: \"kubernetes.io/projected/ca551755-0560-44aa-b5f9-3e9bfc9984af-kube-api-access-sfq7h\") pod \"ca551755-0560-44aa-b5f9-3e9bfc9984af\" (UID: \"ca551755-0560-44aa-b5f9-3e9bfc9984af\") " Feb 17 15:48:02.718316 master-0 kubenswrapper[26425]: I0217 15:48:02.717868 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca551755-0560-44aa-b5f9-3e9bfc9984af-operator-scripts\") pod \"ca551755-0560-44aa-b5f9-3e9bfc9984af\" (UID: \"ca551755-0560-44aa-b5f9-3e9bfc9984af\") " Feb 17 15:48:02.718481 master-0 kubenswrapper[26425]: I0217 15:48:02.718247 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca551755-0560-44aa-b5f9-3e9bfc9984af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca551755-0560-44aa-b5f9-3e9bfc9984af" (UID: "ca551755-0560-44aa-b5f9-3e9bfc9984af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:48:02.719583 master-0 kubenswrapper[26425]: I0217 15:48:02.719529 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca551755-0560-44aa-b5f9-3e9bfc9984af-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:02.735495 master-0 kubenswrapper[26425]: I0217 15:48:02.735418 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca551755-0560-44aa-b5f9-3e9bfc9984af-kube-api-access-sfq7h" (OuterVolumeSpecName: "kube-api-access-sfq7h") pod "ca551755-0560-44aa-b5f9-3e9bfc9984af" (UID: "ca551755-0560-44aa-b5f9-3e9bfc9984af"). 
InnerVolumeSpecName "kube-api-access-sfq7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:48:02.822121 master-0 kubenswrapper[26425]: I0217 15:48:02.822035 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfq7h\" (UniqueName: \"kubernetes.io/projected/ca551755-0560-44aa-b5f9-3e9bfc9984af-kube-api-access-sfq7h\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:03.026485 master-0 kubenswrapper[26425]: I0217 15:48:03.026295 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-566cf67fc4-2bm2p"] Feb 17 15:48:03.027004 master-0 kubenswrapper[26425]: E0217 15:48:03.026962 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca551755-0560-44aa-b5f9-3e9bfc9984af" containerName="mariadb-account-create-update" Feb 17 15:48:03.027004 master-0 kubenswrapper[26425]: I0217 15:48:03.026995 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca551755-0560-44aa-b5f9-3e9bfc9984af" containerName="mariadb-account-create-update" Feb 17 15:48:03.027101 master-0 kubenswrapper[26425]: E0217 15:48:03.027051 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e" containerName="mariadb-database-create" Feb 17 15:48:03.027101 master-0 kubenswrapper[26425]: I0217 15:48:03.027062 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e" containerName="mariadb-database-create" Feb 17 15:48:03.028622 master-0 kubenswrapper[26425]: I0217 15:48:03.027382 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca551755-0560-44aa-b5f9-3e9bfc9984af" containerName="mariadb-account-create-update" Feb 17 15:48:03.028622 master-0 kubenswrapper[26425]: I0217 15:48:03.027452 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e" containerName="mariadb-database-create" Feb 17 15:48:03.061546 master-0 kubenswrapper[26425]: I0217 15:48:03.061460 
26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-566cf67fc4-2bm2p"] Feb 17 15:48:03.062042 master-0 kubenswrapper[26425]: I0217 15:48:03.061601 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.063717 master-0 kubenswrapper[26425]: I0217 15:48:03.063690 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Feb 17 15:48:03.065330 master-0 kubenswrapper[26425]: I0217 15:48:03.064028 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc" Feb 17 15:48:03.076989 master-0 kubenswrapper[26425]: I0217 15:48:03.076794 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5fd74d8d4b-qd7wh" event={"ID":"777f4777-85b4-4189-8149-46fe87965462","Type":"ContainerStarted","Data":"f27df9bd8aa79e991cd2c7ff26c74d51ad0208934ddc1ce3d6ea87d60fde489f"} Feb 17 15:48:03.077764 master-0 kubenswrapper[26425]: I0217 15:48:03.077729 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5fd74d8d4b-qd7wh" Feb 17 15:48:03.077764 master-0 kubenswrapper[26425]: I0217 15:48:03.077755 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5fd74d8d4b-qd7wh" Feb 17 15:48:03.079795 master-0 kubenswrapper[26425]: I0217 15:48:03.078841 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" event={"ID":"ea8f52d0-e4bb-4457-b7f7-33133e152096","Type":"ContainerStarted","Data":"996efc59d6ffd787ce4ebef2156a06dd188d1c8fbc4bd23a6383211ec5e22dd1"} Feb 17 15:48:03.079795 master-0 kubenswrapper[26425]: I0217 15:48:03.078909 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" Feb 17 15:48:03.084298 master-0 kubenswrapper[26425]: I0217 15:48:03.084133 26425 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c26c340-473b-49c9-a62f-1915fac7b655","Type":"ContainerStarted","Data":"bf0cdfe8b0ab5fbc6be46363eea1769b7e72da259d63330aa726a57ccdf885f8"} Feb 17 15:48:03.085670 master-0 kubenswrapper[26425]: I0217 15:48:03.085623 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" Feb 17 15:48:03.086125 master-0 kubenswrapper[26425]: I0217 15:48:03.085917 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-016b-account-create-update-v8zdc" event={"ID":"ca551755-0560-44aa-b5f9-3e9bfc9984af","Type":"ContainerDied","Data":"4bbcf0c69284bef768b3a8f66af41bec88edfb7f77fb63bf9e422c435357e1fd"} Feb 17 15:48:03.086125 master-0 kubenswrapper[26425]: I0217 15:48:03.085967 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bbcf0c69284bef768b3a8f66af41bec88edfb7f77fb63bf9e422c435357e1fd" Feb 17 15:48:03.129100 master-0 kubenswrapper[26425]: I0217 15:48:03.128843 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71300086-e5db-405b-843e-efa8c3a683c3-logs\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.129318 master-0 kubenswrapper[26425]: I0217 15:48:03.129127 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-scripts\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.129318 master-0 kubenswrapper[26425]: I0217 15:48:03.129196 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-public-tls-certs\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.129488 master-0 kubenswrapper[26425]: I0217 15:48:03.129422 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-config-data\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.129612 master-0 kubenswrapper[26425]: I0217 15:48:03.129591 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-internal-tls-certs\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.129670 master-0 kubenswrapper[26425]: I0217 15:48:03.129656 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-combined-ca-bundle\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.129876 master-0 kubenswrapper[26425]: I0217 15:48:03.129854 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/71300086-e5db-405b-843e-efa8c3a683c3-config-data-merged\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.129946 master-0 kubenswrapper[26425]: I0217 15:48:03.129886 26425 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/71300086-e5db-405b-843e-efa8c3a683c3-etc-podinfo\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.130083 master-0 kubenswrapper[26425]: I0217 15:48:03.130030 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-config-data-custom\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.130238 master-0 kubenswrapper[26425]: I0217 15:48:03.130211 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntm89\" (UniqueName: \"kubernetes.io/projected/71300086-e5db-405b-843e-efa8c3a683c3-kube-api-access-ntm89\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.143092 master-0 kubenswrapper[26425]: I0217 15:48:03.143006 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" podStartSLOduration=4.149690465 podStartE2EDuration="7.14298245s" podCreationTimestamp="2026-02-17 15:47:56 +0000 UTC" firstStartedPulling="2026-02-17 15:47:58.630514443 +0000 UTC m=+1940.522238261" lastFinishedPulling="2026-02-17 15:48:01.623806428 +0000 UTC m=+1943.515530246" observedRunningTime="2026-02-17 15:48:03.132262962 +0000 UTC m=+1945.023986800" watchObservedRunningTime="2026-02-17 15:48:03.14298245 +0000 UTC m=+1945.034706278" Feb 17 15:48:03.167130 master-0 kubenswrapper[26425]: I0217 15:48:03.167006 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-5fd74d8d4b-qd7wh" podStartSLOduration=4.166970485 podStartE2EDuration="4.166970485s" podCreationTimestamp="2026-02-17 15:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:48:03.158995073 +0000 UTC m=+1945.050718911" watchObservedRunningTime="2026-02-17 15:48:03.166970485 +0000 UTC m=+1945.058694313" Feb 17 15:48:03.233054 master-0 kubenswrapper[26425]: I0217 15:48:03.232991 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/71300086-e5db-405b-843e-efa8c3a683c3-config-data-merged\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.233054 master-0 kubenswrapper[26425]: I0217 15:48:03.233057 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/71300086-e5db-405b-843e-efa8c3a683c3-etc-podinfo\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.233371 master-0 kubenswrapper[26425]: I0217 15:48:03.233100 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-config-data-custom\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.233371 master-0 kubenswrapper[26425]: I0217 15:48:03.233138 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntm89\" (UniqueName: \"kubernetes.io/projected/71300086-e5db-405b-843e-efa8c3a683c3-kube-api-access-ntm89\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " 
pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.233371 master-0 kubenswrapper[26425]: I0217 15:48:03.233248 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71300086-e5db-405b-843e-efa8c3a683c3-logs\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.233371 master-0 kubenswrapper[26425]: I0217 15:48:03.233306 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-scripts\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.233371 master-0 kubenswrapper[26425]: I0217 15:48:03.233325 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-public-tls-certs\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.233643 master-0 kubenswrapper[26425]: I0217 15:48:03.233379 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-config-data\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.233643 master-0 kubenswrapper[26425]: I0217 15:48:03.233429 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-internal-tls-certs\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.233643 master-0 
kubenswrapper[26425]: I0217 15:48:03.233461 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-combined-ca-bundle\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.234524 master-0 kubenswrapper[26425]: I0217 15:48:03.234498 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71300086-e5db-405b-843e-efa8c3a683c3-logs\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.236733 master-0 kubenswrapper[26425]: I0217 15:48:03.236689 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/71300086-e5db-405b-843e-efa8c3a683c3-config-data-merged\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.240643 master-0 kubenswrapper[26425]: I0217 15:48:03.240591 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-internal-tls-certs\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.240842 master-0 kubenswrapper[26425]: I0217 15:48:03.240731 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/71300086-e5db-405b-843e-efa8c3a683c3-etc-podinfo\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.240842 master-0 kubenswrapper[26425]: I0217 15:48:03.240727 26425 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-combined-ca-bundle\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.240961 master-0 kubenswrapper[26425]: I0217 15:48:03.240910 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-public-tls-certs\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.242265 master-0 kubenswrapper[26425]: I0217 15:48:03.242214 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-config-data-custom\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.243001 master-0 kubenswrapper[26425]: I0217 15:48:03.242954 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-scripts\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.247912 master-0 kubenswrapper[26425]: I0217 15:48:03.247871 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71300086-e5db-405b-843e-efa8c3a683c3-config-data\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.250095 master-0 kubenswrapper[26425]: I0217 15:48:03.250058 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ntm89\" (UniqueName: \"kubernetes.io/projected/71300086-e5db-405b-843e-efa8c3a683c3-kube-api-access-ntm89\") pod \"ironic-566cf67fc4-2bm2p\" (UID: \"71300086-e5db-405b-843e-efa8c3a683c3\") " pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:03.378346 master-0 kubenswrapper[26425]: I0217 15:48:03.378277 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-566cf67fc4-2bm2p" Feb 17 15:48:04.115506 master-0 kubenswrapper[26425]: I0217 15:48:04.115434 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-566cf67fc4-2bm2p"] Feb 17 15:48:04.118559 master-0 kubenswrapper[26425]: W0217 15:48:04.118508 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71300086_e5db_405b_843e_efa8c3a683c3.slice/crio-c638abf44f683f0c10b2c77f0471acb1e95f793ac4cec128b88be099ce0a03c2 WatchSource:0}: Error finding container c638abf44f683f0c10b2c77f0471acb1e95f793ac4cec128b88be099ce0a03c2: Status 404 returned error can't find the container with id c638abf44f683f0c10b2c77f0471acb1e95f793ac4cec128b88be099ce0a03c2 Feb 17 15:48:05.131951 master-0 kubenswrapper[26425]: I0217 15:48:05.131890 26425 generic.go:334] "Generic (PLEG): container finished" podID="ea8f52d0-e4bb-4457-b7f7-33133e152096" containerID="996efc59d6ffd787ce4ebef2156a06dd188d1c8fbc4bd23a6383211ec5e22dd1" exitCode=1 Feb 17 15:48:05.132493 master-0 kubenswrapper[26425]: I0217 15:48:05.131971 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" event={"ID":"ea8f52d0-e4bb-4457-b7f7-33133e152096","Type":"ContainerDied","Data":"996efc59d6ffd787ce4ebef2156a06dd188d1c8fbc4bd23a6383211ec5e22dd1"} Feb 17 15:48:05.132886 master-0 kubenswrapper[26425]: I0217 15:48:05.132850 26425 scope.go:117] "RemoveContainer" containerID="996efc59d6ffd787ce4ebef2156a06dd188d1c8fbc4bd23a6383211ec5e22dd1" Feb 17 15:48:05.137631 master-0 
kubenswrapper[26425]: I0217 15:48:05.137548 26425 generic.go:334] "Generic (PLEG): container finished" podID="71300086-e5db-405b-843e-efa8c3a683c3" containerID="19a282fc62b88209e37f851ce6f304463b39bac7595989a755c68e80c0f57670" exitCode=0 Feb 17 15:48:05.137728 master-0 kubenswrapper[26425]: I0217 15:48:05.137628 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-566cf67fc4-2bm2p" event={"ID":"71300086-e5db-405b-843e-efa8c3a683c3","Type":"ContainerDied","Data":"19a282fc62b88209e37f851ce6f304463b39bac7595989a755c68e80c0f57670"} Feb 17 15:48:05.143057 master-0 kubenswrapper[26425]: I0217 15:48:05.139820 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-566cf67fc4-2bm2p" event={"ID":"71300086-e5db-405b-843e-efa8c3a683c3","Type":"ContainerStarted","Data":"c638abf44f683f0c10b2c77f0471acb1e95f793ac4cec128b88be099ce0a03c2"} Feb 17 15:48:05.143057 master-0 kubenswrapper[26425]: I0217 15:48:05.142775 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7b6b8d45d-l4pv4" event={"ID":"5fe78f22-b268-44d3-8be8-d305135ed9ca","Type":"ContainerDied","Data":"4c20b6c1017ab876c4e4a8b41e062dbe5293726a769d62cf572dadfd3affcf0f"} Feb 17 15:48:05.143057 master-0 kubenswrapper[26425]: I0217 15:48:05.142054 26425 generic.go:334] "Generic (PLEG): container finished" podID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerID="4c20b6c1017ab876c4e4a8b41e062dbe5293726a769d62cf572dadfd3affcf0f" exitCode=0 Feb 17 15:48:06.161077 master-0 kubenswrapper[26425]: I0217 15:48:06.160996 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7b6b8d45d-l4pv4" event={"ID":"5fe78f22-b268-44d3-8be8-d305135ed9ca","Type":"ContainerStarted","Data":"0396748d4df875e0f697cca6a897184bcc8b39f11b50271bc1e42c1f51028a8e"} Feb 17 15:48:06.161077 master-0 kubenswrapper[26425]: I0217 15:48:06.161079 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7b6b8d45d-l4pv4" 
event={"ID":"5fe78f22-b268-44d3-8be8-d305135ed9ca","Type":"ContainerStarted","Data":"21ecf47f914bc264faf6374c0ec154366e55fdb2e9f8acba635883f86fe928ef"}
Feb 17 15:48:06.163518 master-0 kubenswrapper[26425]: I0217 15:48:06.163263 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-7b6b8d45d-l4pv4"
Feb 17 15:48:06.173701 master-0 kubenswrapper[26425]: I0217 15:48:06.169877 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" event={"ID":"ea8f52d0-e4bb-4457-b7f7-33133e152096","Type":"ContainerStarted","Data":"7932543853c55a33d0c952251a5538d1bbf1d0b21a3de20d277b4d95d82d53af"}
Feb 17 15:48:06.173701 master-0 kubenswrapper[26425]: I0217 15:48:06.170142 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6"
Feb 17 15:48:06.173701 master-0 kubenswrapper[26425]: I0217 15:48:06.172942 26425 generic.go:334] "Generic (PLEG): container finished" podID="1c26c340-473b-49c9-a62f-1915fac7b655" containerID="bf0cdfe8b0ab5fbc6be46363eea1769b7e72da259d63330aa726a57ccdf885f8" exitCode=0
Feb 17 15:48:06.173701 master-0 kubenswrapper[26425]: I0217 15:48:06.173103 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c26c340-473b-49c9-a62f-1915fac7b655","Type":"ContainerDied","Data":"bf0cdfe8b0ab5fbc6be46363eea1769b7e72da259d63330aa726a57ccdf885f8"}
Feb 17 15:48:06.180691 master-0 kubenswrapper[26425]: I0217 15:48:06.180200 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-566cf67fc4-2bm2p" event={"ID":"71300086-e5db-405b-843e-efa8c3a683c3","Type":"ContainerStarted","Data":"f40b0c77720595bd743f975a51500d66d24a338377b43c7cc83f7ea8ddd48099"}
Feb 17 15:48:06.180691 master-0 kubenswrapper[26425]: I0217 15:48:06.180281 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-566cf67fc4-2bm2p" event={"ID":"71300086-e5db-405b-843e-efa8c3a683c3","Type":"ContainerStarted","Data":"70913d55eb5bc2f47d466d296e71cf929672dee2351c1f0ac18c305d48ab167f"}
Feb 17 15:48:06.180847 master-0 kubenswrapper[26425]: I0217 15:48:06.180701 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-566cf67fc4-2bm2p"
Feb 17 15:48:06.198235 master-0 kubenswrapper[26425]: I0217 15:48:06.196531 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-7b6b8d45d-l4pv4" podStartSLOduration=5.074896275 podStartE2EDuration="9.196502558s" podCreationTimestamp="2026-02-17 15:47:57 +0000 UTC" firstStartedPulling="2026-02-17 15:47:59.84632308 +0000 UTC m=+1941.738046898" lastFinishedPulling="2026-02-17 15:48:03.967929353 +0000 UTC m=+1945.859653181" observedRunningTime="2026-02-17 15:48:06.186286852 +0000 UTC m=+1948.078010700" watchObservedRunningTime="2026-02-17 15:48:06.196502558 +0000 UTC m=+1948.088226416"
Feb 17 15:48:06.234603 master-0 kubenswrapper[26425]: I0217 15:48:06.234374 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-566cf67fc4-2bm2p" podStartSLOduration=4.234346665 podStartE2EDuration="4.234346665s" podCreationTimestamp="2026-02-17 15:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:48:06.227824449 +0000 UTC m=+1948.119548267" watchObservedRunningTime="2026-02-17 15:48:06.234346665 +0000 UTC m=+1948.126070483"
Feb 17 15:48:07.173673 master-0 kubenswrapper[26425]: I0217 15:48:07.171980 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7f77fccc4f-8svgt"
Feb 17 15:48:07.212286 master-0 kubenswrapper[26425]: I0217 15:48:07.212107 26425 generic.go:334] "Generic (PLEG): container finished" podID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerID="0396748d4df875e0f697cca6a897184bcc8b39f11b50271bc1e42c1f51028a8e" exitCode=1
Feb 17 15:48:07.212993 master-0 kubenswrapper[26425]: I0217 15:48:07.212872 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7b6b8d45d-l4pv4" event={"ID":"5fe78f22-b268-44d3-8be8-d305135ed9ca","Type":"ContainerDied","Data":"0396748d4df875e0f697cca6a897184bcc8b39f11b50271bc1e42c1f51028a8e"}
Feb 17 15:48:07.214572 master-0 kubenswrapper[26425]: I0217 15:48:07.214512 26425 scope.go:117] "RemoveContainer" containerID="0396748d4df875e0f697cca6a897184bcc8b39f11b50271bc1e42c1f51028a8e"
Feb 17 15:48:07.276681 master-0 kubenswrapper[26425]: I0217 15:48:07.272727 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc"
Feb 17 15:48:07.380398 master-0 kubenswrapper[26425]: I0217 15:48:07.380295 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d687b68b9-7r7fm"]
Feb 17 15:48:07.380643 master-0 kubenswrapper[26425]: I0217 15:48:07.380592 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" podUID="93130d0e-e444-4ec9-b294-aa8240b342ee" containerName="dnsmasq-dns" containerID="cri-o://039fd7af897a917c528d83ddd8e32408542dc8df7cbac2278c8c9c283e12079c" gracePeriod=10
Feb 17 15:48:07.600864 master-0 kubenswrapper[26425]: I0217 15:48:07.600806 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-7b6b8d45d-l4pv4"
Feb 17 15:48:08.044578 master-0 kubenswrapper[26425]: I0217 15:48:08.044510 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d687b68b9-7r7fm"
Feb 17 15:48:08.157121 master-0 kubenswrapper[26425]: I0217 15:48:08.157048 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-config\") pod \"93130d0e-e444-4ec9-b294-aa8240b342ee\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") "
Feb 17 15:48:08.157121 master-0 kubenswrapper[26425]: I0217 15:48:08.157108 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn5tz\" (UniqueName: \"kubernetes.io/projected/93130d0e-e444-4ec9-b294-aa8240b342ee-kube-api-access-nn5tz\") pod \"93130d0e-e444-4ec9-b294-aa8240b342ee\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") "
Feb 17 15:48:08.157494 master-0 kubenswrapper[26425]: I0217 15:48:08.157228 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-dns-swift-storage-0\") pod \"93130d0e-e444-4ec9-b294-aa8240b342ee\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") "
Feb 17 15:48:08.157494 master-0 kubenswrapper[26425]: I0217 15:48:08.157314 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-ovsdbserver-nb\") pod \"93130d0e-e444-4ec9-b294-aa8240b342ee\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") "
Feb 17 15:48:08.157494 master-0 kubenswrapper[26425]: I0217 15:48:08.157338 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-ovsdbserver-sb\") pod \"93130d0e-e444-4ec9-b294-aa8240b342ee\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") "
Feb 17 15:48:08.157494 master-0 kubenswrapper[26425]: I0217 15:48:08.157390 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-dns-svc\") pod \"93130d0e-e444-4ec9-b294-aa8240b342ee\" (UID: \"93130d0e-e444-4ec9-b294-aa8240b342ee\") "
Feb 17 15:48:08.168792 master-0 kubenswrapper[26425]: I0217 15:48:08.168716 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93130d0e-e444-4ec9-b294-aa8240b342ee-kube-api-access-nn5tz" (OuterVolumeSpecName: "kube-api-access-nn5tz") pod "93130d0e-e444-4ec9-b294-aa8240b342ee" (UID: "93130d0e-e444-4ec9-b294-aa8240b342ee"). InnerVolumeSpecName "kube-api-access-nn5tz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:48:08.266881 master-0 kubenswrapper[26425]: I0217 15:48:08.266734 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn5tz\" (UniqueName: \"kubernetes.io/projected/93130d0e-e444-4ec9-b294-aa8240b342ee-kube-api-access-nn5tz\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:08.311165 master-0 kubenswrapper[26425]: I0217 15:48:08.311099 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93130d0e-e444-4ec9-b294-aa8240b342ee" (UID: "93130d0e-e444-4ec9-b294-aa8240b342ee"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:48:08.323502 master-0 kubenswrapper[26425]: I0217 15:48:08.315636 26425 generic.go:334] "Generic (PLEG): container finished" podID="93130d0e-e444-4ec9-b294-aa8240b342ee" containerID="039fd7af897a917c528d83ddd8e32408542dc8df7cbac2278c8c9c283e12079c" exitCode=0
Feb 17 15:48:08.323502 master-0 kubenswrapper[26425]: I0217 15:48:08.315708 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" event={"ID":"93130d0e-e444-4ec9-b294-aa8240b342ee","Type":"ContainerDied","Data":"039fd7af897a917c528d83ddd8e32408542dc8df7cbac2278c8c9c283e12079c"}
Feb 17 15:48:08.323502 master-0 kubenswrapper[26425]: I0217 15:48:08.315737 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d687b68b9-7r7fm" event={"ID":"93130d0e-e444-4ec9-b294-aa8240b342ee","Type":"ContainerDied","Data":"f633d5ecac7be141e2e640199ac58ac2c385c868624d61c3201219bb9d249c26"}
Feb 17 15:48:08.323502 master-0 kubenswrapper[26425]: I0217 15:48:08.315757 26425 scope.go:117] "RemoveContainer" containerID="039fd7af897a917c528d83ddd8e32408542dc8df7cbac2278c8c9c283e12079c"
Feb 17 15:48:08.323502 master-0 kubenswrapper[26425]: I0217 15:48:08.315877 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d687b68b9-7r7fm"
Feb 17 15:48:08.339844 master-0 kubenswrapper[26425]: I0217 15:48:08.336062 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93130d0e-e444-4ec9-b294-aa8240b342ee" (UID: "93130d0e-e444-4ec9-b294-aa8240b342ee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:48:08.343490 master-0 kubenswrapper[26425]: I0217 15:48:08.343053 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93130d0e-e444-4ec9-b294-aa8240b342ee" (UID: "93130d0e-e444-4ec9-b294-aa8240b342ee"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:48:08.382270 master-0 kubenswrapper[26425]: I0217 15:48:08.378572 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:08.382270 master-0 kubenswrapper[26425]: I0217 15:48:08.378631 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:08.382270 master-0 kubenswrapper[26425]: I0217 15:48:08.378645 26425 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:08.382270 master-0 kubenswrapper[26425]: I0217 15:48:08.382118 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-config" (OuterVolumeSpecName: "config") pod "93130d0e-e444-4ec9-b294-aa8240b342ee" (UID: "93130d0e-e444-4ec9-b294-aa8240b342ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:48:08.398272 master-0 kubenswrapper[26425]: I0217 15:48:08.398216 26425 generic.go:334] "Generic (PLEG): container finished" podID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerID="1719ce12b8093f8791c1d9cf3f5e79f0729bf3fdec878a4f1c98984e81804f48" exitCode=1
Feb 17 15:48:08.413271 master-0 kubenswrapper[26425]: I0217 15:48:08.409560 26425 scope.go:117] "RemoveContainer" containerID="1719ce12b8093f8791c1d9cf3f5e79f0729bf3fdec878a4f1c98984e81804f48"
Feb 17 15:48:08.413271 master-0 kubenswrapper[26425]: E0217 15:48:08.409871 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-7b6b8d45d-l4pv4_openstack(5fe78f22-b268-44d3-8be8-d305135ed9ca)\"" pod="openstack/ironic-7b6b8d45d-l4pv4" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca"
Feb 17 15:48:08.421956 master-0 kubenswrapper[26425]: I0217 15:48:08.418284 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "93130d0e-e444-4ec9-b294-aa8240b342ee" (UID: "93130d0e-e444-4ec9-b294-aa8240b342ee"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:48:08.480840 master-0 kubenswrapper[26425]: I0217 15:48:08.480773 26425 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:08.480840 master-0 kubenswrapper[26425]: I0217 15:48:08.480825 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93130d0e-e444-4ec9-b294-aa8240b342ee-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:08.500665 master-0 kubenswrapper[26425]: I0217 15:48:08.500437 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7b6b8d45d-l4pv4" event={"ID":"5fe78f22-b268-44d3-8be8-d305135ed9ca","Type":"ContainerDied","Data":"1719ce12b8093f8791c1d9cf3f5e79f0729bf3fdec878a4f1c98984e81804f48"}
Feb 17 15:48:08.512699 master-0 kubenswrapper[26425]: I0217 15:48:08.512648 26425 scope.go:117] "RemoveContainer" containerID="072084d6ecc1d08911ee719b7458ebcc2cf4d9a04ebf48cc0b25ed09f86e9be8"
Feb 17 15:48:08.537433 master-0 kubenswrapper[26425]: I0217 15:48:08.537376 26425 scope.go:117] "RemoveContainer" containerID="039fd7af897a917c528d83ddd8e32408542dc8df7cbac2278c8c9c283e12079c"
Feb 17 15:48:08.538161 master-0 kubenswrapper[26425]: E0217 15:48:08.538111 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"039fd7af897a917c528d83ddd8e32408542dc8df7cbac2278c8c9c283e12079c\": container with ID starting with 039fd7af897a917c528d83ddd8e32408542dc8df7cbac2278c8c9c283e12079c not found: ID does not exist" containerID="039fd7af897a917c528d83ddd8e32408542dc8df7cbac2278c8c9c283e12079c"
Feb 17 15:48:08.538237 master-0 kubenswrapper[26425]: I0217 15:48:08.538205 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"039fd7af897a917c528d83ddd8e32408542dc8df7cbac2278c8c9c283e12079c"} err="failed to get container status \"039fd7af897a917c528d83ddd8e32408542dc8df7cbac2278c8c9c283e12079c\": rpc error: code = NotFound desc = could not find container \"039fd7af897a917c528d83ddd8e32408542dc8df7cbac2278c8c9c283e12079c\": container with ID starting with 039fd7af897a917c528d83ddd8e32408542dc8df7cbac2278c8c9c283e12079c not found: ID does not exist"
Feb 17 15:48:08.538237 master-0 kubenswrapper[26425]: I0217 15:48:08.538233 26425 scope.go:117] "RemoveContainer" containerID="072084d6ecc1d08911ee719b7458ebcc2cf4d9a04ebf48cc0b25ed09f86e9be8"
Feb 17 15:48:08.538840 master-0 kubenswrapper[26425]: E0217 15:48:08.538803 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"072084d6ecc1d08911ee719b7458ebcc2cf4d9a04ebf48cc0b25ed09f86e9be8\": container with ID starting with 072084d6ecc1d08911ee719b7458ebcc2cf4d9a04ebf48cc0b25ed09f86e9be8 not found: ID does not exist" containerID="072084d6ecc1d08911ee719b7458ebcc2cf4d9a04ebf48cc0b25ed09f86e9be8"
Feb 17 15:48:08.538840 master-0 kubenswrapper[26425]: I0217 15:48:08.538833 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"072084d6ecc1d08911ee719b7458ebcc2cf4d9a04ebf48cc0b25ed09f86e9be8"} err="failed to get container status \"072084d6ecc1d08911ee719b7458ebcc2cf4d9a04ebf48cc0b25ed09f86e9be8\": rpc error: code = NotFound desc = could not find container \"072084d6ecc1d08911ee719b7458ebcc2cf4d9a04ebf48cc0b25ed09f86e9be8\": container with ID starting with 072084d6ecc1d08911ee719b7458ebcc2cf4d9a04ebf48cc0b25ed09f86e9be8 not found: ID does not exist"
Feb 17 15:48:08.538953 master-0 kubenswrapper[26425]: I0217 15:48:08.538849 26425 scope.go:117] "RemoveContainer" containerID="0396748d4df875e0f697cca6a897184bcc8b39f11b50271bc1e42c1f51028a8e"
Feb 17 15:48:08.713168 master-0 kubenswrapper[26425]: I0217 15:48:08.713103 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d687b68b9-7r7fm"]
Feb 17 15:48:08.728581 master-0 kubenswrapper[26425]: I0217 15:48:08.728497 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d687b68b9-7r7fm"]
Feb 17 15:48:08.769251 master-0 kubenswrapper[26425]: I0217 15:48:08.769190 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 17 15:48:08.770387 master-0 kubenswrapper[26425]: E0217 15:48:08.769890 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93130d0e-e444-4ec9-b294-aa8240b342ee" containerName="dnsmasq-dns"
Feb 17 15:48:08.770387 master-0 kubenswrapper[26425]: I0217 15:48:08.769927 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="93130d0e-e444-4ec9-b294-aa8240b342ee" containerName="dnsmasq-dns"
Feb 17 15:48:08.770387 master-0 kubenswrapper[26425]: E0217 15:48:08.769964 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93130d0e-e444-4ec9-b294-aa8240b342ee" containerName="init"
Feb 17 15:48:08.770387 master-0 kubenswrapper[26425]: I0217 15:48:08.769974 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="93130d0e-e444-4ec9-b294-aa8240b342ee" containerName="init"
Feb 17 15:48:08.770387 master-0 kubenswrapper[26425]: I0217 15:48:08.770279 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="93130d0e-e444-4ec9-b294-aa8240b342ee" containerName="dnsmasq-dns"
Feb 17 15:48:08.771329 master-0 kubenswrapper[26425]: I0217 15:48:08.771303 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 17 15:48:08.777736 master-0 kubenswrapper[26425]: I0217 15:48:08.776837 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 17 15:48:08.777736 master-0 kubenswrapper[26425]: I0217 15:48:08.776943 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 17 15:48:08.784383 master-0 kubenswrapper[26425]: I0217 15:48:08.784340 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 17 15:48:08.799345 master-0 kubenswrapper[26425]: I0217 15:48:08.799237 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0\") " pod="openstack/openstackclient"
Feb 17 15:48:08.799527 master-0 kubenswrapper[26425]: I0217 15:48:08.799370 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0-openstack-config\") pod \"openstackclient\" (UID: \"ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0\") " pod="openstack/openstackclient"
Feb 17 15:48:08.799625 master-0 kubenswrapper[26425]: I0217 15:48:08.799563 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2p2b\" (UniqueName: \"kubernetes.io/projected/ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0-kube-api-access-t2p2b\") pod \"openstackclient\" (UID: \"ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0\") " pod="openstack/openstackclient"
Feb 17 15:48:08.799695 master-0 kubenswrapper[26425]: I0217 15:48:08.799658 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0-openstack-config-secret\") pod \"openstackclient\" (UID: \"ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0\") " pod="openstack/openstackclient"
Feb 17 15:48:08.901970 master-0 kubenswrapper[26425]: I0217 15:48:08.901871 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2p2b\" (UniqueName: \"kubernetes.io/projected/ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0-kube-api-access-t2p2b\") pod \"openstackclient\" (UID: \"ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0\") " pod="openstack/openstackclient"
Feb 17 15:48:08.902228 master-0 kubenswrapper[26425]: I0217 15:48:08.902013 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0-openstack-config-secret\") pod \"openstackclient\" (UID: \"ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0\") " pod="openstack/openstackclient"
Feb 17 15:48:08.902228 master-0 kubenswrapper[26425]: I0217 15:48:08.902113 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0\") " pod="openstack/openstackclient"
Feb 17 15:48:08.902228 master-0 kubenswrapper[26425]: I0217 15:48:08.902215 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0-openstack-config\") pod \"openstackclient\" (UID: \"ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0\") " pod="openstack/openstackclient"
Feb 17 15:48:08.903356 master-0 kubenswrapper[26425]: I0217 15:48:08.903331 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0-openstack-config\") pod \"openstackclient\" (UID: \"ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0\") " pod="openstack/openstackclient"
Feb 17 15:48:08.905864 master-0 kubenswrapper[26425]: I0217 15:48:08.905809 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0-openstack-config-secret\") pod \"openstackclient\" (UID: \"ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0\") " pod="openstack/openstackclient"
Feb 17 15:48:08.906566 master-0 kubenswrapper[26425]: I0217 15:48:08.906534 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0\") " pod="openstack/openstackclient"
Feb 17 15:48:08.921730 master-0 kubenswrapper[26425]: I0217 15:48:08.921676 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2p2b\" (UniqueName: \"kubernetes.io/projected/ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0-kube-api-access-t2p2b\") pod \"openstackclient\" (UID: \"ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0\") " pod="openstack/openstackclient"
Feb 17 15:48:09.113445 master-0 kubenswrapper[26425]: I0217 15:48:09.113391 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 17 15:48:09.431185 master-0 kubenswrapper[26425]: I0217 15:48:09.431086 26425 scope.go:117] "RemoveContainer" containerID="1719ce12b8093f8791c1d9cf3f5e79f0729bf3fdec878a4f1c98984e81804f48"
Feb 17 15:48:09.432213 master-0 kubenswrapper[26425]: E0217 15:48:09.431519 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-7b6b8d45d-l4pv4_openstack(5fe78f22-b268-44d3-8be8-d305135ed9ca)\"" pod="openstack/ironic-7b6b8d45d-l4pv4" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca"
Feb 17 15:48:09.437060 master-0 kubenswrapper[26425]: I0217 15:48:09.437012 26425 generic.go:334] "Generic (PLEG): container finished" podID="ea8f52d0-e4bb-4457-b7f7-33133e152096" containerID="7932543853c55a33d0c952251a5538d1bbf1d0b21a3de20d277b4d95d82d53af" exitCode=1
Feb 17 15:48:09.437130 master-0 kubenswrapper[26425]: I0217 15:48:09.437068 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" event={"ID":"ea8f52d0-e4bb-4457-b7f7-33133e152096","Type":"ContainerDied","Data":"7932543853c55a33d0c952251a5538d1bbf1d0b21a3de20d277b4d95d82d53af"}
Feb 17 15:48:09.437130 master-0 kubenswrapper[26425]: I0217 15:48:09.437110 26425 scope.go:117] "RemoveContainer" containerID="996efc59d6ffd787ce4ebef2156a06dd188d1c8fbc4bd23a6383211ec5e22dd1"
Feb 17 15:48:09.438000 master-0 kubenswrapper[26425]: I0217 15:48:09.437978 26425 scope.go:117] "RemoveContainer" containerID="7932543853c55a33d0c952251a5538d1bbf1d0b21a3de20d277b4d95d82d53af"
Feb 17 15:48:09.438326 master-0 kubenswrapper[26425]: E0217 15:48:09.438298 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-88dd96889-vwkh6_openstack(ea8f52d0-e4bb-4457-b7f7-33133e152096)\"" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" podUID="ea8f52d0-e4bb-4457-b7f7-33133e152096"
Feb 17 15:48:10.095875 master-0 kubenswrapper[26425]: I0217 15:48:10.095807 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-566cf67fc4-2bm2p"
Feb 17 15:48:10.411283 master-0 kubenswrapper[26425]: I0217 15:48:10.411224 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93130d0e-e444-4ec9-b294-aa8240b342ee" path="/var/lib/kubelet/pods/93130d0e-e444-4ec9-b294-aa8240b342ee/volumes"
Feb 17 15:48:10.735896 master-0 kubenswrapper[26425]: I0217 15:48:10.735492 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-7b6b8d45d-l4pv4"]
Feb 17 15:48:10.735896 master-0 kubenswrapper[26425]: I0217 15:48:10.735706 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-7b6b8d45d-l4pv4" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerName="ironic-api-log" containerID="cri-o://21ecf47f914bc264faf6374c0ec154366e55fdb2e9f8acba635883f86fe928ef" gracePeriod=60
Feb 17 15:48:11.317124 master-0 kubenswrapper[26425]: I0217 15:48:11.315062 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 17 15:48:11.336251 master-0 kubenswrapper[26425]: I0217 15:48:11.334014 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-x86bq"]
Feb 17 15:48:11.338191 master-0 kubenswrapper[26425]: I0217 15:48:11.338145 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.341377 master-0 kubenswrapper[26425]: I0217 15:48:11.341337 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Feb 17 15:48:11.341506 master-0 kubenswrapper[26425]: I0217 15:48:11.341441 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Feb 17 15:48:11.375029 master-0 kubenswrapper[26425]: I0217 15:48:11.373110 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-x86bq"]
Feb 17 15:48:11.392156 master-0 kubenswrapper[26425]: I0217 15:48:11.391280 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0704cefc-181d-40ab-ba9c-a204b5f85727-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.392156 master-0 kubenswrapper[26425]: I0217 15:48:11.391422 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-scripts\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.392156 master-0 kubenswrapper[26425]: I0217 15:48:11.391489 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0704cefc-181d-40ab-ba9c-a204b5f85727-var-lib-ironic\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.392156 master-0 kubenswrapper[26425]: I0217 15:48:11.391546 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0704cefc-181d-40ab-ba9c-a204b5f85727-etc-podinfo\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.392156 master-0 kubenswrapper[26425]: I0217 15:48:11.391668 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-config\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.392156 master-0 kubenswrapper[26425]: I0217 15:48:11.391689 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-combined-ca-bundle\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.392156 master-0 kubenswrapper[26425]: I0217 15:48:11.391749 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdjrx\" (UniqueName: \"kubernetes.io/projected/0704cefc-181d-40ab-ba9c-a204b5f85727-kube-api-access-jdjrx\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.467436 master-0 kubenswrapper[26425]: I0217 15:48:11.467366 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0","Type":"ContainerStarted","Data":"2ea82c96168e252692ae9f33d2161cbda66eaa6d0d07d37e45fa51c25664228d"}
Feb 17 15:48:11.471994 master-0 kubenswrapper[26425]: I0217 15:48:11.471759 26425 generic.go:334] "Generic (PLEG): container finished" podID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerID="21ecf47f914bc264faf6374c0ec154366e55fdb2e9f8acba635883f86fe928ef" exitCode=143
Feb 17 15:48:11.471994 master-0 kubenswrapper[26425]: I0217 15:48:11.471792 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7b6b8d45d-l4pv4" event={"ID":"5fe78f22-b268-44d3-8be8-d305135ed9ca","Type":"ContainerDied","Data":"21ecf47f914bc264faf6374c0ec154366e55fdb2e9f8acba635883f86fe928ef"}
Feb 17 15:48:11.499787 master-0 kubenswrapper[26425]: I0217 15:48:11.499668 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0704cefc-181d-40ab-ba9c-a204b5f85727-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.499979 master-0 kubenswrapper[26425]: I0217 15:48:11.499830 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-scripts\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.499979 master-0 kubenswrapper[26425]: I0217 15:48:11.499913 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0704cefc-181d-40ab-ba9c-a204b5f85727-var-lib-ironic\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.499979 master-0 kubenswrapper[26425]: I0217 15:48:11.499958 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0704cefc-181d-40ab-ba9c-a204b5f85727-etc-podinfo\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.500156 master-0 kubenswrapper[26425]: I0217 15:48:11.500118 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-config\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.500207 master-0 kubenswrapper[26425]: I0217 15:48:11.500158 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-combined-ca-bundle\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.500207 master-0 kubenswrapper[26425]: I0217 15:48:11.500188 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdjrx\" (UniqueName: \"kubernetes.io/projected/0704cefc-181d-40ab-ba9c-a204b5f85727-kube-api-access-jdjrx\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.501932 master-0 kubenswrapper[26425]: I0217 15:48:11.501896 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0704cefc-181d-40ab-ba9c-a204b5f85727-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.503254 master-0 kubenswrapper[26425]: I0217 15:48:11.503111 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0704cefc-181d-40ab-ba9c-a204b5f85727-var-lib-ironic\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.506661 master-0 kubenswrapper[26425]: I0217 15:48:11.506629 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-combined-ca-bundle\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.516293 master-0 kubenswrapper[26425]: I0217 15:48:11.516207 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-scripts\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.518361 master-0 kubenswrapper[26425]: I0217 15:48:11.517507 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0704cefc-181d-40ab-ba9c-a204b5f85727-etc-podinfo\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.518361 master-0 kubenswrapper[26425]: I0217 15:48:11.518309 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-config\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.527504 master-0 kubenswrapper[26425]: I0217 15:48:11.527444 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdjrx\" (UniqueName: \"kubernetes.io/projected/0704cefc-181d-40ab-ba9c-a204b5f85727-kube-api-access-jdjrx\") pod \"ironic-inspector-db-sync-x86bq\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.687548 master-0 kubenswrapper[26425]: I0217 15:48:11.687275 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:11.783254 master-0 kubenswrapper[26425]: I0217 15:48:11.783120 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-7b6b8d45d-l4pv4"
Feb 17 15:48:11.817019 master-0 kubenswrapper[26425]: I0217 15:48:11.816935 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data-custom\") pod \"5fe78f22-b268-44d3-8be8-d305135ed9ca\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") "
Feb 17 15:48:11.817179 master-0 kubenswrapper[26425]: I0217 15:48:11.817065 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5fe78f22-b268-44d3-8be8-d305135ed9ca-etc-podinfo\") pod \"5fe78f22-b268-44d3-8be8-d305135ed9ca\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") "
Feb 17 15:48:11.827848 master-0 kubenswrapper[26425]: I0217 15:48:11.822684 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-combined-ca-bundle\") pod \"5fe78f22-b268-44d3-8be8-d305135ed9ca\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") "
Feb 17 15:48:11.827848 master-0 kubenswrapper[26425]: I0217 15:48:11.822777 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName:
\"kubernetes.io/empty-dir/5fe78f22-b268-44d3-8be8-d305135ed9ca-logs\") pod \"5fe78f22-b268-44d3-8be8-d305135ed9ca\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " Feb 17 15:48:11.827848 master-0 kubenswrapper[26425]: I0217 15:48:11.822868 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data-merged\") pod \"5fe78f22-b268-44d3-8be8-d305135ed9ca\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " Feb 17 15:48:11.827848 master-0 kubenswrapper[26425]: I0217 15:48:11.822903 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data\") pod \"5fe78f22-b268-44d3-8be8-d305135ed9ca\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " Feb 17 15:48:11.827848 master-0 kubenswrapper[26425]: I0217 15:48:11.823055 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-scripts\") pod \"5fe78f22-b268-44d3-8be8-d305135ed9ca\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " Feb 17 15:48:11.827848 master-0 kubenswrapper[26425]: I0217 15:48:11.823049 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5fe78f22-b268-44d3-8be8-d305135ed9ca" (UID: "5fe78f22-b268-44d3-8be8-d305135ed9ca"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:48:11.827848 master-0 kubenswrapper[26425]: I0217 15:48:11.823121 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r8c7\" (UniqueName: \"kubernetes.io/projected/5fe78f22-b268-44d3-8be8-d305135ed9ca-kube-api-access-9r8c7\") pod \"5fe78f22-b268-44d3-8be8-d305135ed9ca\" (UID: \"5fe78f22-b268-44d3-8be8-d305135ed9ca\") " Feb 17 15:48:11.827848 master-0 kubenswrapper[26425]: I0217 15:48:11.823327 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fe78f22-b268-44d3-8be8-d305135ed9ca-logs" (OuterVolumeSpecName: "logs") pod "5fe78f22-b268-44d3-8be8-d305135ed9ca" (UID: "5fe78f22-b268-44d3-8be8-d305135ed9ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:48:11.827848 master-0 kubenswrapper[26425]: I0217 15:48:11.824023 26425 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:11.827848 master-0 kubenswrapper[26425]: I0217 15:48:11.824047 26425 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fe78f22-b268-44d3-8be8-d305135ed9ca-logs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:11.827848 master-0 kubenswrapper[26425]: I0217 15:48:11.827291 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "5fe78f22-b268-44d3-8be8-d305135ed9ca" (UID: "5fe78f22-b268-44d3-8be8-d305135ed9ca"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:48:11.828985 master-0 kubenswrapper[26425]: I0217 15:48:11.828940 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-scripts" (OuterVolumeSpecName: "scripts") pod "5fe78f22-b268-44d3-8be8-d305135ed9ca" (UID: "5fe78f22-b268-44d3-8be8-d305135ed9ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:48:11.833959 master-0 kubenswrapper[26425]: I0217 15:48:11.833804 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe78f22-b268-44d3-8be8-d305135ed9ca-kube-api-access-9r8c7" (OuterVolumeSpecName: "kube-api-access-9r8c7") pod "5fe78f22-b268-44d3-8be8-d305135ed9ca" (UID: "5fe78f22-b268-44d3-8be8-d305135ed9ca"). InnerVolumeSpecName "kube-api-access-9r8c7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:48:11.834396 master-0 kubenswrapper[26425]: I0217 15:48:11.834340 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5fe78f22-b268-44d3-8be8-d305135ed9ca-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "5fe78f22-b268-44d3-8be8-d305135ed9ca" (UID: "5fe78f22-b268-44d3-8be8-d305135ed9ca"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 15:48:11.880157 master-0 kubenswrapper[26425]: I0217 15:48:11.879894 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data" (OuterVolumeSpecName: "config-data") pod "5fe78f22-b268-44d3-8be8-d305135ed9ca" (UID: "5fe78f22-b268-44d3-8be8-d305135ed9ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:48:11.928760 master-0 kubenswrapper[26425]: I0217 15:48:11.926830 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r8c7\" (UniqueName: \"kubernetes.io/projected/5fe78f22-b268-44d3-8be8-d305135ed9ca-kube-api-access-9r8c7\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:11.928760 master-0 kubenswrapper[26425]: I0217 15:48:11.926885 26425 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5fe78f22-b268-44d3-8be8-d305135ed9ca-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:11.928760 master-0 kubenswrapper[26425]: I0217 15:48:11.926898 26425 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:11.928760 master-0 kubenswrapper[26425]: I0217 15:48:11.926907 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:11.928760 master-0 kubenswrapper[26425]: I0217 15:48:11.926916 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:11.946289 master-0 kubenswrapper[26425]: I0217 15:48:11.946227 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fe78f22-b268-44d3-8be8-d305135ed9ca" (UID: "5fe78f22-b268-44d3-8be8-d305135ed9ca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:48:12.029824 master-0 kubenswrapper[26425]: I0217 15:48:12.029665 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fe78f22-b268-44d3-8be8-d305135ed9ca-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:12.128323 master-0 kubenswrapper[26425]: I0217 15:48:12.128258 26425 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" Feb 17 15:48:12.129202 master-0 kubenswrapper[26425]: I0217 15:48:12.129170 26425 scope.go:117] "RemoveContainer" containerID="7932543853c55a33d0c952251a5538d1bbf1d0b21a3de20d277b4d95d82d53af" Feb 17 15:48:12.129601 master-0 kubenswrapper[26425]: E0217 15:48:12.129563 26425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-88dd96889-vwkh6_openstack(ea8f52d0-e4bb-4457-b7f7-33133e152096)\"" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" podUID="ea8f52d0-e4bb-4457-b7f7-33133e152096" Feb 17 15:48:12.237808 master-0 kubenswrapper[26425]: I0217 15:48:12.237612 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-x86bq"] Feb 17 15:48:12.494266 master-0 kubenswrapper[26425]: I0217 15:48:12.494109 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7b6b8d45d-l4pv4" event={"ID":"5fe78f22-b268-44d3-8be8-d305135ed9ca","Type":"ContainerDied","Data":"8902f6502f4e47c5b2266a6cdead4e5cf322d0e34d043d65a4d827c26d38a316"} Feb 17 15:48:12.494266 master-0 kubenswrapper[26425]: I0217 15:48:12.494156 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-7b6b8d45d-l4pv4" Feb 17 15:48:12.494266 master-0 kubenswrapper[26425]: I0217 15:48:12.494176 26425 scope.go:117] "RemoveContainer" containerID="1719ce12b8093f8791c1d9cf3f5e79f0729bf3fdec878a4f1c98984e81804f48" Feb 17 15:48:12.496753 master-0 kubenswrapper[26425]: I0217 15:48:12.496707 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-x86bq" event={"ID":"0704cefc-181d-40ab-ba9c-a204b5f85727","Type":"ContainerStarted","Data":"bbf3236939b662b78ad80297b4b957484d81e9f6d534717ffa79c9b9eb94cf56"} Feb 17 15:48:12.532879 master-0 kubenswrapper[26425]: I0217 15:48:12.530370 26425 scope.go:117] "RemoveContainer" containerID="21ecf47f914bc264faf6374c0ec154366e55fdb2e9f8acba635883f86fe928ef" Feb 17 15:48:12.555045 master-0 kubenswrapper[26425]: I0217 15:48:12.554983 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-7b6b8d45d-l4pv4"] Feb 17 15:48:12.560278 master-0 kubenswrapper[26425]: I0217 15:48:12.560238 26425 scope.go:117] "RemoveContainer" containerID="4c20b6c1017ab876c4e4a8b41e062dbe5293726a769d62cf572dadfd3affcf0f" Feb 17 15:48:12.580533 master-0 kubenswrapper[26425]: I0217 15:48:12.579863 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-7b6b8d45d-l4pv4"] Feb 17 15:48:14.321253 master-0 kubenswrapper[26425]: I0217 15:48:14.321187 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-4lmzn"] Feb 17 15:48:14.321921 master-0 kubenswrapper[26425]: E0217 15:48:14.321681 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerName="ironic-api" Feb 17 15:48:14.321921 master-0 kubenswrapper[26425]: I0217 15:48:14.321694 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerName="ironic-api" Feb 17 15:48:14.321921 master-0 kubenswrapper[26425]: E0217 15:48:14.321705 26425 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerName="init" Feb 17 15:48:14.321921 master-0 kubenswrapper[26425]: I0217 15:48:14.321711 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerName="init" Feb 17 15:48:14.321921 master-0 kubenswrapper[26425]: E0217 15:48:14.321753 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerName="ironic-api" Feb 17 15:48:14.321921 master-0 kubenswrapper[26425]: I0217 15:48:14.321760 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerName="ironic-api" Feb 17 15:48:14.321921 master-0 kubenswrapper[26425]: E0217 15:48:14.321776 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerName="ironic-api-log" Feb 17 15:48:14.321921 master-0 kubenswrapper[26425]: I0217 15:48:14.321782 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerName="ironic-api-log" Feb 17 15:48:14.322290 master-0 kubenswrapper[26425]: I0217 15:48:14.322040 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerName="ironic-api-log" Feb 17 15:48:14.322290 master-0 kubenswrapper[26425]: I0217 15:48:14.322067 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerName="ironic-api" Feb 17 15:48:14.322290 master-0 kubenswrapper[26425]: I0217 15:48:14.322086 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca" containerName="ironic-api" Feb 17 15:48:14.322889 master-0 kubenswrapper[26425]: I0217 15:48:14.322867 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-4lmzn" Feb 17 15:48:14.383979 master-0 kubenswrapper[26425]: I0217 15:48:14.383875 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-4lmzn"] Feb 17 15:48:14.414388 master-0 kubenswrapper[26425]: I0217 15:48:14.414286 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfw86\" (UniqueName: \"kubernetes.io/projected/7fde6099-c168-43b1-acbf-cbbdc3ca2435-kube-api-access-xfw86\") pod \"nova-api-db-create-4lmzn\" (UID: \"7fde6099-c168-43b1-acbf-cbbdc3ca2435\") " pod="openstack/nova-api-db-create-4lmzn" Feb 17 15:48:14.414917 master-0 kubenswrapper[26425]: I0217 15:48:14.414861 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fde6099-c168-43b1-acbf-cbbdc3ca2435-operator-scripts\") pod \"nova-api-db-create-4lmzn\" (UID: \"7fde6099-c168-43b1-acbf-cbbdc3ca2435\") " pod="openstack/nova-api-db-create-4lmzn" Feb 17 15:48:14.415554 master-0 kubenswrapper[26425]: I0217 15:48:14.415497 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe78f22-b268-44d3-8be8-d305135ed9ca" path="/var/lib/kubelet/pods/5fe78f22-b268-44d3-8be8-d305135ed9ca/volumes" Feb 17 15:48:14.510599 master-0 kubenswrapper[26425]: I0217 15:48:14.510539 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-pbs2f"] Feb 17 15:48:14.512514 master-0 kubenswrapper[26425]: I0217 15:48:14.512020 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-pbs2f" Feb 17 15:48:14.517284 master-0 kubenswrapper[26425]: I0217 15:48:14.517223 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfw86\" (UniqueName: \"kubernetes.io/projected/7fde6099-c168-43b1-acbf-cbbdc3ca2435-kube-api-access-xfw86\") pod \"nova-api-db-create-4lmzn\" (UID: \"7fde6099-c168-43b1-acbf-cbbdc3ca2435\") " pod="openstack/nova-api-db-create-4lmzn" Feb 17 15:48:14.517481 master-0 kubenswrapper[26425]: I0217 15:48:14.517438 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fde6099-c168-43b1-acbf-cbbdc3ca2435-operator-scripts\") pod \"nova-api-db-create-4lmzn\" (UID: \"7fde6099-c168-43b1-acbf-cbbdc3ca2435\") " pod="openstack/nova-api-db-create-4lmzn" Feb 17 15:48:14.518271 master-0 kubenswrapper[26425]: I0217 15:48:14.518237 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fde6099-c168-43b1-acbf-cbbdc3ca2435-operator-scripts\") pod \"nova-api-db-create-4lmzn\" (UID: \"7fde6099-c168-43b1-acbf-cbbdc3ca2435\") " pod="openstack/nova-api-db-create-4lmzn" Feb 17 15:48:14.568944 master-0 kubenswrapper[26425]: I0217 15:48:14.550956 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-67bfcfbcf8-m9tkq"] Feb 17 15:48:14.568944 master-0 kubenswrapper[26425]: I0217 15:48:14.557582 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.568944 master-0 kubenswrapper[26425]: I0217 15:48:14.563158 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 17 15:48:14.568944 master-0 kubenswrapper[26425]: I0217 15:48:14.563398 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 17 15:48:14.568944 master-0 kubenswrapper[26425]: I0217 15:48:14.563571 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 15:48:14.568944 master-0 kubenswrapper[26425]: I0217 15:48:14.566321 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pbs2f"] Feb 17 15:48:14.574643 master-0 kubenswrapper[26425]: I0217 15:48:14.573375 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfw86\" (UniqueName: \"kubernetes.io/projected/7fde6099-c168-43b1-acbf-cbbdc3ca2435-kube-api-access-xfw86\") pod \"nova-api-db-create-4lmzn\" (UID: \"7fde6099-c168-43b1-acbf-cbbdc3ca2435\") " pod="openstack/nova-api-db-create-4lmzn" Feb 17 15:48:14.628110 master-0 kubenswrapper[26425]: I0217 15:48:14.624372 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0531c200-ea9b-4ed4-8e7a-ef60e88b8447-operator-scripts\") pod \"nova-cell0-db-create-pbs2f\" (UID: \"0531c200-ea9b-4ed4-8e7a-ef60e88b8447\") " pod="openstack/nova-cell0-db-create-pbs2f" Feb 17 15:48:14.628110 master-0 kubenswrapper[26425]: I0217 15:48:14.624508 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989291d4-860c-47c4-9042-1a99791aafbb-combined-ca-bundle\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " 
pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.628110 master-0 kubenswrapper[26425]: I0217 15:48:14.624627 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/989291d4-860c-47c4-9042-1a99791aafbb-etc-swift\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.628110 master-0 kubenswrapper[26425]: I0217 15:48:14.624667 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m5fl\" (UniqueName: \"kubernetes.io/projected/989291d4-860c-47c4-9042-1a99791aafbb-kube-api-access-5m5fl\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.628110 master-0 kubenswrapper[26425]: I0217 15:48:14.626704 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/989291d4-860c-47c4-9042-1a99791aafbb-internal-tls-certs\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.628110 master-0 kubenswrapper[26425]: I0217 15:48:14.626800 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/989291d4-860c-47c4-9042-1a99791aafbb-public-tls-certs\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.628110 master-0 kubenswrapper[26425]: I0217 15:48:14.626852 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/989291d4-860c-47c4-9042-1a99791aafbb-log-httpd\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.628110 master-0 kubenswrapper[26425]: I0217 15:48:14.627005 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/989291d4-860c-47c4-9042-1a99791aafbb-config-data\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.628110 master-0 kubenswrapper[26425]: I0217 15:48:14.627058 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96f67\" (UniqueName: \"kubernetes.io/projected/0531c200-ea9b-4ed4-8e7a-ef60e88b8447-kube-api-access-96f67\") pod \"nova-cell0-db-create-pbs2f\" (UID: \"0531c200-ea9b-4ed4-8e7a-ef60e88b8447\") " pod="openstack/nova-cell0-db-create-pbs2f" Feb 17 15:48:14.628110 master-0 kubenswrapper[26425]: I0217 15:48:14.627303 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/989291d4-860c-47c4-9042-1a99791aafbb-run-httpd\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.643034 master-0 kubenswrapper[26425]: I0217 15:48:14.642981 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-67bfcfbcf8-m9tkq"] Feb 17 15:48:14.644807 master-0 kubenswrapper[26425]: I0217 15:48:14.644574 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-4lmzn" Feb 17 15:48:14.687768 master-0 kubenswrapper[26425]: I0217 15:48:14.687607 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-87e5-account-create-update-45dj5"] Feb 17 15:48:14.689294 master-0 kubenswrapper[26425]: I0217 15:48:14.689270 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-87e5-account-create-update-45dj5" Feb 17 15:48:14.692494 master-0 kubenswrapper[26425]: I0217 15:48:14.692434 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 17 15:48:14.722588 master-0 kubenswrapper[26425]: I0217 15:48:14.721937 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-87e5-account-create-update-45dj5"] Feb 17 15:48:14.729991 master-0 kubenswrapper[26425]: I0217 15:48:14.729916 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/989291d4-860c-47c4-9042-1a99791aafbb-internal-tls-certs\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.730107 master-0 kubenswrapper[26425]: I0217 15:48:14.730012 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwtpk\" (UniqueName: \"kubernetes.io/projected/ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e-kube-api-access-qwtpk\") pod \"nova-api-87e5-account-create-update-45dj5\" (UID: \"ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e\") " pod="openstack/nova-api-87e5-account-create-update-45dj5" Feb 17 15:48:14.730107 master-0 kubenswrapper[26425]: I0217 15:48:14.730059 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/989291d4-860c-47c4-9042-1a99791aafbb-public-tls-certs\") pod 
\"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.730107 master-0 kubenswrapper[26425]: I0217 15:48:14.730098 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/989291d4-860c-47c4-9042-1a99791aafbb-log-httpd\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.730391 master-0 kubenswrapper[26425]: I0217 15:48:14.730357 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/989291d4-860c-47c4-9042-1a99791aafbb-config-data\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.730443 master-0 kubenswrapper[26425]: I0217 15:48:14.730410 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96f67\" (UniqueName: \"kubernetes.io/projected/0531c200-ea9b-4ed4-8e7a-ef60e88b8447-kube-api-access-96f67\") pod \"nova-cell0-db-create-pbs2f\" (UID: \"0531c200-ea9b-4ed4-8e7a-ef60e88b8447\") " pod="openstack/nova-cell0-db-create-pbs2f" Feb 17 15:48:14.730521 master-0 kubenswrapper[26425]: I0217 15:48:14.730497 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e-operator-scripts\") pod \"nova-api-87e5-account-create-update-45dj5\" (UID: \"ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e\") " pod="openstack/nova-api-87e5-account-create-update-45dj5" Feb 17 15:48:14.730709 master-0 kubenswrapper[26425]: I0217 15:48:14.730604 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/989291d4-860c-47c4-9042-1a99791aafbb-run-httpd\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.730709 master-0 kubenswrapper[26425]: I0217 15:48:14.730684 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0531c200-ea9b-4ed4-8e7a-ef60e88b8447-operator-scripts\") pod \"nova-cell0-db-create-pbs2f\" (UID: \"0531c200-ea9b-4ed4-8e7a-ef60e88b8447\") " pod="openstack/nova-cell0-db-create-pbs2f" Feb 17 15:48:14.730787 master-0 kubenswrapper[26425]: I0217 15:48:14.730727 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989291d4-860c-47c4-9042-1a99791aafbb-combined-ca-bundle\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.738581 master-0 kubenswrapper[26425]: I0217 15:48:14.733729 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0531c200-ea9b-4ed4-8e7a-ef60e88b8447-operator-scripts\") pod \"nova-cell0-db-create-pbs2f\" (UID: \"0531c200-ea9b-4ed4-8e7a-ef60e88b8447\") " pod="openstack/nova-cell0-db-create-pbs2f" Feb 17 15:48:14.745060 master-0 kubenswrapper[26425]: I0217 15:48:14.740344 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/989291d4-860c-47c4-9042-1a99791aafbb-etc-swift\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.745060 master-0 kubenswrapper[26425]: I0217 15:48:14.740724 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/989291d4-860c-47c4-9042-1a99791aafbb-run-httpd\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.745060 master-0 kubenswrapper[26425]: I0217 15:48:14.741474 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/989291d4-860c-47c4-9042-1a99791aafbb-internal-tls-certs\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.745060 master-0 kubenswrapper[26425]: I0217 15:48:14.744793 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/989291d4-860c-47c4-9042-1a99791aafbb-etc-swift\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.745060 master-0 kubenswrapper[26425]: I0217 15:48:14.744950 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/989291d4-860c-47c4-9042-1a99791aafbb-log-httpd\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.745060 master-0 kubenswrapper[26425]: I0217 15:48:14.744960 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5fl\" (UniqueName: \"kubernetes.io/projected/989291d4-860c-47c4-9042-1a99791aafbb-kube-api-access-5m5fl\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.750507 master-0 kubenswrapper[26425]: I0217 15:48:14.749764 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/989291d4-860c-47c4-9042-1a99791aafbb-combined-ca-bundle\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.750507 master-0 kubenswrapper[26425]: I0217 15:48:14.749769 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/989291d4-860c-47c4-9042-1a99791aafbb-public-tls-certs\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.752228 master-0 kubenswrapper[26425]: I0217 15:48:14.752009 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/989291d4-860c-47c4-9042-1a99791aafbb-config-data\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.763021 master-0 kubenswrapper[26425]: I0217 15:48:14.762971 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96f67\" (UniqueName: \"kubernetes.io/projected/0531c200-ea9b-4ed4-8e7a-ef60e88b8447-kube-api-access-96f67\") pod \"nova-cell0-db-create-pbs2f\" (UID: \"0531c200-ea9b-4ed4-8e7a-ef60e88b8447\") " pod="openstack/nova-cell0-db-create-pbs2f" Feb 17 15:48:14.799969 master-0 kubenswrapper[26425]: I0217 15:48:14.799922 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m5fl\" (UniqueName: \"kubernetes.io/projected/989291d4-860c-47c4-9042-1a99791aafbb-kube-api-access-5m5fl\") pod \"swift-proxy-67bfcfbcf8-m9tkq\" (UID: \"989291d4-860c-47c4-9042-1a99791aafbb\") " pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.809481 master-0 kubenswrapper[26425]: I0217 15:48:14.809399 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-69tfm"] Feb 17 
15:48:14.812479 master-0 kubenswrapper[26425]: I0217 15:48:14.812412 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-69tfm" Feb 17 15:48:14.837527 master-0 kubenswrapper[26425]: I0217 15:48:14.837415 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pbs2f" Feb 17 15:48:14.847100 master-0 kubenswrapper[26425]: I0217 15:48:14.847032 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e-operator-scripts\") pod \"nova-api-87e5-account-create-update-45dj5\" (UID: \"ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e\") " pod="openstack/nova-api-87e5-account-create-update-45dj5" Feb 17 15:48:14.847379 master-0 kubenswrapper[26425]: I0217 15:48:14.847266 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwtpk\" (UniqueName: \"kubernetes.io/projected/ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e-kube-api-access-qwtpk\") pod \"nova-api-87e5-account-create-update-45dj5\" (UID: \"ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e\") " pod="openstack/nova-api-87e5-account-create-update-45dj5" Feb 17 15:48:14.848856 master-0 kubenswrapper[26425]: I0217 15:48:14.848802 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e-operator-scripts\") pod \"nova-api-87e5-account-create-update-45dj5\" (UID: \"ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e\") " pod="openstack/nova-api-87e5-account-create-update-45dj5" Feb 17 15:48:14.864189 master-0 kubenswrapper[26425]: I0217 15:48:14.863497 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwtpk\" (UniqueName: \"kubernetes.io/projected/ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e-kube-api-access-qwtpk\") pod 
\"nova-api-87e5-account-create-update-45dj5\" (UID: \"ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e\") " pod="openstack/nova-api-87e5-account-create-update-45dj5" Feb 17 15:48:14.935830 master-0 kubenswrapper[26425]: I0217 15:48:14.931385 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-87e5-account-create-update-45dj5" Feb 17 15:48:14.940253 master-0 kubenswrapper[26425]: I0217 15:48:14.940124 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-69tfm"] Feb 17 15:48:14.950027 master-0 kubenswrapper[26425]: I0217 15:48:14.948666 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44bf429-8fa4-486c-ab29-eea74da59e3d-operator-scripts\") pod \"nova-cell1-db-create-69tfm\" (UID: \"d44bf429-8fa4-486c-ab29-eea74da59e3d\") " pod="openstack/nova-cell1-db-create-69tfm" Feb 17 15:48:14.950027 master-0 kubenswrapper[26425]: I0217 15:48:14.948754 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gxjg\" (UniqueName: \"kubernetes.io/projected/d44bf429-8fa4-486c-ab29-eea74da59e3d-kube-api-access-2gxjg\") pod \"nova-cell1-db-create-69tfm\" (UID: \"d44bf429-8fa4-486c-ab29-eea74da59e3d\") " pod="openstack/nova-cell1-db-create-69tfm" Feb 17 15:48:14.965698 master-0 kubenswrapper[26425]: I0217 15:48:14.965654 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:14.970741 master-0 kubenswrapper[26425]: I0217 15:48:14.970270 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5cd4-account-create-update-hwzx4"] Feb 17 15:48:14.972213 master-0 kubenswrapper[26425]: I0217 15:48:14.972182 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" Feb 17 15:48:14.974481 master-0 kubenswrapper[26425]: I0217 15:48:14.974422 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 17 15:48:14.996810 master-0 kubenswrapper[26425]: I0217 15:48:14.993282 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5cd4-account-create-update-hwzx4"] Feb 17 15:48:15.051475 master-0 kubenswrapper[26425]: I0217 15:48:15.051390 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60f15465-91a0-44b7-813b-7b3d36d81bd5-operator-scripts\") pod \"nova-cell0-5cd4-account-create-update-hwzx4\" (UID: \"60f15465-91a0-44b7-813b-7b3d36d81bd5\") " pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" Feb 17 15:48:15.051923 master-0 kubenswrapper[26425]: I0217 15:48:15.051804 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tckpt\" (UniqueName: \"kubernetes.io/projected/60f15465-91a0-44b7-813b-7b3d36d81bd5-kube-api-access-tckpt\") pod \"nova-cell0-5cd4-account-create-update-hwzx4\" (UID: \"60f15465-91a0-44b7-813b-7b3d36d81bd5\") " pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" Feb 17 15:48:15.052160 master-0 kubenswrapper[26425]: I0217 15:48:15.052123 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44bf429-8fa4-486c-ab29-eea74da59e3d-operator-scripts\") pod \"nova-cell1-db-create-69tfm\" (UID: \"d44bf429-8fa4-486c-ab29-eea74da59e3d\") " pod="openstack/nova-cell1-db-create-69tfm" Feb 17 15:48:15.052407 master-0 kubenswrapper[26425]: I0217 15:48:15.052295 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gxjg\" (UniqueName: 
\"kubernetes.io/projected/d44bf429-8fa4-486c-ab29-eea74da59e3d-kube-api-access-2gxjg\") pod \"nova-cell1-db-create-69tfm\" (UID: \"d44bf429-8fa4-486c-ab29-eea74da59e3d\") " pod="openstack/nova-cell1-db-create-69tfm" Feb 17 15:48:15.053554 master-0 kubenswrapper[26425]: I0217 15:48:15.053515 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44bf429-8fa4-486c-ab29-eea74da59e3d-operator-scripts\") pod \"nova-cell1-db-create-69tfm\" (UID: \"d44bf429-8fa4-486c-ab29-eea74da59e3d\") " pod="openstack/nova-cell1-db-create-69tfm" Feb 17 15:48:15.054643 master-0 kubenswrapper[26425]: I0217 15:48:15.054483 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f7f8-account-create-update-2x5s2"] Feb 17 15:48:15.057269 master-0 kubenswrapper[26425]: I0217 15:48:15.057211 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f7f8-account-create-update-2x5s2" Feb 17 15:48:15.069839 master-0 kubenswrapper[26425]: I0217 15:48:15.069639 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f7f8-account-create-update-2x5s2"] Feb 17 15:48:15.078125 master-0 kubenswrapper[26425]: I0217 15:48:15.071847 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 17 15:48:15.094696 master-0 kubenswrapper[26425]: I0217 15:48:15.088287 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gxjg\" (UniqueName: \"kubernetes.io/projected/d44bf429-8fa4-486c-ab29-eea74da59e3d-kube-api-access-2gxjg\") pod \"nova-cell1-db-create-69tfm\" (UID: \"d44bf429-8fa4-486c-ab29-eea74da59e3d\") " pod="openstack/nova-cell1-db-create-69tfm" Feb 17 15:48:15.155088 master-0 kubenswrapper[26425]: I0217 15:48:15.154876 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8-operator-scripts\") pod \"nova-cell1-f7f8-account-create-update-2x5s2\" (UID: \"5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8\") " pod="openstack/nova-cell1-f7f8-account-create-update-2x5s2" Feb 17 15:48:15.155088 master-0 kubenswrapper[26425]: I0217 15:48:15.154973 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60f15465-91a0-44b7-813b-7b3d36d81bd5-operator-scripts\") pod \"nova-cell0-5cd4-account-create-update-hwzx4\" (UID: \"60f15465-91a0-44b7-813b-7b3d36d81bd5\") " pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" Feb 17 15:48:15.155088 master-0 kubenswrapper[26425]: I0217 15:48:15.155093 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tckpt\" (UniqueName: \"kubernetes.io/projected/60f15465-91a0-44b7-813b-7b3d36d81bd5-kube-api-access-tckpt\") pod \"nova-cell0-5cd4-account-create-update-hwzx4\" (UID: \"60f15465-91a0-44b7-813b-7b3d36d81bd5\") " pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" Feb 17 15:48:15.155402 master-0 kubenswrapper[26425]: I0217 15:48:15.155136 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t28v9\" (UniqueName: \"kubernetes.io/projected/5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8-kube-api-access-t28v9\") pod \"nova-cell1-f7f8-account-create-update-2x5s2\" (UID: \"5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8\") " pod="openstack/nova-cell1-f7f8-account-create-update-2x5s2" Feb 17 15:48:15.156570 master-0 kubenswrapper[26425]: I0217 15:48:15.156538 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60f15465-91a0-44b7-813b-7b3d36d81bd5-operator-scripts\") pod \"nova-cell0-5cd4-account-create-update-hwzx4\" (UID: \"60f15465-91a0-44b7-813b-7b3d36d81bd5\") " 
pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" Feb 17 15:48:15.173952 master-0 kubenswrapper[26425]: I0217 15:48:15.173829 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tckpt\" (UniqueName: \"kubernetes.io/projected/60f15465-91a0-44b7-813b-7b3d36d81bd5-kube-api-access-tckpt\") pod \"nova-cell0-5cd4-account-create-update-hwzx4\" (UID: \"60f15465-91a0-44b7-813b-7b3d36d81bd5\") " pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" Feb 17 15:48:15.248516 master-0 kubenswrapper[26425]: I0217 15:48:15.245907 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-69tfm" Feb 17 15:48:15.257810 master-0 kubenswrapper[26425]: I0217 15:48:15.257761 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8-operator-scripts\") pod \"nova-cell1-f7f8-account-create-update-2x5s2\" (UID: \"5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8\") " pod="openstack/nova-cell1-f7f8-account-create-update-2x5s2" Feb 17 15:48:15.257935 master-0 kubenswrapper[26425]: I0217 15:48:15.257907 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t28v9\" (UniqueName: \"kubernetes.io/projected/5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8-kube-api-access-t28v9\") pod \"nova-cell1-f7f8-account-create-update-2x5s2\" (UID: \"5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8\") " pod="openstack/nova-cell1-f7f8-account-create-update-2x5s2" Feb 17 15:48:15.261471 master-0 kubenswrapper[26425]: I0217 15:48:15.261411 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8-operator-scripts\") pod \"nova-cell1-f7f8-account-create-update-2x5s2\" (UID: \"5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8\") " 
pod="openstack/nova-cell1-f7f8-account-create-update-2x5s2" Feb 17 15:48:15.277077 master-0 kubenswrapper[26425]: I0217 15:48:15.276212 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t28v9\" (UniqueName: \"kubernetes.io/projected/5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8-kube-api-access-t28v9\") pod \"nova-cell1-f7f8-account-create-update-2x5s2\" (UID: \"5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8\") " pod="openstack/nova-cell1-f7f8-account-create-update-2x5s2" Feb 17 15:48:15.312335 master-0 kubenswrapper[26425]: W0217 15:48:15.312155 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fde6099_c168_43b1_acbf_cbbdc3ca2435.slice/crio-7e89738748d1889c776c2cefb58d300a3a1fc55c46b7f2ac1d1e57eeae0fb3aa WatchSource:0}: Error finding container 7e89738748d1889c776c2cefb58d300a3a1fc55c46b7f2ac1d1e57eeae0fb3aa: Status 404 returned error can't find the container with id 7e89738748d1889c776c2cefb58d300a3a1fc55c46b7f2ac1d1e57eeae0fb3aa Feb 17 15:48:15.321335 master-0 kubenswrapper[26425]: I0217 15:48:15.321243 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-4lmzn"] Feb 17 15:48:15.329965 master-0 kubenswrapper[26425]: I0217 15:48:15.329855 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" Feb 17 15:48:15.416232 master-0 kubenswrapper[26425]: I0217 15:48:15.407877 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f7f8-account-create-update-2x5s2" Feb 17 15:48:15.572207 master-0 kubenswrapper[26425]: I0217 15:48:15.572074 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-4lmzn" event={"ID":"7fde6099-c168-43b1-acbf-cbbdc3ca2435","Type":"ContainerStarted","Data":"7e89738748d1889c776c2cefb58d300a3a1fc55c46b7f2ac1d1e57eeae0fb3aa"} Feb 17 15:48:15.572961 master-0 kubenswrapper[26425]: I0217 15:48:15.572931 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pbs2f"] Feb 17 15:48:15.744772 master-0 kubenswrapper[26425]: I0217 15:48:15.744708 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-67bfcfbcf8-m9tkq"] Feb 17 15:48:15.762174 master-0 kubenswrapper[26425]: I0217 15:48:15.762074 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-87e5-account-create-update-45dj5"] Feb 17 15:48:15.985608 master-0 kubenswrapper[26425]: I0217 15:48:15.985547 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5cd4-account-create-update-hwzx4"] Feb 17 15:48:16.002378 master-0 kubenswrapper[26425]: I0217 15:48:16.002236 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-69tfm"] Feb 17 15:48:16.133662 master-0 kubenswrapper[26425]: I0217 15:48:16.133596 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c5cd8d-bjbtl" Feb 17 15:48:16.280812 master-0 kubenswrapper[26425]: I0217 15:48:16.280755 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f7f8-account-create-update-2x5s2"] Feb 17 15:48:16.591425 master-0 kubenswrapper[26425]: I0217 15:48:16.591184 26425 generic.go:334] "Generic (PLEG): container finished" podID="7fde6099-c168-43b1-acbf-cbbdc3ca2435" containerID="7e5e0391d8c32e08824683c890492f0a9a9dd8718cd8c97a3e0f2c389c1cf0d4" exitCode=0 Feb 17 15:48:16.591425 
master-0 kubenswrapper[26425]: I0217 15:48:16.591230 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-4lmzn" event={"ID":"7fde6099-c168-43b1-acbf-cbbdc3ca2435","Type":"ContainerDied","Data":"7e5e0391d8c32e08824683c890492f0a9a9dd8718cd8c97a3e0f2c389c1cf0d4"} Feb 17 15:48:17.053567 master-0 kubenswrapper[26425]: W0217 15:48:17.053500 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod989291d4_860c_47c4_9042_1a99791aafbb.slice/crio-6580e16be6dcdb3bba417b3c44015352ae541498929eeb8c8f38248b05ab1ce7 WatchSource:0}: Error finding container 6580e16be6dcdb3bba417b3c44015352ae541498929eeb8c8f38248b05ab1ce7: Status 404 returned error can't find the container with id 6580e16be6dcdb3bba417b3c44015352ae541498929eeb8c8f38248b05ab1ce7 Feb 17 15:48:17.059277 master-0 kubenswrapper[26425]: W0217 15:48:17.059212 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab38c29e_22bf_46dc_ac9c_2efc64fa0c1e.slice/crio-95ce71f73ea350f9e90e39ea5249f295081f5369e029b5b330e00da2bb939f0c WatchSource:0}: Error finding container 95ce71f73ea350f9e90e39ea5249f295081f5369e029b5b330e00da2bb939f0c: Status 404 returned error can't find the container with id 95ce71f73ea350f9e90e39ea5249f295081f5369e029b5b330e00da2bb939f0c Feb 17 15:48:17.059695 master-0 kubenswrapper[26425]: W0217 15:48:17.059663 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd44bf429_8fa4_486c_ab29_eea74da59e3d.slice/crio-a579aadc1190d3ee953e0203a08232cb76e74171d4bd405ddad8ff2fb22d4485 WatchSource:0}: Error finding container a579aadc1190d3ee953e0203a08232cb76e74171d4bd405ddad8ff2fb22d4485: Status 404 returned error can't find the container with id a579aadc1190d3ee953e0203a08232cb76e74171d4bd405ddad8ff2fb22d4485 Feb 17 15:48:17.072365 master-0 
kubenswrapper[26425]: W0217 15:48:17.068523 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a31ffb7_3788_4095_aa10_a7e5ca6ec7b8.slice/crio-d09d754c14b1c19bea986033e3dcf680d31f82f51b38bf6ea79d7f3fc95a4883 WatchSource:0}: Error finding container d09d754c14b1c19bea986033e3dcf680d31f82f51b38bf6ea79d7f3fc95a4883: Status 404 returned error can't find the container with id d09d754c14b1c19bea986033e3dcf680d31f82f51b38bf6ea79d7f3fc95a4883 Feb 17 15:48:17.072365 master-0 kubenswrapper[26425]: W0217 15:48:17.071578 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0531c200_ea9b_4ed4_8e7a_ef60e88b8447.slice/crio-ea1d8bd4f3cee4962015c3540146d329af760f2ff1971e21e26ca35d58e6dfd1 WatchSource:0}: Error finding container ea1d8bd4f3cee4962015c3540146d329af760f2ff1971e21e26ca35d58e6dfd1: Status 404 returned error can't find the container with id ea1d8bd4f3cee4962015c3540146d329af760f2ff1971e21e26ca35d58e6dfd1 Feb 17 15:48:17.075271 master-0 kubenswrapper[26425]: W0217 15:48:17.075192 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60f15465_91a0_44b7_813b_7b3d36d81bd5.slice/crio-39065c59caf42c1e8824a00bd051c662567cd1458c0cdc4de9dcde3da3e9f5c7 WatchSource:0}: Error finding container 39065c59caf42c1e8824a00bd051c662567cd1458c0cdc4de9dcde3da3e9f5c7: Status 404 returned error can't find the container with id 39065c59caf42c1e8824a00bd051c662567cd1458c0cdc4de9dcde3da3e9f5c7 Feb 17 15:48:17.606469 master-0 kubenswrapper[26425]: I0217 15:48:17.606388 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f7f8-account-create-update-2x5s2" event={"ID":"5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8","Type":"ContainerStarted","Data":"d09d754c14b1c19bea986033e3dcf680d31f82f51b38bf6ea79d7f3fc95a4883"} Feb 17 15:48:17.608657 
master-0 kubenswrapper[26425]: I0217 15:48:17.607970 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" event={"ID":"60f15465-91a0-44b7-813b-7b3d36d81bd5","Type":"ContainerStarted","Data":"39065c59caf42c1e8824a00bd051c662567cd1458c0cdc4de9dcde3da3e9f5c7"} Feb 17 15:48:17.609825 master-0 kubenswrapper[26425]: I0217 15:48:17.609527 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-69tfm" event={"ID":"d44bf429-8fa4-486c-ab29-eea74da59e3d","Type":"ContainerStarted","Data":"a579aadc1190d3ee953e0203a08232cb76e74171d4bd405ddad8ff2fb22d4485"} Feb 17 15:48:17.611186 master-0 kubenswrapper[26425]: I0217 15:48:17.610995 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pbs2f" event={"ID":"0531c200-ea9b-4ed4-8e7a-ef60e88b8447","Type":"ContainerStarted","Data":"ea1d8bd4f3cee4962015c3540146d329af760f2ff1971e21e26ca35d58e6dfd1"} Feb 17 15:48:17.613616 master-0 kubenswrapper[26425]: I0217 15:48:17.613332 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-87e5-account-create-update-45dj5" event={"ID":"ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e","Type":"ContainerStarted","Data":"95ce71f73ea350f9e90e39ea5249f295081f5369e029b5b330e00da2bb939f0c"} Feb 17 15:48:17.615347 master-0 kubenswrapper[26425]: I0217 15:48:17.615280 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" event={"ID":"989291d4-860c-47c4-9042-1a99791aafbb","Type":"ContainerStarted","Data":"6580e16be6dcdb3bba417b3c44015352ae541498929eeb8c8f38248b05ab1ce7"} Feb 17 15:48:18.501025 master-0 kubenswrapper[26425]: I0217 15:48:18.500977 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c6d47966f-zhq5k" Feb 17 15:48:23.197907 master-0 kubenswrapper[26425]: I0217 15:48:23.197833 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:48:23.198679 master-0 kubenswrapper[26425]: E0217 15:48:23.198014 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:48:23.198679 master-0 kubenswrapper[26425]: E0217 15:48:23.198038 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:48:23.198679 master-0 kubenswrapper[26425]: E0217 15:48:23.198097 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:50:25.198076691 +0000 UTC m=+2087.089800529 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:48:24.395967 master-0 kubenswrapper[26425]: I0217 15:48:24.395910 26425 scope.go:117] "RemoveContainer" containerID="7932543853c55a33d0c952251a5538d1bbf1d0b21a3de20d277b4d95d82d53af" Feb 17 15:48:25.727311 master-0 kubenswrapper[26425]: I0217 15:48:25.727255 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-4lmzn" event={"ID":"7fde6099-c168-43b1-acbf-cbbdc3ca2435","Type":"ContainerDied","Data":"7e89738748d1889c776c2cefb58d300a3a1fc55c46b7f2ac1d1e57eeae0fb3aa"} Feb 17 15:48:25.727311 master-0 kubenswrapper[26425]: I0217 15:48:25.727305 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e89738748d1889c776c2cefb58d300a3a1fc55c46b7f2ac1d1e57eeae0fb3aa" Feb 17 15:48:25.759087 master-0 kubenswrapper[26425]: I0217 15:48:25.759051 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-4lmzn" Feb 17 15:48:25.864705 master-0 kubenswrapper[26425]: I0217 15:48:25.864605 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfw86\" (UniqueName: \"kubernetes.io/projected/7fde6099-c168-43b1-acbf-cbbdc3ca2435-kube-api-access-xfw86\") pod \"7fde6099-c168-43b1-acbf-cbbdc3ca2435\" (UID: \"7fde6099-c168-43b1-acbf-cbbdc3ca2435\") " Feb 17 15:48:25.864934 master-0 kubenswrapper[26425]: I0217 15:48:25.864814 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fde6099-c168-43b1-acbf-cbbdc3ca2435-operator-scripts\") pod \"7fde6099-c168-43b1-acbf-cbbdc3ca2435\" (UID: \"7fde6099-c168-43b1-acbf-cbbdc3ca2435\") " Feb 17 15:48:25.865359 master-0 kubenswrapper[26425]: I0217 15:48:25.865300 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fde6099-c168-43b1-acbf-cbbdc3ca2435-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fde6099-c168-43b1-acbf-cbbdc3ca2435" (UID: "7fde6099-c168-43b1-acbf-cbbdc3ca2435"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:48:25.866164 master-0 kubenswrapper[26425]: I0217 15:48:25.866128 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fde6099-c168-43b1-acbf-cbbdc3ca2435-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:25.869064 master-0 kubenswrapper[26425]: I0217 15:48:25.869011 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fde6099-c168-43b1-acbf-cbbdc3ca2435-kube-api-access-xfw86" (OuterVolumeSpecName: "kube-api-access-xfw86") pod "7fde6099-c168-43b1-acbf-cbbdc3ca2435" (UID: "7fde6099-c168-43b1-acbf-cbbdc3ca2435"). InnerVolumeSpecName "kube-api-access-xfw86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:48:25.969399 master-0 kubenswrapper[26425]: I0217 15:48:25.969313 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfw86\" (UniqueName: \"kubernetes.io/projected/7fde6099-c168-43b1-acbf-cbbdc3ca2435-kube-api-access-xfw86\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:26.739150 master-0 kubenswrapper[26425]: I0217 15:48:26.739015 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-4lmzn" Feb 17 15:48:31.571892 master-0 kubenswrapper[26425]: I0217 15:48:31.571841 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5fd74d8d4b-qd7wh" Feb 17 15:48:31.804957 master-0 kubenswrapper[26425]: I0217 15:48:31.804805 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-87e5-account-create-update-45dj5" event={"ID":"ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e","Type":"ContainerStarted","Data":"e86aa7cd8eb2364bb61f01a987eadcfc1f4f4b956be4744d7e71b5921eb41fca"} Feb 17 15:48:31.868572 master-0 kubenswrapper[26425]: I0217 15:48:31.868507 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5fd74d8d4b-qd7wh" Feb 17 15:48:34.857849 master-0 kubenswrapper[26425]: I0217 15:48:34.857789 26425 generic.go:334] "Generic (PLEG): container finished" podID="ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e" containerID="e86aa7cd8eb2364bb61f01a987eadcfc1f4f4b956be4744d7e71b5921eb41fca" exitCode=0 Feb 17 15:48:34.858806 master-0 kubenswrapper[26425]: I0217 15:48:34.857878 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-87e5-account-create-update-45dj5" event={"ID":"ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e","Type":"ContainerDied","Data":"e86aa7cd8eb2364bb61f01a987eadcfc1f4f4b956be4744d7e71b5921eb41fca"} Feb 17 15:48:35.789501 master-0 kubenswrapper[26425]: I0217 15:48:35.786827 26425 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/neutron-5c5cd8d-bjbtl"] Feb 17 15:48:35.789501 master-0 kubenswrapper[26425]: I0217 15:48:35.787124 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c5cd8d-bjbtl" podUID="3d5a2ac6-930f-43d0-873f-3bd2cc9df572" containerName="neutron-api" containerID="cri-o://fdb8892881461652575531c8f135056b00711a9f7fe6e90bd40559e27cc55139" gracePeriod=30 Feb 17 15:48:35.789501 master-0 kubenswrapper[26425]: I0217 15:48:35.787176 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c5cd8d-bjbtl" podUID="3d5a2ac6-930f-43d0-873f-3bd2cc9df572" containerName="neutron-httpd" containerID="cri-o://e5d754197dae0faa508e306cc685012aea92994c0e4e9de5c979da8062894785" gracePeriod=30 Feb 17 15:48:36.048374 master-0 kubenswrapper[26425]: I0217 15:48:36.046438 26425 generic.go:334] "Generic (PLEG): container finished" podID="5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8" containerID="fbd7bd9ececd5cc6a0ff7ae37eb6d6c44ae7099e4315291721f390452760d9e0" exitCode=0 Feb 17 15:48:36.048374 master-0 kubenswrapper[26425]: I0217 15:48:36.046660 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f7f8-account-create-update-2x5s2" event={"ID":"5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8","Type":"ContainerDied","Data":"fbd7bd9ececd5cc6a0ff7ae37eb6d6c44ae7099e4315291721f390452760d9e0"} Feb 17 15:48:36.072477 master-0 kubenswrapper[26425]: I0217 15:48:36.062635 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pbs2f" event={"ID":"0531c200-ea9b-4ed4-8e7a-ef60e88b8447","Type":"ContainerStarted","Data":"d423eed300799e9cc4d1570f67629003077b5c16d1b830d2104c74b29a6ad990"} Feb 17 15:48:36.083741 master-0 kubenswrapper[26425]: I0217 15:48:36.079010 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-69tfm" 
event={"ID":"d44bf429-8fa4-486c-ab29-eea74da59e3d","Type":"ContainerStarted","Data":"6a0bc9d9d1cc5cd5109ffc72849a04122b21698823b3fb0996bf7486e598b42e"} Feb 17 15:48:36.116886 master-0 kubenswrapper[26425]: I0217 15:48:36.116814 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c26c340-473b-49c9-a62f-1915fac7b655","Type":"ContainerStarted","Data":"1bce57f6e43bd22c0b2dd7dbc6ec44bf1532d823a33b1a146c3e4f86cace202b"} Feb 17 15:48:36.131611 master-0 kubenswrapper[26425]: I0217 15:48:36.120891 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-x86bq" event={"ID":"0704cefc-181d-40ab-ba9c-a204b5f85727","Type":"ContainerStarted","Data":"0def1e902fd277938055117ef5e556aeeca6a61c9aab95a4a574f56cd2a7c5ac"} Feb 17 15:48:36.131611 master-0 kubenswrapper[26425]: I0217 15:48:36.128075 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ba8787c2-9caf-4bc9-aa48-4bd83ba39ee0","Type":"ContainerStarted","Data":"e78e96407d005f08b61c651a0ba3e360d2e831e547ca94caa1f3a040d72c0de5"} Feb 17 15:48:36.139790 master-0 kubenswrapper[26425]: I0217 15:48:36.139735 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" event={"ID":"989291d4-860c-47c4-9042-1a99791aafbb","Type":"ContainerStarted","Data":"a74392513f5f1c5d2e88edf42c8898a02752b7d21666cce24ad924dea17cbcf2"} Feb 17 15:48:36.149424 master-0 kubenswrapper[26425]: I0217 15:48:36.149297 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" event={"ID":"ea8f52d0-e4bb-4457-b7f7-33133e152096","Type":"ContainerStarted","Data":"c0e40c65c0f1e845d741453ffc75480cd95e82f1f95bd014ec8d3d733a12803e"} Feb 17 15:48:36.149935 master-0 kubenswrapper[26425]: I0217 15:48:36.149810 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6" Feb 17 
15:48:36.152248 master-0 kubenswrapper[26425]: I0217 15:48:36.152196 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" event={"ID":"60f15465-91a0-44b7-813b-7b3d36d81bd5","Type":"ContainerStarted","Data":"4f5a04768269329cc7afdfaa29dde4d88746161efd2f4d4256568136ef8459f2"} Feb 17 15:48:36.271353 master-0 kubenswrapper[26425]: I0217 15:48:36.271295 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5b57c6d9b6-frt4v"] Feb 17 15:48:36.271607 master-0 kubenswrapper[26425]: I0217 15:48:36.271580 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5b57c6d9b6-frt4v" podUID="595c3aef-36e6-4a07-ad78-32535353193d" containerName="placement-log" containerID="cri-o://2ca4e33b1d685eb098e1a7e190a5cd7938eca0fe6ca209cefd81e17ba07dae26" gracePeriod=30 Feb 17 15:48:36.271756 master-0 kubenswrapper[26425]: I0217 15:48:36.271729 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5b57c6d9b6-frt4v" podUID="595c3aef-36e6-4a07-ad78-32535353193d" containerName="placement-api" containerID="cri-o://61340d49a27c2f36eecd9eae5e4989b978dd7bd2c87a59a336928f48c792664a" gracePeriod=30 Feb 17 15:48:36.343459 master-0 kubenswrapper[26425]: I0217 15:48:36.343347 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.648476916 podStartE2EDuration="28.343328365s" podCreationTimestamp="2026-02-17 15:48:08 +0000 UTC" firstStartedPulling="2026-02-17 15:48:11.307926247 +0000 UTC m=+1953.199650065" lastFinishedPulling="2026-02-17 15:48:35.002777706 +0000 UTC m=+1976.894501514" observedRunningTime="2026-02-17 15:48:36.30308049 +0000 UTC m=+1978.194804308" watchObservedRunningTime="2026-02-17 15:48:36.343328365 +0000 UTC m=+1978.235052183" Feb 17 15:48:36.393565 master-0 kubenswrapper[26425]: I0217 15:48:36.389080 26425 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" podStartSLOduration=22.389056942 podStartE2EDuration="22.389056942s" podCreationTimestamp="2026-02-17 15:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:48:36.344070872 +0000 UTC m=+1978.235794690" watchObservedRunningTime="2026-02-17 15:48:36.389056942 +0000 UTC m=+1978.280780760" Feb 17 15:48:36.405491 master-0 kubenswrapper[26425]: I0217 15:48:36.402128 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-x86bq" podStartSLOduration=2.719492189 podStartE2EDuration="25.402111155s" podCreationTimestamp="2026-02-17 15:48:11 +0000 UTC" firstStartedPulling="2026-02-17 15:48:12.243290758 +0000 UTC m=+1954.135014576" lastFinishedPulling="2026-02-17 15:48:34.925909734 +0000 UTC m=+1976.817633542" observedRunningTime="2026-02-17 15:48:36.38193193 +0000 UTC m=+1978.273655748" watchObservedRunningTime="2026-02-17 15:48:36.402111155 +0000 UTC m=+1978.293834973" Feb 17 15:48:36.432617 master-0 kubenswrapper[26425]: I0217 15:48:36.431307 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-pbs2f" podStartSLOduration=22.431283545 podStartE2EDuration="22.431283545s" podCreationTimestamp="2026-02-17 15:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:48:36.420947516 +0000 UTC m=+1978.312671364" watchObservedRunningTime="2026-02-17 15:48:36.431283545 +0000 UTC m=+1978.323007363" Feb 17 15:48:36.642688 master-0 kubenswrapper[26425]: I0217 15:48:36.640607 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-69tfm" podStartSLOduration=22.640589664 
podStartE2EDuration="22.640589664s" podCreationTimestamp="2026-02-17 15:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:48:36.614059947 +0000 UTC m=+1978.505783765" watchObservedRunningTime="2026-02-17 15:48:36.640589664 +0000 UTC m=+1978.532313482" Feb 17 15:48:36.768008 master-0 kubenswrapper[26425]: I0217 15:48:36.767953 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-87e5-account-create-update-45dj5" Feb 17 15:48:36.878040 master-0 kubenswrapper[26425]: I0217 15:48:36.877955 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwtpk\" (UniqueName: \"kubernetes.io/projected/ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e-kube-api-access-qwtpk\") pod \"ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e\" (UID: \"ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e\") " Feb 17 15:48:36.878262 master-0 kubenswrapper[26425]: I0217 15:48:36.878144 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e-operator-scripts\") pod \"ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e\" (UID: \"ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e\") " Feb 17 15:48:36.878884 master-0 kubenswrapper[26425]: I0217 15:48:36.878823 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e" (UID: "ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:48:36.880852 master-0 kubenswrapper[26425]: I0217 15:48:36.880803 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e-kube-api-access-qwtpk" (OuterVolumeSpecName: "kube-api-access-qwtpk") pod "ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e" (UID: "ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e"). InnerVolumeSpecName "kube-api-access-qwtpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:48:36.980799 master-0 kubenswrapper[26425]: I0217 15:48:36.980690 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwtpk\" (UniqueName: \"kubernetes.io/projected/ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e-kube-api-access-qwtpk\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:36.980799 master-0 kubenswrapper[26425]: I0217 15:48:36.980747 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:37.217973 master-0 kubenswrapper[26425]: I0217 15:48:37.217750 26425 generic.go:334] "Generic (PLEG): container finished" podID="0531c200-ea9b-4ed4-8e7a-ef60e88b8447" containerID="d423eed300799e9cc4d1570f67629003077b5c16d1b830d2104c74b29a6ad990" exitCode=0 Feb 17 15:48:37.217973 master-0 kubenswrapper[26425]: I0217 15:48:37.217846 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pbs2f" event={"ID":"0531c200-ea9b-4ed4-8e7a-ef60e88b8447","Type":"ContainerDied","Data":"d423eed300799e9cc4d1570f67629003077b5c16d1b830d2104c74b29a6ad990"} Feb 17 15:48:37.240864 master-0 kubenswrapper[26425]: I0217 15:48:37.240797 26425 generic.go:334] "Generic (PLEG): container finished" podID="595c3aef-36e6-4a07-ad78-32535353193d" containerID="2ca4e33b1d685eb098e1a7e190a5cd7938eca0fe6ca209cefd81e17ba07dae26" exitCode=143 Feb 17 
15:48:37.241094 master-0 kubenswrapper[26425]: I0217 15:48:37.240901 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b57c6d9b6-frt4v" event={"ID":"595c3aef-36e6-4a07-ad78-32535353193d","Type":"ContainerDied","Data":"2ca4e33b1d685eb098e1a7e190a5cd7938eca0fe6ca209cefd81e17ba07dae26"} Feb 17 15:48:37.262588 master-0 kubenswrapper[26425]: I0217 15:48:37.262526 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-87e5-account-create-update-45dj5" event={"ID":"ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e","Type":"ContainerDied","Data":"95ce71f73ea350f9e90e39ea5249f295081f5369e029b5b330e00da2bb939f0c"} Feb 17 15:48:37.262588 master-0 kubenswrapper[26425]: I0217 15:48:37.262584 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95ce71f73ea350f9e90e39ea5249f295081f5369e029b5b330e00da2bb939f0c" Feb 17 15:48:37.262928 master-0 kubenswrapper[26425]: I0217 15:48:37.262668 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-87e5-account-create-update-45dj5" Feb 17 15:48:37.287002 master-0 kubenswrapper[26425]: I0217 15:48:37.285742 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" event={"ID":"989291d4-860c-47c4-9042-1a99791aafbb","Type":"ContainerStarted","Data":"05a7fc6d05815f882a7ac66dd3f9991ea30535e508a0d5cd4f56dede7fbd5854"} Feb 17 15:48:37.287002 master-0 kubenswrapper[26425]: I0217 15:48:37.286833 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:37.287002 master-0 kubenswrapper[26425]: I0217 15:48:37.286858 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" Feb 17 15:48:37.319203 master-0 kubenswrapper[26425]: I0217 15:48:37.316139 26425 generic.go:334] "Generic (PLEG): container finished" podID="3d5a2ac6-930f-43d0-873f-3bd2cc9df572" containerID="e5d754197dae0faa508e306cc685012aea92994c0e4e9de5c979da8062894785" exitCode=0 Feb 17 15:48:37.319203 master-0 kubenswrapper[26425]: I0217 15:48:37.316245 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5cd8d-bjbtl" event={"ID":"3d5a2ac6-930f-43d0-873f-3bd2cc9df572","Type":"ContainerDied","Data":"e5d754197dae0faa508e306cc685012aea92994c0e4e9de5c979da8062894785"} Feb 17 15:48:37.346183 master-0 kubenswrapper[26425]: I0217 15:48:37.345953 26425 generic.go:334] "Generic (PLEG): container finished" podID="60f15465-91a0-44b7-813b-7b3d36d81bd5" containerID="4f5a04768269329cc7afdfaa29dde4d88746161efd2f4d4256568136ef8459f2" exitCode=0 Feb 17 15:48:37.346183 master-0 kubenswrapper[26425]: I0217 15:48:37.346137 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" event={"ID":"60f15465-91a0-44b7-813b-7b3d36d81bd5","Type":"ContainerDied","Data":"4f5a04768269329cc7afdfaa29dde4d88746161efd2f4d4256568136ef8459f2"} Feb 
17 15:48:37.364499 master-0 kubenswrapper[26425]: I0217 15:48:37.359878 26425 generic.go:334] "Generic (PLEG): container finished" podID="d44bf429-8fa4-486c-ab29-eea74da59e3d" containerID="6a0bc9d9d1cc5cd5109ffc72849a04122b21698823b3fb0996bf7486e598b42e" exitCode=0 Feb 17 15:48:37.364499 master-0 kubenswrapper[26425]: I0217 15:48:37.360894 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-69tfm" event={"ID":"d44bf429-8fa4-486c-ab29-eea74da59e3d","Type":"ContainerDied","Data":"6a0bc9d9d1cc5cd5109ffc72849a04122b21698823b3fb0996bf7486e598b42e"} Feb 17 15:48:37.943166 master-0 kubenswrapper[26425]: I0217 15:48:37.942806 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f7f8-account-create-update-2x5s2" Feb 17 15:48:37.957489 master-0 kubenswrapper[26425]: I0217 15:48:37.956419 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-67bfcfbcf8-m9tkq" podStartSLOduration=23.956392789 podStartE2EDuration="23.956392789s" podCreationTimestamp="2026-02-17 15:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:48:37.93185261 +0000 UTC m=+1979.823576438" watchObservedRunningTime="2026-02-17 15:48:37.956392789 +0000 UTC m=+1979.848116627" Feb 17 15:48:38.026494 master-0 kubenswrapper[26425]: I0217 15:48:38.018757 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8-operator-scripts\") pod \"5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8\" (UID: \"5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8\") " Feb 17 15:48:38.026494 master-0 kubenswrapper[26425]: I0217 15:48:38.018859 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t28v9\" (UniqueName: 
\"kubernetes.io/projected/5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8-kube-api-access-t28v9\") pod \"5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8\" (UID: \"5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8\") " Feb 17 15:48:38.026494 master-0 kubenswrapper[26425]: I0217 15:48:38.020111 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8" (UID: "5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:48:38.026494 master-0 kubenswrapper[26425]: I0217 15:48:38.022955 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8-kube-api-access-t28v9" (OuterVolumeSpecName: "kube-api-access-t28v9") pod "5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8" (UID: "5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8"). InnerVolumeSpecName "kube-api-access-t28v9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:48:38.026494 master-0 kubenswrapper[26425]: I0217 15:48:38.024181 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7b9c2-default-external-api-0"] Feb 17 15:48:38.026494 master-0 kubenswrapper[26425]: I0217 15:48:38.024422 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-7b9c2-default-external-api-0" podUID="46e17198-94a2-469f-8d1c-34138a1e2420" containerName="glance-log" containerID="cri-o://f27f0c55b8344662ca7b8b23e847884c8141558a6fca1bc3149e931153c0e3fd" gracePeriod=30 Feb 17 15:48:38.026494 master-0 kubenswrapper[26425]: I0217 15:48:38.024541 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-7b9c2-default-external-api-0" podUID="46e17198-94a2-469f-8d1c-34138a1e2420" containerName="glance-httpd" containerID="cri-o://f9143206c8ce19f8473ed081cb0ee830a0bc1a6768a30c3ffb31795ad772d91f" gracePeriod=30 Feb 17 15:48:38.124714 master-0 kubenswrapper[26425]: I0217 15:48:38.124643 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:38.124714 master-0 kubenswrapper[26425]: I0217 15:48:38.124695 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t28v9\" (UniqueName: \"kubernetes.io/projected/5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8-kube-api-access-t28v9\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:38.389527 master-0 kubenswrapper[26425]: I0217 15:48:38.385858 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f7f8-account-create-update-2x5s2" Feb 17 15:48:38.389527 master-0 kubenswrapper[26425]: I0217 15:48:38.385859 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f7f8-account-create-update-2x5s2" event={"ID":"5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8","Type":"ContainerDied","Data":"d09d754c14b1c19bea986033e3dcf680d31f82f51b38bf6ea79d7f3fc95a4883"} Feb 17 15:48:38.389527 master-0 kubenswrapper[26425]: I0217 15:48:38.386008 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d09d754c14b1c19bea986033e3dcf680d31f82f51b38bf6ea79d7f3fc95a4883" Feb 17 15:48:38.390534 master-0 kubenswrapper[26425]: I0217 15:48:38.389917 26425 generic.go:334] "Generic (PLEG): container finished" podID="0704cefc-181d-40ab-ba9c-a204b5f85727" containerID="0def1e902fd277938055117ef5e556aeeca6a61c9aab95a4a574f56cd2a7c5ac" exitCode=0 Feb 17 15:48:38.390534 master-0 kubenswrapper[26425]: I0217 15:48:38.390002 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-x86bq" event={"ID":"0704cefc-181d-40ab-ba9c-a204b5f85727","Type":"ContainerDied","Data":"0def1e902fd277938055117ef5e556aeeca6a61c9aab95a4a574f56cd2a7c5ac"} Feb 17 15:48:38.399634 master-0 kubenswrapper[26425]: I0217 15:48:38.399564 26425 generic.go:334] "Generic (PLEG): container finished" podID="46e17198-94a2-469f-8d1c-34138a1e2420" containerID="f27f0c55b8344662ca7b8b23e847884c8141558a6fca1bc3149e931153c0e3fd" exitCode=143 Feb 17 15:48:38.473176 master-0 kubenswrapper[26425]: I0217 15:48:38.473109 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-external-api-0" event={"ID":"46e17198-94a2-469f-8d1c-34138a1e2420","Type":"ContainerDied","Data":"f27f0c55b8344662ca7b8b23e847884c8141558a6fca1bc3149e931153c0e3fd"} Feb 17 15:48:38.837523 master-0 kubenswrapper[26425]: I0217 15:48:38.834180 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" Feb 17 15:48:38.967543 master-0 kubenswrapper[26425]: I0217 15:48:38.966915 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tckpt\" (UniqueName: \"kubernetes.io/projected/60f15465-91a0-44b7-813b-7b3d36d81bd5-kube-api-access-tckpt\") pod \"60f15465-91a0-44b7-813b-7b3d36d81bd5\" (UID: \"60f15465-91a0-44b7-813b-7b3d36d81bd5\") " Feb 17 15:48:38.967543 master-0 kubenswrapper[26425]: I0217 15:48:38.967176 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60f15465-91a0-44b7-813b-7b3d36d81bd5-operator-scripts\") pod \"60f15465-91a0-44b7-813b-7b3d36d81bd5\" (UID: \"60f15465-91a0-44b7-813b-7b3d36d81bd5\") " Feb 17 15:48:38.967811 master-0 kubenswrapper[26425]: I0217 15:48:38.967675 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60f15465-91a0-44b7-813b-7b3d36d81bd5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60f15465-91a0-44b7-813b-7b3d36d81bd5" (UID: "60f15465-91a0-44b7-813b-7b3d36d81bd5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:48:38.970839 master-0 kubenswrapper[26425]: I0217 15:48:38.968143 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60f15465-91a0-44b7-813b-7b3d36d81bd5-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:38.976744 master-0 kubenswrapper[26425]: I0217 15:48:38.976680 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f15465-91a0-44b7-813b-7b3d36d81bd5-kube-api-access-tckpt" (OuterVolumeSpecName: "kube-api-access-tckpt") pod "60f15465-91a0-44b7-813b-7b3d36d81bd5" (UID: "60f15465-91a0-44b7-813b-7b3d36d81bd5"). 
InnerVolumeSpecName "kube-api-access-tckpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:48:39.082327 master-0 kubenswrapper[26425]: I0217 15:48:39.082292 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tckpt\" (UniqueName: \"kubernetes.io/projected/60f15465-91a0-44b7-813b-7b3d36d81bd5-kube-api-access-tckpt\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:39.188483 master-0 kubenswrapper[26425]: I0217 15:48:39.188411 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-69tfm" Feb 17 15:48:39.193951 master-0 kubenswrapper[26425]: I0217 15:48:39.193899 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pbs2f" Feb 17 15:48:39.285263 master-0 kubenswrapper[26425]: I0217 15:48:39.285090 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96f67\" (UniqueName: \"kubernetes.io/projected/0531c200-ea9b-4ed4-8e7a-ef60e88b8447-kube-api-access-96f67\") pod \"0531c200-ea9b-4ed4-8e7a-ef60e88b8447\" (UID: \"0531c200-ea9b-4ed4-8e7a-ef60e88b8447\") " Feb 17 15:48:39.285523 master-0 kubenswrapper[26425]: I0217 15:48:39.285266 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44bf429-8fa4-486c-ab29-eea74da59e3d-operator-scripts\") pod \"d44bf429-8fa4-486c-ab29-eea74da59e3d\" (UID: \"d44bf429-8fa4-486c-ab29-eea74da59e3d\") " Feb 17 15:48:39.285523 master-0 kubenswrapper[26425]: I0217 15:48:39.285340 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0531c200-ea9b-4ed4-8e7a-ef60e88b8447-operator-scripts\") pod \"0531c200-ea9b-4ed4-8e7a-ef60e88b8447\" (UID: \"0531c200-ea9b-4ed4-8e7a-ef60e88b8447\") " Feb 17 15:48:39.285523 master-0 kubenswrapper[26425]: I0217 
15:48:39.285376 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gxjg\" (UniqueName: \"kubernetes.io/projected/d44bf429-8fa4-486c-ab29-eea74da59e3d-kube-api-access-2gxjg\") pod \"d44bf429-8fa4-486c-ab29-eea74da59e3d\" (UID: \"d44bf429-8fa4-486c-ab29-eea74da59e3d\") " Feb 17 15:48:39.286387 master-0 kubenswrapper[26425]: I0217 15:48:39.286352 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0531c200-ea9b-4ed4-8e7a-ef60e88b8447-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0531c200-ea9b-4ed4-8e7a-ef60e88b8447" (UID: "0531c200-ea9b-4ed4-8e7a-ef60e88b8447"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:48:39.286993 master-0 kubenswrapper[26425]: I0217 15:48:39.286932 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d44bf429-8fa4-486c-ab29-eea74da59e3d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d44bf429-8fa4-486c-ab29-eea74da59e3d" (UID: "d44bf429-8fa4-486c-ab29-eea74da59e3d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:48:39.288534 master-0 kubenswrapper[26425]: I0217 15:48:39.288434 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0531c200-ea9b-4ed4-8e7a-ef60e88b8447-kube-api-access-96f67" (OuterVolumeSpecName: "kube-api-access-96f67") pod "0531c200-ea9b-4ed4-8e7a-ef60e88b8447" (UID: "0531c200-ea9b-4ed4-8e7a-ef60e88b8447"). InnerVolumeSpecName "kube-api-access-96f67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:48:39.289886 master-0 kubenswrapper[26425]: I0217 15:48:39.289837 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d44bf429-8fa4-486c-ab29-eea74da59e3d-kube-api-access-2gxjg" (OuterVolumeSpecName: "kube-api-access-2gxjg") pod "d44bf429-8fa4-486c-ab29-eea74da59e3d" (UID: "d44bf429-8fa4-486c-ab29-eea74da59e3d"). InnerVolumeSpecName "kube-api-access-2gxjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:48:39.401154 master-0 kubenswrapper[26425]: I0217 15:48:39.400958 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96f67\" (UniqueName: \"kubernetes.io/projected/0531c200-ea9b-4ed4-8e7a-ef60e88b8447-kube-api-access-96f67\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:39.401154 master-0 kubenswrapper[26425]: I0217 15:48:39.401005 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44bf429-8fa4-486c-ab29-eea74da59e3d-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:39.401154 master-0 kubenswrapper[26425]: I0217 15:48:39.401017 26425 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0531c200-ea9b-4ed4-8e7a-ef60e88b8447-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:39.401154 master-0 kubenswrapper[26425]: I0217 15:48:39.401028 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gxjg\" (UniqueName: \"kubernetes.io/projected/d44bf429-8fa4-486c-ab29-eea74da59e3d-kube-api-access-2gxjg\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:39.412909 master-0 kubenswrapper[26425]: I0217 15:48:39.412863 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" Feb 17 15:48:39.413137 master-0 kubenswrapper[26425]: I0217 15:48:39.412856 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5cd4-account-create-update-hwzx4" event={"ID":"60f15465-91a0-44b7-813b-7b3d36d81bd5","Type":"ContainerDied","Data":"39065c59caf42c1e8824a00bd051c662567cd1458c0cdc4de9dcde3da3e9f5c7"} Feb 17 15:48:39.413137 master-0 kubenswrapper[26425]: I0217 15:48:39.412992 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39065c59caf42c1e8824a00bd051c662567cd1458c0cdc4de9dcde3da3e9f5c7" Feb 17 15:48:39.414608 master-0 kubenswrapper[26425]: I0217 15:48:39.414581 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-69tfm" Feb 17 15:48:39.414700 master-0 kubenswrapper[26425]: I0217 15:48:39.414610 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-69tfm" event={"ID":"d44bf429-8fa4-486c-ab29-eea74da59e3d","Type":"ContainerDied","Data":"a579aadc1190d3ee953e0203a08232cb76e74171d4bd405ddad8ff2fb22d4485"} Feb 17 15:48:39.414700 master-0 kubenswrapper[26425]: I0217 15:48:39.414664 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a579aadc1190d3ee953e0203a08232cb76e74171d4bd405ddad8ff2fb22d4485" Feb 17 15:48:39.417390 master-0 kubenswrapper[26425]: I0217 15:48:39.417077 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pbs2f" event={"ID":"0531c200-ea9b-4ed4-8e7a-ef60e88b8447","Type":"ContainerDied","Data":"ea1d8bd4f3cee4962015c3540146d329af760f2ff1971e21e26ca35d58e6dfd1"} Feb 17 15:48:39.417390 master-0 kubenswrapper[26425]: I0217 15:48:39.417164 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea1d8bd4f3cee4962015c3540146d329af760f2ff1971e21e26ca35d58e6dfd1" Feb 17 15:48:39.417390 
master-0 kubenswrapper[26425]: I0217 15:48:39.417229 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pbs2f" Feb 17 15:48:39.876501 master-0 kubenswrapper[26425]: I0217 15:48:39.876396 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-x86bq" Feb 17 15:48:40.020148 master-0 kubenswrapper[26425]: I0217 15:48:40.016954 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdjrx\" (UniqueName: \"kubernetes.io/projected/0704cefc-181d-40ab-ba9c-a204b5f85727-kube-api-access-jdjrx\") pod \"0704cefc-181d-40ab-ba9c-a204b5f85727\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " Feb 17 15:48:40.020148 master-0 kubenswrapper[26425]: I0217 15:48:40.017143 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0704cefc-181d-40ab-ba9c-a204b5f85727-var-lib-ironic\") pod \"0704cefc-181d-40ab-ba9c-a204b5f85727\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " Feb 17 15:48:40.020148 master-0 kubenswrapper[26425]: I0217 15:48:40.017165 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0704cefc-181d-40ab-ba9c-a204b5f85727-etc-podinfo\") pod \"0704cefc-181d-40ab-ba9c-a204b5f85727\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " Feb 17 15:48:40.020148 master-0 kubenswrapper[26425]: I0217 15:48:40.017218 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-scripts\") pod \"0704cefc-181d-40ab-ba9c-a204b5f85727\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") " Feb 17 15:48:40.020540 master-0 kubenswrapper[26425]: I0217 15:48:40.020230 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0704cefc-181d-40ab-ba9c-a204b5f85727-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"0704cefc-181d-40ab-ba9c-a204b5f85727\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") "
Feb 17 15:48:40.020540 master-0 kubenswrapper[26425]: I0217 15:48:40.020415 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-config\") pod \"0704cefc-181d-40ab-ba9c-a204b5f85727\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") "
Feb 17 15:48:40.020540 master-0 kubenswrapper[26425]: I0217 15:48:40.020440 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-combined-ca-bundle\") pod \"0704cefc-181d-40ab-ba9c-a204b5f85727\" (UID: \"0704cefc-181d-40ab-ba9c-a204b5f85727\") "
Feb 17 15:48:40.027492 master-0 kubenswrapper[26425]: I0217 15:48:40.022458 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0704cefc-181d-40ab-ba9c-a204b5f85727-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "0704cefc-181d-40ab-ba9c-a204b5f85727" (UID: "0704cefc-181d-40ab-ba9c-a204b5f85727"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:48:40.027492 master-0 kubenswrapper[26425]: I0217 15:48:40.022572 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0704cefc-181d-40ab-ba9c-a204b5f85727-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "0704cefc-181d-40ab-ba9c-a204b5f85727" (UID: "0704cefc-181d-40ab-ba9c-a204b5f85727"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:48:40.027492 master-0 kubenswrapper[26425]: I0217 15:48:40.023922 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0704cefc-181d-40ab-ba9c-a204b5f85727-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "0704cefc-181d-40ab-ba9c-a204b5f85727" (UID: "0704cefc-181d-40ab-ba9c-a204b5f85727"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 17 15:48:40.027492 master-0 kubenswrapper[26425]: I0217 15:48:40.025781 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0704cefc-181d-40ab-ba9c-a204b5f85727-kube-api-access-jdjrx" (OuterVolumeSpecName: "kube-api-access-jdjrx") pod "0704cefc-181d-40ab-ba9c-a204b5f85727" (UID: "0704cefc-181d-40ab-ba9c-a204b5f85727"). InnerVolumeSpecName "kube-api-access-jdjrx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:48:40.027978 master-0 kubenswrapper[26425]: I0217 15:48:40.027932 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-scripts" (OuterVolumeSpecName: "scripts") pod "0704cefc-181d-40ab-ba9c-a204b5f85727" (UID: "0704cefc-181d-40ab-ba9c-a204b5f85727"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:40.063609 master-0 kubenswrapper[26425]: I0217 15:48:40.063547 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-config" (OuterVolumeSpecName: "config") pod "0704cefc-181d-40ab-ba9c-a204b5f85727" (UID: "0704cefc-181d-40ab-ba9c-a204b5f85727"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:40.067320 master-0 kubenswrapper[26425]: I0217 15:48:40.067266 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0704cefc-181d-40ab-ba9c-a204b5f85727" (UID: "0704cefc-181d-40ab-ba9c-a204b5f85727"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:40.124865 master-0 kubenswrapper[26425]: I0217 15:48:40.124803 26425 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0704cefc-181d-40ab-ba9c-a204b5f85727-var-lib-ironic\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:40.124865 master-0 kubenswrapper[26425]: I0217 15:48:40.124859 26425 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0704cefc-181d-40ab-ba9c-a204b5f85727-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:40.125070 master-0 kubenswrapper[26425]: I0217 15:48:40.124884 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-scripts\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:40.125070 master-0 kubenswrapper[26425]: I0217 15:48:40.124915 26425 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0704cefc-181d-40ab-ba9c-a204b5f85727-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:40.125070 master-0 kubenswrapper[26425]: I0217 15:48:40.124939 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:40.125070 master-0 kubenswrapper[26425]: I0217 15:48:40.124957 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0704cefc-181d-40ab-ba9c-a204b5f85727-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:40.125070 master-0 kubenswrapper[26425]: I0217 15:48:40.124976 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdjrx\" (UniqueName: \"kubernetes.io/projected/0704cefc-181d-40ab-ba9c-a204b5f85727-kube-api-access-jdjrx\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:40.234013 master-0 kubenswrapper[26425]: I0217 15:48:40.233957 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b57c6d9b6-frt4v"
Feb 17 15:48:40.434619 master-0 kubenswrapper[26425]: I0217 15:48:40.431506 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-config-data\") pod \"595c3aef-36e6-4a07-ad78-32535353193d\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") "
Feb 17 15:48:40.434619 master-0 kubenswrapper[26425]: I0217 15:48:40.431800 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/595c3aef-36e6-4a07-ad78-32535353193d-logs\") pod \"595c3aef-36e6-4a07-ad78-32535353193d\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") "
Feb 17 15:48:40.434619 master-0 kubenswrapper[26425]: I0217 15:48:40.431887 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-scripts\") pod \"595c3aef-36e6-4a07-ad78-32535353193d\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") "
Feb 17 15:48:40.434619 master-0 kubenswrapper[26425]: I0217 15:48:40.431923 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-public-tls-certs\") pod \"595c3aef-36e6-4a07-ad78-32535353193d\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") "
Feb 17 15:48:40.434619 master-0 kubenswrapper[26425]: I0217 15:48:40.432015 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7tgp\" (UniqueName: \"kubernetes.io/projected/595c3aef-36e6-4a07-ad78-32535353193d-kube-api-access-q7tgp\") pod \"595c3aef-36e6-4a07-ad78-32535353193d\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") "
Feb 17 15:48:40.434619 master-0 kubenswrapper[26425]: I0217 15:48:40.432156 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-combined-ca-bundle\") pod \"595c3aef-36e6-4a07-ad78-32535353193d\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") "
Feb 17 15:48:40.434619 master-0 kubenswrapper[26425]: I0217 15:48:40.432569 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/595c3aef-36e6-4a07-ad78-32535353193d-logs" (OuterVolumeSpecName: "logs") pod "595c3aef-36e6-4a07-ad78-32535353193d" (UID: "595c3aef-36e6-4a07-ad78-32535353193d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:48:40.434619 master-0 kubenswrapper[26425]: I0217 15:48:40.432842 26425 generic.go:334] "Generic (PLEG): container finished" podID="1c26c340-473b-49c9-a62f-1915fac7b655" containerID="1bce57f6e43bd22c0b2dd7dbc6ec44bf1532d823a33b1a146c3e4f86cace202b" exitCode=0
Feb 17 15:48:40.434619 master-0 kubenswrapper[26425]: I0217 15:48:40.432956 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c26c340-473b-49c9-a62f-1915fac7b655","Type":"ContainerDied","Data":"1bce57f6e43bd22c0b2dd7dbc6ec44bf1532d823a33b1a146c3e4f86cace202b"}
Feb 17 15:48:40.434619 master-0 kubenswrapper[26425]: I0217 15:48:40.433022 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-internal-tls-certs\") pod \"595c3aef-36e6-4a07-ad78-32535353193d\" (UID: \"595c3aef-36e6-4a07-ad78-32535353193d\") "
Feb 17 15:48:40.437351 master-0 kubenswrapper[26425]: I0217 15:48:40.435796 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595c3aef-36e6-4a07-ad78-32535353193d-kube-api-access-q7tgp" (OuterVolumeSpecName: "kube-api-access-q7tgp") pod "595c3aef-36e6-4a07-ad78-32535353193d" (UID: "595c3aef-36e6-4a07-ad78-32535353193d"). InnerVolumeSpecName "kube-api-access-q7tgp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:48:40.437351 master-0 kubenswrapper[26425]: I0217 15:48:40.436196 26425 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/595c3aef-36e6-4a07-ad78-32535353193d-logs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:40.437351 master-0 kubenswrapper[26425]: I0217 15:48:40.436213 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7tgp\" (UniqueName: \"kubernetes.io/projected/595c3aef-36e6-4a07-ad78-32535353193d-kube-api-access-q7tgp\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:40.437351 master-0 kubenswrapper[26425]: I0217 15:48:40.436466 26425 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 15:48:40.439603 master-0 kubenswrapper[26425]: I0217 15:48:40.439423 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-scripts" (OuterVolumeSpecName: "scripts") pod "595c3aef-36e6-4a07-ad78-32535353193d" (UID: "595c3aef-36e6-4a07-ad78-32535353193d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:40.439796 master-0 kubenswrapper[26425]: I0217 15:48:40.439760 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-x86bq"
Feb 17 15:48:40.440298 master-0 kubenswrapper[26425]: I0217 15:48:40.440236 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-x86bq" event={"ID":"0704cefc-181d-40ab-ba9c-a204b5f85727","Type":"ContainerDied","Data":"bbf3236939b662b78ad80297b4b957484d81e9f6d534717ffa79c9b9eb94cf56"}
Feb 17 15:48:40.440298 master-0 kubenswrapper[26425]: I0217 15:48:40.440280 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbf3236939b662b78ad80297b4b957484d81e9f6d534717ffa79c9b9eb94cf56"
Feb 17 15:48:40.447684 master-0 kubenswrapper[26425]: I0217 15:48:40.447628 26425 generic.go:334] "Generic (PLEG): container finished" podID="595c3aef-36e6-4a07-ad78-32535353193d" containerID="61340d49a27c2f36eecd9eae5e4989b978dd7bd2c87a59a336928f48c792664a" exitCode=0
Feb 17 15:48:40.447684 master-0 kubenswrapper[26425]: I0217 15:48:40.447659 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b57c6d9b6-frt4v" event={"ID":"595c3aef-36e6-4a07-ad78-32535353193d","Type":"ContainerDied","Data":"61340d49a27c2f36eecd9eae5e4989b978dd7bd2c87a59a336928f48c792664a"}
Feb 17 15:48:40.447817 master-0 kubenswrapper[26425]: I0217 15:48:40.447725 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b57c6d9b6-frt4v" event={"ID":"595c3aef-36e6-4a07-ad78-32535353193d","Type":"ContainerDied","Data":"136586aeefac49fc012fd9245bf9ef96c0bb04b87aa7754648c745beaf013e25"}
Feb 17 15:48:40.447817 master-0 kubenswrapper[26425]: I0217 15:48:40.447749 26425 scope.go:117] "RemoveContainer" containerID="61340d49a27c2f36eecd9eae5e4989b978dd7bd2c87a59a336928f48c792664a"
Feb 17 15:48:40.448015 master-0 kubenswrapper[26425]: I0217 15:48:40.447982 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b57c6d9b6-frt4v"
Feb 17 15:48:40.502828 master-0 kubenswrapper[26425]: I0217 15:48:40.502765 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-config-data" (OuterVolumeSpecName: "config-data") pod "595c3aef-36e6-4a07-ad78-32535353193d" (UID: "595c3aef-36e6-4a07-ad78-32535353193d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:40.503651 master-0 kubenswrapper[26425]: I0217 15:48:40.503610 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "595c3aef-36e6-4a07-ad78-32535353193d" (UID: "595c3aef-36e6-4a07-ad78-32535353193d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:40.538449 master-0 kubenswrapper[26425]: I0217 15:48:40.538394 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-scripts\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:40.538449 master-0 kubenswrapper[26425]: I0217 15:48:40.538441 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:40.538630 master-0 kubenswrapper[26425]: I0217 15:48:40.538480 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-config-data\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:40.559906 master-0 kubenswrapper[26425]: I0217 15:48:40.559847 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "595c3aef-36e6-4a07-ad78-32535353193d" (UID: "595c3aef-36e6-4a07-ad78-32535353193d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:40.564434 master-0 kubenswrapper[26425]: I0217 15:48:40.564339 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "595c3aef-36e6-4a07-ad78-32535353193d" (UID: "595c3aef-36e6-4a07-ad78-32535353193d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:40.639670 master-0 kubenswrapper[26425]: I0217 15:48:40.639604 26425 scope.go:117] "RemoveContainer" containerID="2ca4e33b1d685eb098e1a7e190a5cd7938eca0fe6ca209cefd81e17ba07dae26"
Feb 17 15:48:40.641162 master-0 kubenswrapper[26425]: I0217 15:48:40.641109 26425 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:40.641162 master-0 kubenswrapper[26425]: I0217 15:48:40.641153 26425 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/595c3aef-36e6-4a07-ad78-32535353193d-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:40.661045 master-0 kubenswrapper[26425]: I0217 15:48:40.660985 26425 scope.go:117] "RemoveContainer" containerID="61340d49a27c2f36eecd9eae5e4989b978dd7bd2c87a59a336928f48c792664a"
Feb 17 15:48:40.661511 master-0 kubenswrapper[26425]: E0217 15:48:40.661428 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61340d49a27c2f36eecd9eae5e4989b978dd7bd2c87a59a336928f48c792664a\": container with ID starting with 61340d49a27c2f36eecd9eae5e4989b978dd7bd2c87a59a336928f48c792664a not found: ID does not exist" containerID="61340d49a27c2f36eecd9eae5e4989b978dd7bd2c87a59a336928f48c792664a"
Feb 17 15:48:40.661511 master-0 kubenswrapper[26425]: I0217 15:48:40.661495 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61340d49a27c2f36eecd9eae5e4989b978dd7bd2c87a59a336928f48c792664a"} err="failed to get container status \"61340d49a27c2f36eecd9eae5e4989b978dd7bd2c87a59a336928f48c792664a\": rpc error: code = NotFound desc = could not find container \"61340d49a27c2f36eecd9eae5e4989b978dd7bd2c87a59a336928f48c792664a\": container with ID starting with 61340d49a27c2f36eecd9eae5e4989b978dd7bd2c87a59a336928f48c792664a not found: ID does not exist"
Feb 17 15:48:40.661687 master-0 kubenswrapper[26425]: I0217 15:48:40.661525 26425 scope.go:117] "RemoveContainer" containerID="2ca4e33b1d685eb098e1a7e190a5cd7938eca0fe6ca209cefd81e17ba07dae26"
Feb 17 15:48:40.662056 master-0 kubenswrapper[26425]: E0217 15:48:40.661941 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ca4e33b1d685eb098e1a7e190a5cd7938eca0fe6ca209cefd81e17ba07dae26\": container with ID starting with 2ca4e33b1d685eb098e1a7e190a5cd7938eca0fe6ca209cefd81e17ba07dae26 not found: ID does not exist" containerID="2ca4e33b1d685eb098e1a7e190a5cd7938eca0fe6ca209cefd81e17ba07dae26"
Feb 17 15:48:40.662056 master-0 kubenswrapper[26425]: I0217 15:48:40.661980 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ca4e33b1d685eb098e1a7e190a5cd7938eca0fe6ca209cefd81e17ba07dae26"} err="failed to get container status \"2ca4e33b1d685eb098e1a7e190a5cd7938eca0fe6ca209cefd81e17ba07dae26\": rpc error: code = NotFound desc = could not find container \"2ca4e33b1d685eb098e1a7e190a5cd7938eca0fe6ca209cefd81e17ba07dae26\": container with ID starting with 2ca4e33b1d685eb098e1a7e190a5cd7938eca0fe6ca209cefd81e17ba07dae26 not found: ID does not exist"
Feb 17 15:48:40.889168 master-0 kubenswrapper[26425]: I0217 15:48:40.889100 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5b57c6d9b6-frt4v"]
Feb 17 15:48:40.901954 master-0 kubenswrapper[26425]: I0217 15:48:40.901693 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5b57c6d9b6-frt4v"]
Feb 17 15:48:41.462983 master-0 kubenswrapper[26425]: I0217 15:48:41.462558 26425 generic.go:334] "Generic (PLEG): container finished" podID="46e17198-94a2-469f-8d1c-34138a1e2420" containerID="f9143206c8ce19f8473ed081cb0ee830a0bc1a6768a30c3ffb31795ad772d91f" exitCode=0
Feb 17 15:48:41.462983 master-0 kubenswrapper[26425]: I0217 15:48:41.462617 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-external-api-0" event={"ID":"46e17198-94a2-469f-8d1c-34138a1e2420","Type":"ContainerDied","Data":"f9143206c8ce19f8473ed081cb0ee830a0bc1a6768a30c3ffb31795ad772d91f"}
Feb 17 15:48:41.923484 master-0 kubenswrapper[26425]: I0217 15:48:41.922386 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:41.980225 master-0 kubenswrapper[26425]: I0217 15:48:41.977678 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-config-data\") pod \"46e17198-94a2-469f-8d1c-34138a1e2420\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") "
Feb 17 15:48:41.980225 master-0 kubenswrapper[26425]: I0217 15:48:41.977749 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-public-tls-certs\") pod \"46e17198-94a2-469f-8d1c-34138a1e2420\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") "
Feb 17 15:48:41.980225 master-0 kubenswrapper[26425]: I0217 15:48:41.977818 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svd7c\" (UniqueName: \"kubernetes.io/projected/46e17198-94a2-469f-8d1c-34138a1e2420-kube-api-access-svd7c\") pod \"46e17198-94a2-469f-8d1c-34138a1e2420\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") "
Feb 17 15:48:41.980225 master-0 kubenswrapper[26425]: I0217 15:48:41.977896 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-combined-ca-bundle\") pod \"46e17198-94a2-469f-8d1c-34138a1e2420\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") "
Feb 17 15:48:41.980225 master-0 kubenswrapper[26425]: I0217 15:48:41.977992 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-scripts\") pod \"46e17198-94a2-469f-8d1c-34138a1e2420\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") "
Feb 17 15:48:41.980225 master-0 kubenswrapper[26425]: I0217 15:48:41.978119 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") pod \"46e17198-94a2-469f-8d1c-34138a1e2420\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") "
Feb 17 15:48:41.980225 master-0 kubenswrapper[26425]: I0217 15:48:41.978187 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46e17198-94a2-469f-8d1c-34138a1e2420-httpd-run\") pod \"46e17198-94a2-469f-8d1c-34138a1e2420\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") "
Feb 17 15:48:41.980225 master-0 kubenswrapper[26425]: I0217 15:48:41.978213 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46e17198-94a2-469f-8d1c-34138a1e2420-logs\") pod \"46e17198-94a2-469f-8d1c-34138a1e2420\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") "
Feb 17 15:48:41.980225 master-0 kubenswrapper[26425]: I0217 15:48:41.979579 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e17198-94a2-469f-8d1c-34138a1e2420-logs" (OuterVolumeSpecName: "logs") pod "46e17198-94a2-469f-8d1c-34138a1e2420" (UID: "46e17198-94a2-469f-8d1c-34138a1e2420"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:48:41.981058 master-0 kubenswrapper[26425]: I0217 15:48:41.981013 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e17198-94a2-469f-8d1c-34138a1e2420-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "46e17198-94a2-469f-8d1c-34138a1e2420" (UID: "46e17198-94a2-469f-8d1c-34138a1e2420"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:48:42.003031 master-0 kubenswrapper[26425]: I0217 15:48:42.002950 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-scripts" (OuterVolumeSpecName: "scripts") pod "46e17198-94a2-469f-8d1c-34138a1e2420" (UID: "46e17198-94a2-469f-8d1c-34138a1e2420"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:42.003721 master-0 kubenswrapper[26425]: I0217 15:48:42.003645 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e17198-94a2-469f-8d1c-34138a1e2420-kube-api-access-svd7c" (OuterVolumeSpecName: "kube-api-access-svd7c") pod "46e17198-94a2-469f-8d1c-34138a1e2420" (UID: "46e17198-94a2-469f-8d1c-34138a1e2420"). InnerVolumeSpecName "kube-api-access-svd7c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:48:42.016973 master-0 kubenswrapper[26425]: I0217 15:48:42.016907 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46e17198-94a2-469f-8d1c-34138a1e2420" (UID: "46e17198-94a2-469f-8d1c-34138a1e2420"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:42.063512 master-0 kubenswrapper[26425]: I0217 15:48:42.062722 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "46e17198-94a2-469f-8d1c-34138a1e2420" (UID: "46e17198-94a2-469f-8d1c-34138a1e2420"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:42.080484 master-0 kubenswrapper[26425]: I0217 15:48:42.080417 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-config-data" (OuterVolumeSpecName: "config-data") pod "46e17198-94a2-469f-8d1c-34138a1e2420" (UID: "46e17198-94a2-469f-8d1c-34138a1e2420"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:42.081246 master-0 kubenswrapper[26425]: I0217 15:48:42.081179 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-config-data\") pod \"46e17198-94a2-469f-8d1c-34138a1e2420\" (UID: \"46e17198-94a2-469f-8d1c-34138a1e2420\") "
Feb 17 15:48:42.081854 master-0 kubenswrapper[26425]: W0217 15:48:42.081316 26425 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/46e17198-94a2-469f-8d1c-34138a1e2420/volumes/kubernetes.io~secret/config-data
Feb 17 15:48:42.081854 master-0 kubenswrapper[26425]: I0217 15:48:42.081350 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-config-data" (OuterVolumeSpecName: "config-data") pod "46e17198-94a2-469f-8d1c-34138a1e2420" (UID: "46e17198-94a2-469f-8d1c-34138a1e2420"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:42.091124 master-0 kubenswrapper[26425]: I0217 15:48:42.088577 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-scripts\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:42.091124 master-0 kubenswrapper[26425]: I0217 15:48:42.088630 26425 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46e17198-94a2-469f-8d1c-34138a1e2420-httpd-run\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:42.091124 master-0 kubenswrapper[26425]: I0217 15:48:42.088644 26425 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46e17198-94a2-469f-8d1c-34138a1e2420-logs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:42.091124 master-0 kubenswrapper[26425]: I0217 15:48:42.088658 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-config-data\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:42.091124 master-0 kubenswrapper[26425]: I0217 15:48:42.088668 26425 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:42.091124 master-0 kubenswrapper[26425]: I0217 15:48:42.088678 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svd7c\" (UniqueName: \"kubernetes.io/projected/46e17198-94a2-469f-8d1c-34138a1e2420-kube-api-access-svd7c\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:42.091124 master-0 kubenswrapper[26425]: I0217 15:48:42.088687 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e17198-94a2-469f-8d1c-34138a1e2420-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:42.102664 master-0 kubenswrapper[26425]: I0217 15:48:42.101724 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c" (OuterVolumeSpecName: "glance") pod "46e17198-94a2-469f-8d1c-34138a1e2420" (UID: "46e17198-94a2-469f-8d1c-34138a1e2420"). InnerVolumeSpecName "pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 17 15:48:42.172289 master-0 kubenswrapper[26425]: I0217 15:48:42.172229 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-88dd96889-vwkh6"
Feb 17 15:48:42.190390 master-0 kubenswrapper[26425]: I0217 15:48:42.190349 26425 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") on node \"master-0\" "
Feb 17 15:48:42.237774 master-0 kubenswrapper[26425]: I0217 15:48:42.237586 26425 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 17 15:48:42.238560 master-0 kubenswrapper[26425]: I0217 15:48:42.238475 26425 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884" (UniqueName: "kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c") on node "master-0"
Feb 17 15:48:42.297502 master-0 kubenswrapper[26425]: I0217 15:48:42.293721 26425 reconciler_common.go:293] "Volume detached for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:42.377345 master-0 kubenswrapper[26425]: I0217 15:48:42.377289 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7b9c2-default-internal-api-0"]
Feb 17 15:48:42.377728 master-0 kubenswrapper[26425]: I0217 15:48:42.377676 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-7b9c2-default-internal-api-0" podUID="ba50f35d-07b5-4db9-bc46-3ffeb03f3902" containerName="glance-log" containerID="cri-o://15801b47cfb7a0d53554af977658ddff8f9471db68d684526a5ea6cd4d82e176" gracePeriod=30
Feb 17 15:48:42.378042 master-0 kubenswrapper[26425]: I0217 15:48:42.377968 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-7b9c2-default-internal-api-0" podUID="ba50f35d-07b5-4db9-bc46-3ffeb03f3902" containerName="glance-httpd" containerID="cri-o://3b3ebec1c2e6e4204d4e1cecb8899d580c3baf1dd7f05ccef4f4a4a27dd8fd3d" gracePeriod=30
Feb 17 15:48:42.386054 master-0 kubenswrapper[26425]: I0217 15:48:42.385606 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-7b9c2-default-internal-api-0" podUID="ba50f35d-07b5-4db9-bc46-3ffeb03f3902" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.218:9292/healthcheck\": EOF"
Feb 17 15:48:42.386240 master-0 kubenswrapper[26425]: I0217 15:48:42.385819 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-7b9c2-default-internal-api-0" podUID="ba50f35d-07b5-4db9-bc46-3ffeb03f3902" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.218:9292/healthcheck\": EOF"
Feb 17 15:48:42.386240 master-0 kubenswrapper[26425]: I0217 15:48:42.386095 26425 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-7b9c2-default-internal-api-0" podUID="ba50f35d-07b5-4db9-bc46-3ffeb03f3902" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.218:9292/healthcheck\": EOF"
Feb 17 15:48:42.413372 master-0 kubenswrapper[26425]: I0217 15:48:42.413319 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="595c3aef-36e6-4a07-ad78-32535353193d" path="/var/lib/kubelet/pods/595c3aef-36e6-4a07-ad78-32535353193d/volumes"
Feb 17 15:48:42.476582 master-0 kubenswrapper[26425]: I0217 15:48:42.476516 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:42.477157 master-0 kubenswrapper[26425]: I0217 15:48:42.476521 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-external-api-0" event={"ID":"46e17198-94a2-469f-8d1c-34138a1e2420","Type":"ContainerDied","Data":"04d6e64ccbb04c93923a9a322da93072c46f1453e0f2ea638d620379af884415"}
Feb 17 15:48:42.477157 master-0 kubenswrapper[26425]: I0217 15:48:42.476778 26425 scope.go:117] "RemoveContainer" containerID="f9143206c8ce19f8473ed081cb0ee830a0bc1a6768a30c3ffb31795ad772d91f"
Feb 17 15:48:42.482226 master-0 kubenswrapper[26425]: I0217 15:48:42.482190 26425 generic.go:334] "Generic (PLEG): container finished" podID="3d5a2ac6-930f-43d0-873f-3bd2cc9df572" containerID="fdb8892881461652575531c8f135056b00711a9f7fe6e90bd40559e27cc55139" exitCode=0
Feb 17 15:48:42.482367 master-0 kubenswrapper[26425]: I0217 15:48:42.482231 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5cd8d-bjbtl" event={"ID":"3d5a2ac6-930f-43d0-873f-3bd2cc9df572","Type":"ContainerDied","Data":"fdb8892881461652575531c8f135056b00711a9f7fe6e90bd40559e27cc55139"}
Feb 17 15:48:42.503932 master-0 kubenswrapper[26425]: I0217 15:48:42.503880 26425 scope.go:117] "RemoveContainer" containerID="f27f0c55b8344662ca7b8b23e847884c8141558a6fca1bc3149e931153c0e3fd"
Feb 17 15:48:43.496645 master-0 kubenswrapper[26425]: I0217 15:48:43.496575 26425 generic.go:334] "Generic (PLEG): container finished" podID="ba50f35d-07b5-4db9-bc46-3ffeb03f3902" containerID="15801b47cfb7a0d53554af977658ddff8f9471db68d684526a5ea6cd4d82e176" exitCode=143
Feb 17 15:48:43.497326 master-0 kubenswrapper[26425]: I0217 15:48:43.496646 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-internal-api-0" event={"ID":"ba50f35d-07b5-4db9-bc46-3ffeb03f3902","Type":"ContainerDied","Data":"15801b47cfb7a0d53554af977658ddff8f9471db68d684526a5ea6cd4d82e176"}
Feb 17 15:48:44.726100 master-0 kubenswrapper[26425]: I0217 15:48:44.725570 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7b9c2-default-external-api-0"]
Feb 17 15:48:44.975347 master-0 kubenswrapper[26425]: I0217 15:48:44.974783 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-67bfcfbcf8-m9tkq"
Feb 17 15:48:44.977599 master-0 kubenswrapper[26425]: I0217 15:48:44.977093 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-67bfcfbcf8-m9tkq"
Feb 17 15:48:45.559718 master-0 kubenswrapper[26425]: I0217 15:48:45.559642 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7b9c2-default-external-api-0"]
Feb 17 15:48:46.433204 master-0 kubenswrapper[26425]: I0217 15:48:46.433148 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e17198-94a2-469f-8d1c-34138a1e2420" path="/var/lib/kubelet/pods/46e17198-94a2-469f-8d1c-34138a1e2420/volumes"
Feb 17 15:48:46.448053 master-0 kubenswrapper[26425]: I0217 15:48:46.447921 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c5cd8d-bjbtl"
Feb 17 15:48:46.511978 master-0 kubenswrapper[26425]: I0217 15:48:46.511902 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-combined-ca-bundle\") pod \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") "
Feb 17 15:48:46.512267 master-0 kubenswrapper[26425]: I0217 15:48:46.512068 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t65hq\" (UniqueName: \"kubernetes.io/projected/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-kube-api-access-t65hq\") pod \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") "
Feb 17 15:48:46.512267 master-0 kubenswrapper[26425]: I0217 15:48:46.512207 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-ovndb-tls-certs\") pod \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") "
Feb 17 15:48:46.512485 master-0 kubenswrapper[26425]: I0217 15:48:46.512293 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-httpd-config\") pod \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") "
Feb 17 15:48:46.512485 master-0 kubenswrapper[26425]: I0217 15:48:46.512427 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-config\") pod \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\" (UID: \"3d5a2ac6-930f-43d0-873f-3bd2cc9df572\") "
Feb 17 15:48:46.516162 master-0 kubenswrapper[26425]: I0217 15:48:46.515843 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-kube-api-access-t65hq" (OuterVolumeSpecName: "kube-api-access-t65hq") pod "3d5a2ac6-930f-43d0-873f-3bd2cc9df572" (UID: "3d5a2ac6-930f-43d0-873f-3bd2cc9df572"). InnerVolumeSpecName "kube-api-access-t65hq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:48:46.518385 master-0 kubenswrapper[26425]: I0217 15:48:46.518293 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3d5a2ac6-930f-43d0-873f-3bd2cc9df572" (UID: "3d5a2ac6-930f-43d0-873f-3bd2cc9df572"). InnerVolumeSpecName "httpd-config".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:46.539342 master-0 kubenswrapper[26425]: I0217 15:48:46.539248 26425 generic.go:334] "Generic (PLEG): container finished" podID="ba50f35d-07b5-4db9-bc46-3ffeb03f3902" containerID="3b3ebec1c2e6e4204d4e1cecb8899d580c3baf1dd7f05ccef4f4a4a27dd8fd3d" exitCode=0
Feb 17 15:48:46.539342 master-0 kubenswrapper[26425]: I0217 15:48:46.539341 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-internal-api-0" event={"ID":"ba50f35d-07b5-4db9-bc46-3ffeb03f3902","Type":"ContainerDied","Data":"3b3ebec1c2e6e4204d4e1cecb8899d580c3baf1dd7f05ccef4f4a4a27dd8fd3d"}
Feb 17 15:48:46.542310 master-0 kubenswrapper[26425]: I0217 15:48:46.542043 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5cd8d-bjbtl" event={"ID":"3d5a2ac6-930f-43d0-873f-3bd2cc9df572","Type":"ContainerDied","Data":"08828ce8fed9df20e0faccc333455c987dc3296b24dd920b10dd53ed903a736b"}
Feb 17 15:48:46.542310 master-0 kubenswrapper[26425]: I0217 15:48:46.542118 26425 scope.go:117] "RemoveContainer" containerID="e5d754197dae0faa508e306cc685012aea92994c0e4e9de5c979da8062894785"
Feb 17 15:48:46.542310 master-0 kubenswrapper[26425]: I0217 15:48:46.542076 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c5cd8d-bjbtl"
Feb 17 15:48:46.580133 master-0 kubenswrapper[26425]: I0217 15:48:46.579991 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d5a2ac6-930f-43d0-873f-3bd2cc9df572" (UID: "3d5a2ac6-930f-43d0-873f-3bd2cc9df572"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:46.605406 master-0 kubenswrapper[26425]: I0217 15:48:46.605336 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-config" (OuterVolumeSpecName: "config") pod "3d5a2ac6-930f-43d0-873f-3bd2cc9df572" (UID: "3d5a2ac6-930f-43d0-873f-3bd2cc9df572"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:46.615723 master-0 kubenswrapper[26425]: I0217 15:48:46.615651 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:46.615723 master-0 kubenswrapper[26425]: I0217 15:48:46.615703 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t65hq\" (UniqueName: \"kubernetes.io/projected/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-kube-api-access-t65hq\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:46.615723 master-0 kubenswrapper[26425]: I0217 15:48:46.615715 26425 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-httpd-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:46.615723 master-0 kubenswrapper[26425]: I0217 15:48:46.615724 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:46.636442 master-0 kubenswrapper[26425]: I0217 15:48:46.636345 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3d5a2ac6-930f-43d0-873f-3bd2cc9df572" (UID: "3d5a2ac6-930f-43d0-873f-3bd2cc9df572"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:48:46.696802 master-0 kubenswrapper[26425]: I0217 15:48:46.696641 26425 scope.go:117] "RemoveContainer" containerID="fdb8892881461652575531c8f135056b00711a9f7fe6e90bd40559e27cc55139"
Feb 17 15:48:46.717166 master-0 kubenswrapper[26425]: I0217 15:48:46.717102 26425 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d5a2ac6-930f-43d0-873f-3bd2cc9df572-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:48:46.752441 master-0 kubenswrapper[26425]: I0217 15:48:46.752365 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7b9c2-default-external-api-0"]
Feb 17 15:48:46.753516 master-0 kubenswrapper[26425]: E0217 15:48:46.753336 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e" containerName="mariadb-account-create-update"
Feb 17 15:48:46.753516 master-0 kubenswrapper[26425]: I0217 15:48:46.753368 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e" containerName="mariadb-account-create-update"
Feb 17 15:48:46.753516 master-0 kubenswrapper[26425]: E0217 15:48:46.753495 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f15465-91a0-44b7-813b-7b3d36d81bd5" containerName="mariadb-account-create-update"
Feb 17 15:48:46.753516 master-0 kubenswrapper[26425]: I0217 15:48:46.753506 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f15465-91a0-44b7-813b-7b3d36d81bd5" containerName="mariadb-account-create-update"
Feb 17 15:48:46.753767 master-0 kubenswrapper[26425]: E0217 15:48:46.753536 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5a2ac6-930f-43d0-873f-3bd2cc9df572" containerName="neutron-api"
Feb 17 15:48:46.753767 master-0 kubenswrapper[26425]: I0217 15:48:46.753543 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5a2ac6-930f-43d0-873f-3bd2cc9df572" containerName="neutron-api"
Feb 17 15:48:46.753767 master-0 kubenswrapper[26425]: E0217 15:48:46.753585 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5a2ac6-930f-43d0-873f-3bd2cc9df572" containerName="neutron-httpd"
Feb 17 15:48:46.753767 master-0 kubenswrapper[26425]: I0217 15:48:46.753592 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5a2ac6-930f-43d0-873f-3bd2cc9df572" containerName="neutron-httpd"
Feb 17 15:48:46.753767 master-0 kubenswrapper[26425]: E0217 15:48:46.753623 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0704cefc-181d-40ab-ba9c-a204b5f85727" containerName="ironic-inspector-db-sync"
Feb 17 15:48:46.753767 master-0 kubenswrapper[26425]: I0217 15:48:46.753630 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="0704cefc-181d-40ab-ba9c-a204b5f85727" containerName="ironic-inspector-db-sync"
Feb 17 15:48:46.753767 master-0 kubenswrapper[26425]: E0217 15:48:46.753658 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fde6099-c168-43b1-acbf-cbbdc3ca2435" containerName="mariadb-database-create"
Feb 17 15:48:46.753767 master-0 kubenswrapper[26425]: I0217 15:48:46.753664 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fde6099-c168-43b1-acbf-cbbdc3ca2435" containerName="mariadb-database-create"
Feb 17 15:48:46.753767 master-0 kubenswrapper[26425]: E0217 15:48:46.753684 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595c3aef-36e6-4a07-ad78-32535353193d" containerName="placement-log"
Feb 17 15:48:46.753767 master-0 kubenswrapper[26425]: I0217 15:48:46.753695 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="595c3aef-36e6-4a07-ad78-32535353193d" containerName="placement-log"
Feb 17 15:48:46.753767 master-0 kubenswrapper[26425]: E0217 15:48:46.753721 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8"
containerName="mariadb-account-create-update"
Feb 17 15:48:46.753767 master-0 kubenswrapper[26425]: I0217 15:48:46.753727 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8" containerName="mariadb-account-create-update"
Feb 17 15:48:46.753767 master-0 kubenswrapper[26425]: E0217 15:48:46.753741 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44bf429-8fa4-486c-ab29-eea74da59e3d" containerName="mariadb-database-create"
Feb 17 15:48:46.753767 master-0 kubenswrapper[26425]: I0217 15:48:46.753748 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44bf429-8fa4-486c-ab29-eea74da59e3d" containerName="mariadb-database-create"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: E0217 15:48:46.753804 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595c3aef-36e6-4a07-ad78-32535353193d" containerName="placement-api"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.753812 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="595c3aef-36e6-4a07-ad78-32535353193d" containerName="placement-api"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: E0217 15:48:46.753839 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0531c200-ea9b-4ed4-8e7a-ef60e88b8447" containerName="mariadb-database-create"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.753845 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="0531c200-ea9b-4ed4-8e7a-ef60e88b8447" containerName="mariadb-database-create"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: E0217 15:48:46.753867 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e17198-94a2-469f-8d1c-34138a1e2420" containerName="glance-log"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.753875 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e17198-94a2-469f-8d1c-34138a1e2420" containerName="glance-log"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: E0217 15:48:46.753899 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e17198-94a2-469f-8d1c-34138a1e2420" containerName="glance-httpd"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.753905 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e17198-94a2-469f-8d1c-34138a1e2420" containerName="glance-httpd"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.754170 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e17198-94a2-469f-8d1c-34138a1e2420" containerName="glance-httpd"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.754188 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="0531c200-ea9b-4ed4-8e7a-ef60e88b8447" containerName="mariadb-database-create"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.754206 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="d44bf429-8fa4-486c-ab29-eea74da59e3d" containerName="mariadb-database-create"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.754224 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e" containerName="mariadb-account-create-update"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.754234 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d5a2ac6-930f-43d0-873f-3bd2cc9df572" containerName="neutron-api"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.754245 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8" containerName="mariadb-account-create-update"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.754255 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="60f15465-91a0-44b7-813b-7b3d36d81bd5" containerName="mariadb-account-create-update"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.754273 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="595c3aef-36e6-4a07-ad78-32535353193d" containerName="placement-log"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.754286 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="0704cefc-181d-40ab-ba9c-a204b5f85727" containerName="ironic-inspector-db-sync"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.754300 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d5a2ac6-930f-43d0-873f-3bd2cc9df572" containerName="neutron-httpd"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.754319 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e17198-94a2-469f-8d1c-34138a1e2420" containerName="glance-log"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.754335 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="595c3aef-36e6-4a07-ad78-32535353193d" containerName="placement-api"
Feb 17 15:48:46.754360 master-0 kubenswrapper[26425]: I0217 15:48:46.754349 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fde6099-c168-43b1-acbf-cbbdc3ca2435" containerName="mariadb-database-create"
Feb 17 15:48:46.757620 master-0 kubenswrapper[26425]: I0217 15:48:46.755843 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:46.759448 master-0 kubenswrapper[26425]: I0217 15:48:46.759372 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-7b9c2-default-external-config-data"
Feb 17 15:48:46.759448 master-0 kubenswrapper[26425]: I0217 15:48:46.759418 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 17 15:48:47.008775 master-0 kubenswrapper[26425]: I0217 15:48:47.008708 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7b9c2-default-external-api-0"]
Feb 17 15:48:47.246287 master-0 kubenswrapper[26425]: I0217 15:48:47.245193 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf2k5\" (UniqueName: \"kubernetes.io/projected/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-kube-api-access-zf2k5\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.246287 master-0 kubenswrapper[26425]: I0217 15:48:47.245310 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-config-data\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.246287 master-0 kubenswrapper[26425]: I0217 15:48:47.245350 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-scripts\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.246287 master-0
kubenswrapper[26425]: I0217 15:48:47.245654 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.246287 master-0 kubenswrapper[26425]: I0217 15:48:47.245683 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-public-tls-certs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.246287 master-0 kubenswrapper[26425]: I0217 15:48:47.245752 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-httpd-run\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.246287 master-0 kubenswrapper[26425]: I0217 15:48:47.245770 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-combined-ca-bundle\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.246287 master-0 kubenswrapper[26425]: I0217 15:48:47.245882 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-logs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.304527 master-0 kubenswrapper[26425]: I0217 15:48:47.295529 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c5cd8d-bjbtl"]
Feb 17 15:48:47.356232 master-0 kubenswrapper[26425]: I0217 15:48:47.348265 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf2k5\" (UniqueName: \"kubernetes.io/projected/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-kube-api-access-zf2k5\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.356232 master-0 kubenswrapper[26425]: I0217 15:48:47.348316 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-config-data\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.356232 master-0 kubenswrapper[26425]: I0217 15:48:47.348339 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-scripts\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.356232 master-0 kubenswrapper[26425]: I0217 15:48:47.348387 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.356232 master-0 kubenswrapper[26425]: I0217 15:48:47.348408 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-public-tls-certs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.356232 master-0 kubenswrapper[26425]: I0217 15:48:47.348439 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-httpd-run\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.356232 master-0 kubenswrapper[26425]: I0217 15:48:47.348469 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-combined-ca-bundle\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.356232 master-0 kubenswrapper[26425]: I0217 15:48:47.348928 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-logs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.356232 master-0 kubenswrapper[26425]: I0217 15:48:47.354137 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-logs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.358858 master-0 kubenswrapper[26425]: I0217 15:48:47.358832 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-httpd-run\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.372482 master-0 kubenswrapper[26425]: I0217 15:48:47.365781 26425 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 15:48:47.372482 master-0 kubenswrapper[26425]: I0217 15:48:47.365823 26425 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/bb1a31da58028daaa8c5693dab9c5e672404499c19a6cf0daa664dd723747ec1/globalmount\"" pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.383412 master-0 kubenswrapper[26425]: I0217 15:48:47.375447 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-public-tls-certs\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.383412 master-0 kubenswrapper[26425]: I0217 15:48:47.378012 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-config-data\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") "
pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.383412 master-0 kubenswrapper[26425]: I0217 15:48:47.383398 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-scripts\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.384575 master-0 kubenswrapper[26425]: I0217 15:48:47.384541 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-combined-ca-bundle\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.400649 master-0 kubenswrapper[26425]: I0217 15:48:47.387680 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c5cd8d-bjbtl"]
Feb 17 15:48:47.417481 master-0 kubenswrapper[26425]: I0217 15:48:47.415228 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf2k5\" (UniqueName: \"kubernetes.io/projected/7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2-kube-api-access-zf2k5\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0"
Feb 17 15:48:47.417675 master-0 kubenswrapper[26425]: I0217 15:48:47.417533 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"]
Feb 17 15:48:47.484397 master-0 kubenswrapper[26425]: I0217 15:48:47.460676 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"
Feb 17 15:48:47.569747 master-0 kubenswrapper[26425]: I0217 15:48:47.563560 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"]
Feb 17 15:48:47.596758 master-0 kubenswrapper[26425]: I0217 15:48:47.596649 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Feb 17 15:48:47.603853 master-0 kubenswrapper[26425]: I0217 15:48:47.602914 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"]
Feb 17 15:48:47.621298 master-0 kubenswrapper[26425]: I0217 15:48:47.621244 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport"
Feb 17 15:48:47.621551 master-0 kubenswrapper[26425]: I0217 15:48:47.621400 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Feb 17 15:48:47.623864 master-0 kubenswrapper[26425]: I0217 15:48:47.623828 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Feb 17 15:48:47.644082 master-0 kubenswrapper[26425]: I0217 15:48:47.643433 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 17 15:48:47.679309 master-0 kubenswrapper[26425]: I0217 15:48:47.679273 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-dns-swift-storage-0\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"
Feb 17 15:48:47.679606 master-0 kubenswrapper[26425]: I0217 15:48:47.679566 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f8717e84-c9b5-4eff-9221-13fb96fac595-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0"
Feb 17 15:48:47.679822 master-0 kubenswrapper[26425]: I0217 15:48:47.679806 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0"
Feb 17 15:48:47.679909 master-0 kubenswrapper[26425]: I0217 15:48:47.679895 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzkwv\" (UniqueName: \"kubernetes.io/projected/f8717e84-c9b5-4eff-9221-13fb96fac595-kube-api-access-kzkwv\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0"
Feb 17 15:48:47.679988 master-0 kubenswrapper[26425]: I0217 15:48:47.679976 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmh2m\" (UniqueName: \"kubernetes.io/projected/644f188a-ec83-482d-8c99-4da13cfc19e3-kube-api-access-nmh2m\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"
Feb 17 15:48:47.680062 master-0 kubenswrapper[26425]: I0217 15:48:47.680050 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-dns-svc\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"
Feb 17 15:48:47.680187 master-0 kubenswrapper[26425]: I0217 15:48:47.680167 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f8717e84-c9b5-4eff-9221-13fb96fac595-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0"
Feb 17 15:48:47.680504 master-0 kubenswrapper[26425]: I0217 15:48:47.680395 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-scripts\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0"
Feb 17 15:48:47.680910 master-0 kubenswrapper[26425]: I0217 15:48:47.680895 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-config\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"
Feb 17 15:48:47.681005 master-0 kubenswrapper[26425]: I0217 15:48:47.680992 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"
Feb 17 15:48:47.681095 master-0 kubenswrapper[26425]: I0217 15:48:47.681080 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f8717e84-c9b5-4eff-9221-13fb96fac595-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0"
Feb 17 15:48:47.681245 master-0 kubenswrapper[26425]: I0217 15:48:47.681209 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-config\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0"
Feb 17 15:48:47.681351 master-0 kubenswrapper[26425]: I0217 15:48:47.681336 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"
Feb 17 15:48:47.716575 master-0 kubenswrapper[26425]: I0217 15:48:47.716517 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8gbxf"]
Feb 17 15:48:47.718366 master-0 kubenswrapper[26425]: I0217 15:48:47.718334 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8gbxf"
Feb 17 15:48:47.721700 master-0 kubenswrapper[26425]: I0217 15:48:47.721678 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 17 15:48:47.721837 master-0 kubenswrapper[26425]: I0217 15:48:47.721820 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 17 15:48:47.736278 master-0 kubenswrapper[26425]: I0217 15:48:47.736172 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8gbxf"]
Feb 17 15:48:47.784507 master-0 kubenswrapper[26425]: I0217 15:48:47.783607 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0"
Feb 17 15:48:47.784507 master-0 kubenswrapper[26425]: I0217 15:48:47.783663 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzkwv\" (UniqueName: \"kubernetes.io/projected/f8717e84-c9b5-4eff-9221-13fb96fac595-kube-api-access-kzkwv\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0"
Feb 17 15:48:47.784507 master-0 kubenswrapper[26425]: I0217 15:48:47.783691 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmh2m\" (UniqueName: \"kubernetes.io/projected/644f188a-ec83-482d-8c99-4da13cfc19e3-kube-api-access-nmh2m\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"
Feb 17 15:48:47.784507 master-0 kubenswrapper[26425]: I0217 15:48:47.783714 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-dns-svc\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" Feb 17 15:48:47.784507 master-0 kubenswrapper[26425]: I0217 15:48:47.783751 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-scripts\") pod \"nova-cell0-conductor-db-sync-8gbxf\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:48:47.784836 master-0 kubenswrapper[26425]: I0217 15:48:47.784542 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f8717e84-c9b5-4eff-9221-13fb96fac595-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0" Feb 17 15:48:47.784836 master-0 kubenswrapper[26425]: I0217 15:48:47.784627 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-scripts\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0" Feb 17 15:48:47.784836 master-0 kubenswrapper[26425]: I0217 15:48:47.784668 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wdvf\" (UniqueName: \"kubernetes.io/projected/62ef05a2-b338-4a41-9b86-147f7dd1e242-kube-api-access-9wdvf\") pod \"nova-cell0-conductor-db-sync-8gbxf\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:48:47.784836 master-0 kubenswrapper[26425]: I0217 15:48:47.784759 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-config\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" Feb 17 15:48:47.784963 master-0 kubenswrapper[26425]: I0217 15:48:47.784861 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" Feb 17 15:48:47.784963 master-0 kubenswrapper[26425]: I0217 15:48:47.784889 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8gbxf\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:48:47.784963 master-0 kubenswrapper[26425]: I0217 15:48:47.784915 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f8717e84-c9b5-4eff-9221-13fb96fac595-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0" Feb 17 15:48:47.785296 master-0 kubenswrapper[26425]: I0217 15:48:47.785246 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-config\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0" Feb 17 15:48:47.785389 master-0 kubenswrapper[26425]: I0217 15:48:47.785361 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" Feb 17 15:48:47.785519 master-0 kubenswrapper[26425]: I0217 15:48:47.785498 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-dns-swift-storage-0\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" Feb 17 15:48:47.785686 master-0 kubenswrapper[26425]: I0217 15:48:47.785553 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f8717e84-c9b5-4eff-9221-13fb96fac595-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0" Feb 17 15:48:47.785686 master-0 kubenswrapper[26425]: I0217 15:48:47.785649 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-config-data\") pod \"nova-cell0-conductor-db-sync-8gbxf\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:48:47.787021 master-0 kubenswrapper[26425]: I0217 15:48:47.786987 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-dns-swift-storage-0\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" Feb 17 15:48:47.787685 master-0 kubenswrapper[26425]: I0217 15:48:47.787648 26425 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" Feb 17 15:48:47.788360 master-0 kubenswrapper[26425]: I0217 15:48:47.788328 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0" Feb 17 15:48:47.788681 master-0 kubenswrapper[26425]: I0217 15:48:47.788659 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-dns-svc\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" Feb 17 15:48:47.790210 master-0 kubenswrapper[26425]: I0217 15:48:47.790177 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-config\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" Feb 17 15:48:47.792262 master-0 kubenswrapper[26425]: I0217 15:48:47.792193 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" Feb 17 15:48:47.808193 master-0 kubenswrapper[26425]: I0217 15:48:47.808143 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f8717e84-c9b5-4eff-9221-13fb96fac595-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0" Feb 17 15:48:47.808370 master-0 kubenswrapper[26425]: I0217 15:48:47.808212 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f8717e84-c9b5-4eff-9221-13fb96fac595-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0" Feb 17 15:48:47.808572 master-0 kubenswrapper[26425]: I0217 15:48:47.808526 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f8717e84-c9b5-4eff-9221-13fb96fac595-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0" Feb 17 15:48:47.812568 master-0 kubenswrapper[26425]: I0217 15:48:47.812505 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-scripts\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0" Feb 17 15:48:47.834795 master-0 kubenswrapper[26425]: I0217 15:48:47.834749 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzkwv\" (UniqueName: \"kubernetes.io/projected/f8717e84-c9b5-4eff-9221-13fb96fac595-kube-api-access-kzkwv\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0" Feb 17 15:48:47.834963 master-0 kubenswrapper[26425]: I0217 15:48:47.834823 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmh2m\" (UniqueName: 
\"kubernetes.io/projected/644f188a-ec83-482d-8c99-4da13cfc19e3-kube-api-access-nmh2m\") pod \"dnsmasq-dns-5f4c4c4d6c-fsk8m\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" Feb 17 15:48:47.838867 master-0 kubenswrapper[26425]: I0217 15:48:47.838816 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-config\") pod \"ironic-inspector-0\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") " pod="openstack/ironic-inspector-0" Feb 17 15:48:47.877032 master-0 kubenswrapper[26425]: I0217 15:48:47.876374 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" Feb 17 15:48:47.895852 master-0 kubenswrapper[26425]: I0217 15:48:47.895081 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-scripts\") pod \"nova-cell0-conductor-db-sync-8gbxf\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:48:47.895852 master-0 kubenswrapper[26425]: I0217 15:48:47.895149 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wdvf\" (UniqueName: \"kubernetes.io/projected/62ef05a2-b338-4a41-9b86-147f7dd1e242-kube-api-access-9wdvf\") pod \"nova-cell0-conductor-db-sync-8gbxf\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:48:47.895852 master-0 kubenswrapper[26425]: I0217 15:48:47.895203 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8gbxf\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " 
pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:48:47.895852 master-0 kubenswrapper[26425]: I0217 15:48:47.895383 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-config-data\") pod \"nova-cell0-conductor-db-sync-8gbxf\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:48:47.901678 master-0 kubenswrapper[26425]: I0217 15:48:47.901630 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8gbxf\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:48:47.902925 master-0 kubenswrapper[26425]: I0217 15:48:47.902880 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-scripts\") pod \"nova-cell0-conductor-db-sync-8gbxf\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:48:47.909763 master-0 kubenswrapper[26425]: I0217 15:48:47.909590 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-config-data\") pod \"nova-cell0-conductor-db-sync-8gbxf\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:48:47.913031 master-0 kubenswrapper[26425]: I0217 15:48:47.912998 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wdvf\" (UniqueName: \"kubernetes.io/projected/62ef05a2-b338-4a41-9b86-147f7dd1e242-kube-api-access-9wdvf\") pod \"nova-cell0-conductor-db-sync-8gbxf\" (UID: 
\"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:48:47.938558 master-0 kubenswrapper[26425]: I0217 15:48:47.936821 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 17 15:48:48.048438 master-0 kubenswrapper[26425]: I0217 15:48:48.047894 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:48.052810 master-0 kubenswrapper[26425]: I0217 15:48:48.051718 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:48:48.106946 master-0 kubenswrapper[26425]: I0217 15:48:48.104593 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-internal-tls-certs\") pod \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " Feb 17 15:48:48.106946 master-0 kubenswrapper[26425]: I0217 15:48:48.104691 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-combined-ca-bundle\") pod \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " Feb 17 15:48:48.106946 master-0 kubenswrapper[26425]: I0217 15:48:48.104711 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-scripts\") pod \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " Feb 17 15:48:48.106946 master-0 kubenswrapper[26425]: I0217 15:48:48.104792 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-config-data\") pod \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " Feb 17 15:48:48.106946 master-0 kubenswrapper[26425]: I0217 15:48:48.104850 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-logs\") pod \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " Feb 17 15:48:48.106946 master-0 kubenswrapper[26425]: I0217 15:48:48.104934 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxw7t\" (UniqueName: \"kubernetes.io/projected/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-kube-api-access-sxw7t\") pod \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " Feb 17 15:48:48.106946 master-0 kubenswrapper[26425]: I0217 15:48:48.104967 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-httpd-run\") pod \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " Feb 17 15:48:48.111713 master-0 kubenswrapper[26425]: I0217 15:48:48.111657 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ba50f35d-07b5-4db9-bc46-3ffeb03f3902" (UID: "ba50f35d-07b5-4db9-bc46-3ffeb03f3902"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:48:48.115105 master-0 kubenswrapper[26425]: I0217 15:48:48.115061 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^12c77599-e1b5-4cea-b05c-d638c506cfae\") pod \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\" (UID: \"ba50f35d-07b5-4db9-bc46-3ffeb03f3902\") " Feb 17 15:48:48.116210 master-0 kubenswrapper[26425]: I0217 15:48:48.116099 26425 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:48.117868 master-0 kubenswrapper[26425]: I0217 15:48:48.117825 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-logs" (OuterVolumeSpecName: "logs") pod "ba50f35d-07b5-4db9-bc46-3ffeb03f3902" (UID: "ba50f35d-07b5-4db9-bc46-3ffeb03f3902"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:48:48.124760 master-0 kubenswrapper[26425]: I0217 15:48:48.124704 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-kube-api-access-sxw7t" (OuterVolumeSpecName: "kube-api-access-sxw7t") pod "ba50f35d-07b5-4db9-bc46-3ffeb03f3902" (UID: "ba50f35d-07b5-4db9-bc46-3ffeb03f3902"). InnerVolumeSpecName "kube-api-access-sxw7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:48:48.132225 master-0 kubenswrapper[26425]: I0217 15:48:48.128184 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-scripts" (OuterVolumeSpecName: "scripts") pod "ba50f35d-07b5-4db9-bc46-3ffeb03f3902" (UID: "ba50f35d-07b5-4db9-bc46-3ffeb03f3902"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:48:48.170946 master-0 kubenswrapper[26425]: I0217 15:48:48.167014 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba50f35d-07b5-4db9-bc46-3ffeb03f3902" (UID: "ba50f35d-07b5-4db9-bc46-3ffeb03f3902"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:48:48.218894 master-0 kubenswrapper[26425]: I0217 15:48:48.218862 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:48.218999 master-0 kubenswrapper[26425]: I0217 15:48:48.218900 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:48.218999 master-0 kubenswrapper[26425]: I0217 15:48:48.218912 26425 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-logs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:48.218999 master-0 kubenswrapper[26425]: I0217 15:48:48.218926 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxw7t\" (UniqueName: \"kubernetes.io/projected/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-kube-api-access-sxw7t\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:48.245652 master-0 kubenswrapper[26425]: I0217 15:48:48.241657 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ba50f35d-07b5-4db9-bc46-3ffeb03f3902" (UID: 
"ba50f35d-07b5-4db9-bc46-3ffeb03f3902"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:48:48.245652 master-0 kubenswrapper[26425]: I0217 15:48:48.244534 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-config-data" (OuterVolumeSpecName: "config-data") pod "ba50f35d-07b5-4db9-bc46-3ffeb03f3902" (UID: "ba50f35d-07b5-4db9-bc46-3ffeb03f3902"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:48:48.321182 master-0 kubenswrapper[26425]: I0217 15:48:48.321128 26425 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:48.321182 master-0 kubenswrapper[26425]: I0217 15:48:48.321179 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba50f35d-07b5-4db9-bc46-3ffeb03f3902-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:48.420257 master-0 kubenswrapper[26425]: I0217 15:48:48.407414 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^12c77599-e1b5-4cea-b05c-d638c506cfae" (OuterVolumeSpecName: "glance") pod "ba50f35d-07b5-4db9-bc46-3ffeb03f3902" (UID: "ba50f35d-07b5-4db9-bc46-3ffeb03f3902"). InnerVolumeSpecName "pvc-a034608b-53d3-45d8-84b2-146bea988703". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 15:48:48.443597 master-0 kubenswrapper[26425]: I0217 15:48:48.441907 26425 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a034608b-53d3-45d8-84b2-146bea988703\" (UniqueName: \"kubernetes.io/csi/topolvm.io^12c77599-e1b5-4cea-b05c-d638c506cfae\") on node \"master-0\" " Feb 17 15:48:48.464913 master-0 kubenswrapper[26425]: I0217 15:48:48.464675 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d9ac85d2-8903-4fd7-b6eb-24054ea7881c\") pod \"glance-7b9c2-default-external-api-0\" (UID: \"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2\") " pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:48:48.483935 master-0 kubenswrapper[26425]: I0217 15:48:48.483671 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d5a2ac6-930f-43d0-873f-3bd2cc9df572" path="/var/lib/kubelet/pods/3d5a2ac6-930f-43d0-873f-3bd2cc9df572/volumes" Feb 17 15:48:48.486747 master-0 kubenswrapper[26425]: I0217 15:48:48.485222 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"] Feb 17 15:48:48.609120 master-0 kubenswrapper[26425]: I0217 15:48:48.609060 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:48:48.622985 master-0 kubenswrapper[26425]: I0217 15:48:48.622923 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" event={"ID":"644f188a-ec83-482d-8c99-4da13cfc19e3","Type":"ContainerStarted","Data":"806b82ee893ed3b8f67577203f56cf9cb8d57e73a1c9392d332c05514581de4d"} Feb 17 15:48:48.634718 master-0 kubenswrapper[26425]: I0217 15:48:48.633989 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c26c340-473b-49c9-a62f-1915fac7b655","Type":"ContainerStarted","Data":"8a9973a7ff3bb106d664be46a709e58a5e474e75a7df17f0390455d398f7d950"} Feb 17 15:48:48.670827 master-0 kubenswrapper[26425]: I0217 15:48:48.670782 26425 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 15:48:48.671085 master-0 kubenswrapper[26425]: I0217 15:48:48.671056 26425 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a034608b-53d3-45d8-84b2-146bea988703" (UniqueName: "kubernetes.io/csi/topolvm.io^12c77599-e1b5-4cea-b05c-d638c506cfae") on node "master-0" Feb 17 15:48:48.677910 master-0 kubenswrapper[26425]: I0217 15:48:48.672692 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-internal-api-0" event={"ID":"ba50f35d-07b5-4db9-bc46-3ffeb03f3902","Type":"ContainerDied","Data":"e3452447bdf57daf70816359d34b56fa1f99da267f5cdb153dd0b14114deaf8b"} Feb 17 15:48:48.677910 master-0 kubenswrapper[26425]: I0217 15:48:48.672750 26425 scope.go:117] "RemoveContainer" containerID="3b3ebec1c2e6e4204d4e1cecb8899d580c3baf1dd7f05ccef4f4a4a27dd8fd3d" Feb 17 15:48:48.677910 master-0 kubenswrapper[26425]: I0217 15:48:48.673124 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:48.773625 master-0 kubenswrapper[26425]: I0217 15:48:48.773566 26425 reconciler_common.go:293] "Volume detached for volume \"pvc-a034608b-53d3-45d8-84b2-146bea988703\" (UniqueName: \"kubernetes.io/csi/topolvm.io^12c77599-e1b5-4cea-b05c-d638c506cfae\") on node \"master-0\" DevicePath \"\"" Feb 17 15:48:48.808311 master-0 kubenswrapper[26425]: I0217 15:48:48.808213 26425 scope.go:117] "RemoveContainer" containerID="15801b47cfb7a0d53554af977658ddff8f9471db68d684526a5ea6cd4d82e176" Feb 17 15:48:48.845159 master-0 kubenswrapper[26425]: W0217 15:48:48.845120 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62ef05a2_b338_4a41_9b86_147f7dd1e242.slice/crio-7c18aca93825cc0aa6be7c292cdb8cb6f8be41d52805e4fd5aba218a85c587bc WatchSource:0}: Error finding container 7c18aca93825cc0aa6be7c292cdb8cb6f8be41d52805e4fd5aba218a85c587bc: Status 404 returned error can't find the container with id 7c18aca93825cc0aa6be7c292cdb8cb6f8be41d52805e4fd5aba218a85c587bc Feb 17 15:48:48.853354 master-0 kubenswrapper[26425]: I0217 15:48:48.853294 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 17 15:48:48.875514 master-0 kubenswrapper[26425]: I0217 15:48:48.874858 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7b9c2-default-internal-api-0"] Feb 17 15:48:48.900423 master-0 kubenswrapper[26425]: I0217 15:48:48.900342 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8gbxf"] Feb 17 15:48:48.918225 master-0 kubenswrapper[26425]: I0217 15:48:48.918152 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7b9c2-default-internal-api-0"] Feb 17 15:48:49.125749 master-0 kubenswrapper[26425]: I0217 15:48:49.125544 26425 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-7b9c2-default-internal-api-0"] Feb 17 15:48:49.126361 master-0 kubenswrapper[26425]: E0217 15:48:49.126327 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba50f35d-07b5-4db9-bc46-3ffeb03f3902" containerName="glance-httpd" Feb 17 15:48:49.126361 master-0 kubenswrapper[26425]: I0217 15:48:49.126361 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba50f35d-07b5-4db9-bc46-3ffeb03f3902" containerName="glance-httpd" Feb 17 15:48:49.126515 master-0 kubenswrapper[26425]: E0217 15:48:49.126405 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba50f35d-07b5-4db9-bc46-3ffeb03f3902" containerName="glance-log" Feb 17 15:48:49.126515 master-0 kubenswrapper[26425]: I0217 15:48:49.126414 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba50f35d-07b5-4db9-bc46-3ffeb03f3902" containerName="glance-log" Feb 17 15:48:49.126790 master-0 kubenswrapper[26425]: I0217 15:48:49.126749 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba50f35d-07b5-4db9-bc46-3ffeb03f3902" containerName="glance-log" Feb 17 15:48:49.128412 master-0 kubenswrapper[26425]: I0217 15:48:49.127846 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba50f35d-07b5-4db9-bc46-3ffeb03f3902" containerName="glance-httpd" Feb 17 15:48:49.129745 master-0 kubenswrapper[26425]: I0217 15:48:49.129526 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.134074 master-0 kubenswrapper[26425]: I0217 15:48:49.134014 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-7b9c2-default-internal-config-data" Feb 17 15:48:49.134914 master-0 kubenswrapper[26425]: I0217 15:48:49.134873 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 15:48:49.179054 master-0 kubenswrapper[26425]: I0217 15:48:49.178988 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7b9c2-default-internal-api-0"] Feb 17 15:48:49.295789 master-0 kubenswrapper[26425]: I0217 15:48:49.295721 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-combined-ca-bundle\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.296022 master-0 kubenswrapper[26425]: I0217 15:48:49.295836 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-logs\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.296022 master-0 kubenswrapper[26425]: I0217 15:48:49.295859 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-internal-tls-certs\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.296022 master-0 
kubenswrapper[26425]: I0217 15:48:49.295908 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-config-data\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.296197 master-0 kubenswrapper[26425]: I0217 15:48:49.296162 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-httpd-run\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.296307 master-0 kubenswrapper[26425]: I0217 15:48:49.296278 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-scripts\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.296362 master-0 kubenswrapper[26425]: I0217 15:48:49.296304 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc227\" (UniqueName: \"kubernetes.io/projected/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-kube-api-access-hc227\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.296362 master-0 kubenswrapper[26425]: I0217 15:48:49.296344 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a034608b-53d3-45d8-84b2-146bea988703\" (UniqueName: \"kubernetes.io/csi/topolvm.io^12c77599-e1b5-4cea-b05c-d638c506cfae\") pod 
\"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.315870 master-0 kubenswrapper[26425]: I0217 15:48:49.315556 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7b9c2-default-external-api-0"] Feb 17 15:48:49.398479 master-0 kubenswrapper[26425]: I0217 15:48:49.398414 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-scripts\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.399748 master-0 kubenswrapper[26425]: I0217 15:48:49.399690 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc227\" (UniqueName: \"kubernetes.io/projected/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-kube-api-access-hc227\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.399817 master-0 kubenswrapper[26425]: I0217 15:48:49.399779 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a034608b-53d3-45d8-84b2-146bea988703\" (UniqueName: \"kubernetes.io/csi/topolvm.io^12c77599-e1b5-4cea-b05c-d638c506cfae\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.399958 master-0 kubenswrapper[26425]: I0217 15:48:49.399925 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-combined-ca-bundle\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " 
pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.400028 master-0 kubenswrapper[26425]: I0217 15:48:49.400008 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-logs\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.400102 master-0 kubenswrapper[26425]: I0217 15:48:49.400033 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-internal-tls-certs\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.400102 master-0 kubenswrapper[26425]: I0217 15:48:49.400094 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-config-data\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.400184 master-0 kubenswrapper[26425]: I0217 15:48:49.400169 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-httpd-run\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.400788 master-0 kubenswrapper[26425]: I0217 15:48:49.400749 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-httpd-run\") pod \"glance-7b9c2-default-internal-api-0\" (UID: 
\"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.402319 master-0 kubenswrapper[26425]: I0217 15:48:49.402290 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-scripts\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.403977 master-0 kubenswrapper[26425]: I0217 15:48:49.402761 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-logs\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.405627 master-0 kubenswrapper[26425]: I0217 15:48:49.405158 26425 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 15:48:49.405627 master-0 kubenswrapper[26425]: I0217 15:48:49.405222 26425 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a034608b-53d3-45d8-84b2-146bea988703\" (UniqueName: \"kubernetes.io/csi/topolvm.io^12c77599-e1b5-4cea-b05c-d638c506cfae\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/0067ec78290aaf5ed99b46ed47c7cab15903d0f50e4c317ca4663ebd33bb5b9a/globalmount\"" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.406577 master-0 kubenswrapper[26425]: I0217 15:48:49.406168 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-config-data\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.417200 master-0 kubenswrapper[26425]: I0217 15:48:49.417136 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-internal-tls-certs\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.423038 master-0 kubenswrapper[26425]: I0217 15:48:49.423004 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-combined-ca-bundle\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.423489 master-0 kubenswrapper[26425]: I0217 15:48:49.423430 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc227\" (UniqueName: 
\"kubernetes.io/projected/f0a58f9b-b9fe-49ef-ba93-eadf411f6320-kube-api-access-hc227\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:49.697009 master-0 kubenswrapper[26425]: I0217 15:48:49.696952 26425 generic.go:334] "Generic (PLEG): container finished" podID="644f188a-ec83-482d-8c99-4da13cfc19e3" containerID="92178e5b6355c9585ba7d5edef43f9ac99142846a8de5573fe5ce0e0658e94e3" exitCode=0 Feb 17 15:48:49.698720 master-0 kubenswrapper[26425]: I0217 15:48:49.697009 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" event={"ID":"644f188a-ec83-482d-8c99-4da13cfc19e3","Type":"ContainerDied","Data":"92178e5b6355c9585ba7d5edef43f9ac99142846a8de5573fe5ce0e0658e94e3"} Feb 17 15:48:49.700179 master-0 kubenswrapper[26425]: I0217 15:48:49.700132 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-external-api-0" event={"ID":"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2","Type":"ContainerStarted","Data":"3488746a441e19648080c696df71cee91bd07080fed386eb103335dda6a49312"} Feb 17 15:48:49.704865 master-0 kubenswrapper[26425]: I0217 15:48:49.704819 26425 generic.go:334] "Generic (PLEG): container finished" podID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerID="36525ec258724fc1fb388c70c42115ae6d8db2e0a0b7bbaadcf803ac9a68e7a5" exitCode=0 Feb 17 15:48:49.705052 master-0 kubenswrapper[26425]: I0217 15:48:49.704905 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerDied","Data":"36525ec258724fc1fb388c70c42115ae6d8db2e0a0b7bbaadcf803ac9a68e7a5"} Feb 17 15:48:49.705052 master-0 kubenswrapper[26425]: I0217 15:48:49.704933 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" 
event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerStarted","Data":"060c08053f0ce8ad282bc67713854928358ded3f51ec3f835383286295b68192"} Feb 17 15:48:49.708858 master-0 kubenswrapper[26425]: I0217 15:48:49.708802 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8gbxf" event={"ID":"62ef05a2-b338-4a41-9b86-147f7dd1e242","Type":"ContainerStarted","Data":"7c18aca93825cc0aa6be7c292cdb8cb6f8be41d52805e4fd5aba218a85c587bc"} Feb 17 15:48:50.287826 master-0 kubenswrapper[26425]: I0217 15:48:50.287709 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a034608b-53d3-45d8-84b2-146bea988703\" (UniqueName: \"kubernetes.io/csi/topolvm.io^12c77599-e1b5-4cea-b05c-d638c506cfae\") pod \"glance-7b9c2-default-internal-api-0\" (UID: \"f0a58f9b-b9fe-49ef-ba93-eadf411f6320\") " pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:50.406536 master-0 kubenswrapper[26425]: I0217 15:48:50.400335 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:48:50.433529 master-0 kubenswrapper[26425]: I0217 15:48:50.430157 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba50f35d-07b5-4db9-bc46-3ffeb03f3902" path="/var/lib/kubelet/pods/ba50f35d-07b5-4db9-bc46-3ffeb03f3902/volumes" Feb 17 15:48:50.730825 master-0 kubenswrapper[26425]: I0217 15:48:50.730756 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-external-api-0" event={"ID":"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2","Type":"ContainerStarted","Data":"af30a5a53826fc0c99a4a95da92b620358953c669aabfafd09bf3107939b01d0"} Feb 17 15:48:50.734241 master-0 kubenswrapper[26425]: I0217 15:48:50.733877 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerStarted","Data":"ba49ab38278e044c6e411df83a4248cf404c880159f709711dee928409e505a4"} Feb 17 15:48:50.736263 master-0 kubenswrapper[26425]: I0217 15:48:50.736191 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" event={"ID":"644f188a-ec83-482d-8c99-4da13cfc19e3","Type":"ContainerStarted","Data":"5cadc85396378e7071c4b32c991e870e2e58a70e19c2efe0f7a4d45097ed21e7"} Feb 17 15:48:50.736428 master-0 kubenswrapper[26425]: I0217 15:48:50.736384 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" Feb 17 15:48:50.923531 master-0 kubenswrapper[26425]: I0217 15:48:50.922792 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" podStartSLOduration=3.921894742 podStartE2EDuration="3.921894742s" podCreationTimestamp="2026-02-17 15:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:48:50.912041095 +0000 UTC 
m=+1992.803764913" watchObservedRunningTime="2026-02-17 15:48:50.921894742 +0000 UTC m=+1992.813618560" Feb 17 15:48:51.331772 master-0 kubenswrapper[26425]: I0217 15:48:51.331694 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7b9c2-default-internal-api-0"] Feb 17 15:48:51.751939 master-0 kubenswrapper[26425]: I0217 15:48:51.751268 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-internal-api-0" event={"ID":"f0a58f9b-b9fe-49ef-ba93-eadf411f6320","Type":"ContainerStarted","Data":"cc33744bc4619549cec86e87f55defbdc5995653c225d46d33401dbef401399b"} Feb 17 15:48:51.754317 master-0 kubenswrapper[26425]: I0217 15:48:51.754282 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-external-api-0" event={"ID":"7d7b56fa-329f-4d45-9b44-7fd2a0b28ea2","Type":"ContainerStarted","Data":"0eff0be4cd28281bfb6597e2e62a781c565bb1ada26a6944fab69b14b153c3b1"} Feb 17 15:48:51.758101 master-0 kubenswrapper[26425]: I0217 15:48:51.758058 26425 generic.go:334] "Generic (PLEG): container finished" podID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerID="ba49ab38278e044c6e411df83a4248cf404c880159f709711dee928409e505a4" exitCode=0 Feb 17 15:48:51.758185 master-0 kubenswrapper[26425]: I0217 15:48:51.758121 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerDied","Data":"ba49ab38278e044c6e411df83a4248cf404c880159f709711dee928409e505a4"} Feb 17 15:48:52.772966 master-0 kubenswrapper[26425]: I0217 15:48:52.772895 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-internal-api-0" event={"ID":"f0a58f9b-b9fe-49ef-ba93-eadf411f6320","Type":"ContainerStarted","Data":"675786ea14be3d41cb6f3ed3f9c00b9374a240b0e1b6b605446ccc762d00c558"} Feb 17 15:48:54.733575 master-0 kubenswrapper[26425]: I0217 15:48:54.724212 26425 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7b9c2-default-external-api-0" podStartSLOduration=8.724191947 podStartE2EDuration="8.724191947s" podCreationTimestamp="2026-02-17 15:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:48:54.717782153 +0000 UTC m=+1996.609505991" watchObservedRunningTime="2026-02-17 15:48:54.724191947 +0000 UTC m=+1996.615915765" Feb 17 15:48:54.804897 master-0 kubenswrapper[26425]: I0217 15:48:54.804781 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerStarted","Data":"5a3b8e1e2542f10041b6fc51a2644cc53ab2356141237a3a40731f12bd9ee0b2"} Feb 17 15:48:55.822969 master-0 kubenswrapper[26425]: I0217 15:48:55.822842 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b9c2-default-internal-api-0" event={"ID":"f0a58f9b-b9fe-49ef-ba93-eadf411f6320","Type":"ContainerStarted","Data":"8e9ded32d3a1bd7bb77f887a6d7244cf3a3d9d53d4aca86dfbc2898d97569ed2"} Feb 17 15:48:57.878538 master-0 kubenswrapper[26425]: I0217 15:48:57.878484 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" Feb 17 15:48:58.611891 master-0 kubenswrapper[26425]: I0217 15:48:58.610969 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:48:58.614317 master-0 kubenswrapper[26425]: I0217 15:48:58.613956 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:48:58.665625 master-0 kubenswrapper[26425]: I0217 15:48:58.665561 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:48:58.687723 
master-0 kubenswrapper[26425]: I0217 15:48:58.687653 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:48:58.862635 master-0 kubenswrapper[26425]: I0217 15:48:58.862513 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:48:58.862635 master-0 kubenswrapper[26425]: I0217 15:48:58.862556 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:48:58.980840 master-0 kubenswrapper[26425]: I0217 15:48:58.980714 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7b9c2-default-internal-api-0" podStartSLOduration=10.980689065 podStartE2EDuration="10.980689065s" podCreationTimestamp="2026-02-17 15:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:48:58.973209605 +0000 UTC m=+2000.864933423" watchObservedRunningTime="2026-02-17 15:48:58.980689065 +0000 UTC m=+2000.872412873" Feb 17 15:48:59.299554 master-0 kubenswrapper[26425]: I0217 15:48:59.299447 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c77ddfc-d9zgc"] Feb 17 15:48:59.299904 master-0 kubenswrapper[26425]: I0217 15:48:59.299860 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" podUID="5af5f023-f51c-448d-9df7-d4e9ec69ca7e" containerName="dnsmasq-dns" containerID="cri-o://6ada8af73075365b531c9b92a3e2d0ef7c549082e6f6f04db4f79bd471468556" gracePeriod=10 Feb 17 15:48:59.883234 master-0 kubenswrapper[26425]: I0217 15:48:59.883165 26425 generic.go:334] "Generic (PLEG): container finished" podID="5af5f023-f51c-448d-9df7-d4e9ec69ca7e" containerID="6ada8af73075365b531c9b92a3e2d0ef7c549082e6f6f04db4f79bd471468556" exitCode=0 Feb 17 
15:48:59.883478 master-0 kubenswrapper[26425]: I0217 15:48:59.883317 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" event={"ID":"5af5f023-f51c-448d-9df7-d4e9ec69ca7e","Type":"ContainerDied","Data":"6ada8af73075365b531c9b92a3e2d0ef7c549082e6f6f04db4f79bd471468556"} Feb 17 15:49:00.435789 master-0 kubenswrapper[26425]: I0217 15:49:00.435674 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:49:00.435789 master-0 kubenswrapper[26425]: I0217 15:49:00.435730 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:49:00.479481 master-0 kubenswrapper[26425]: I0217 15:49:00.476431 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:49:00.482509 master-0 kubenswrapper[26425]: I0217 15:49:00.482132 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:49:00.561384 master-0 kubenswrapper[26425]: I0217 15:49:00.561346 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:49:00.895208 master-0 kubenswrapper[26425]: I0217 15:49:00.895147 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerStarted","Data":"6d197d03d437337854d389c5e3d5ac912eac3a3cce4b0c2e7cd94bc934018d7f"} Feb 17 15:49:00.903273 master-0 kubenswrapper[26425]: I0217 15:49:00.903240 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:49:00.903273 master-0 kubenswrapper[26425]: I0217 15:49:00.903267 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:49:00.903953 master-0 kubenswrapper[26425]: I0217 15:49:00.903764 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" event={"ID":"5af5f023-f51c-448d-9df7-d4e9ec69ca7e","Type":"ContainerDied","Data":"3e3eb754ec6736e360bb3284f6665d84bdaca1a1fe0e55b6c671b428bf38e288"} Feb 17 15:49:00.903953 master-0 kubenswrapper[26425]: I0217 15:49:00.903816 26425 scope.go:117] "RemoveContainer" containerID="6ada8af73075365b531c9b92a3e2d0ef7c549082e6f6f04db4f79bd471468556" Feb 17 15:49:00.904415 master-0 kubenswrapper[26425]: I0217 15:49:00.904327 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c77ddfc-d9zgc" Feb 17 15:49:00.904857 master-0 kubenswrapper[26425]: I0217 15:49:00.904800 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:49:00.904967 master-0 kubenswrapper[26425]: I0217 15:49:00.904945 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-7b9c2-default-internal-api-0" Feb 17 15:49:00.952325 master-0 kubenswrapper[26425]: I0217 15:49:00.952194 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:49:01.070324 master-0 kubenswrapper[26425]: I0217 15:49:01.070083 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-7b9c2-default-external-api-0" Feb 17 15:49:01.151743 master-0 kubenswrapper[26425]: I0217 15:49:01.151596 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-dns-swift-storage-0\") pod \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " Feb 17 15:49:01.151931 master-0 kubenswrapper[26425]: I0217 15:49:01.151792 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m49dz\" (UniqueName: \"kubernetes.io/projected/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-kube-api-access-m49dz\") pod \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " Feb 17 15:49:01.151931 master-0 kubenswrapper[26425]: I0217 15:49:01.151898 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-dns-svc\") pod \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " Feb 17 
15:49:01.152477 master-0 kubenswrapper[26425]: I0217 15:49:01.152030 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-config\") pod \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " Feb 17 15:49:01.152477 master-0 kubenswrapper[26425]: I0217 15:49:01.152070 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-ovsdbserver-sb\") pod \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " Feb 17 15:49:01.152477 master-0 kubenswrapper[26425]: I0217 15:49:01.152104 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-ovsdbserver-nb\") pod \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\" (UID: \"5af5f023-f51c-448d-9df7-d4e9ec69ca7e\") " Feb 17 15:49:01.158678 master-0 kubenswrapper[26425]: I0217 15:49:01.158615 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-kube-api-access-m49dz" (OuterVolumeSpecName: "kube-api-access-m49dz") pod "5af5f023-f51c-448d-9df7-d4e9ec69ca7e" (UID: "5af5f023-f51c-448d-9df7-d4e9ec69ca7e"). InnerVolumeSpecName "kube-api-access-m49dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:49:01.237493 master-0 kubenswrapper[26425]: I0217 15:49:01.234495 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5af5f023-f51c-448d-9df7-d4e9ec69ca7e" (UID: "5af5f023-f51c-448d-9df7-d4e9ec69ca7e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:49:01.251508 master-0 kubenswrapper[26425]: I0217 15:49:01.245905 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-config" (OuterVolumeSpecName: "config") pod "5af5f023-f51c-448d-9df7-d4e9ec69ca7e" (UID: "5af5f023-f51c-448d-9df7-d4e9ec69ca7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:49:01.255051 master-0 kubenswrapper[26425]: I0217 15:49:01.253098 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5af5f023-f51c-448d-9df7-d4e9ec69ca7e" (UID: "5af5f023-f51c-448d-9df7-d4e9ec69ca7e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:49:01.257043 master-0 kubenswrapper[26425]: I0217 15:49:01.256774 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m49dz\" (UniqueName: \"kubernetes.io/projected/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-kube-api-access-m49dz\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:01.257043 master-0 kubenswrapper[26425]: I0217 15:49:01.256816 26425 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:01.257043 master-0 kubenswrapper[26425]: I0217 15:49:01.256829 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:01.257043 master-0 kubenswrapper[26425]: I0217 15:49:01.256843 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:01.269936 master-0 kubenswrapper[26425]: I0217 15:49:01.263001 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5af5f023-f51c-448d-9df7-d4e9ec69ca7e" (UID: "5af5f023-f51c-448d-9df7-d4e9ec69ca7e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:49:01.269936 master-0 kubenswrapper[26425]: I0217 15:49:01.264037 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5af5f023-f51c-448d-9df7-d4e9ec69ca7e" (UID: "5af5f023-f51c-448d-9df7-d4e9ec69ca7e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:49:01.359625 master-0 kubenswrapper[26425]: I0217 15:49:01.359562 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:01.360014 master-0 kubenswrapper[26425]: I0217 15:49:01.360000 26425 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5af5f023-f51c-448d-9df7-d4e9ec69ca7e-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:01.446170 master-0 kubenswrapper[26425]: I0217 15:49:01.445713 26425 scope.go:117] "RemoveContainer" containerID="f13447e3e0c1f00a8eef7a0f6e5dee58961401584a0e92e2d945179fa0f56c49"
Feb 17 15:49:01.950069 master-0 kubenswrapper[26425]: I0217 15:49:01.949940 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerStarted","Data":"d23f76f22afdbd342efe0fde2d541c665a8ec8086058be0f335cc973aff81362"}
Feb 17 15:49:01.963262 master-0 kubenswrapper[26425]: I0217 15:49:01.963195 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8gbxf" event={"ID":"62ef05a2-b338-4a41-9b86-147f7dd1e242","Type":"ContainerStarted","Data":"2178fd764ff761480f789a27e868dfa3b752f5be6338e55bf282759a55624309"}
Feb 17 15:49:02.302254 master-0 kubenswrapper[26425]: I0217 15:49:02.302186 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c77ddfc-d9zgc"]
Feb 17 15:49:02.334168 master-0 kubenswrapper[26425]: I0217 15:49:02.334109 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b9c77ddfc-d9zgc"]
Feb 17 15:49:02.431130 master-0 kubenswrapper[26425]: I0217 15:49:02.431045 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af5f023-f51c-448d-9df7-d4e9ec69ca7e" path="/var/lib/kubelet/pods/5af5f023-f51c-448d-9df7-d4e9ec69ca7e/volumes"
Feb 17 15:49:02.522490 master-0 kubenswrapper[26425]: I0217 15:49:02.522370 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-8gbxf" podStartSLOduration=2.8057131440000003 podStartE2EDuration="15.522344839s" podCreationTimestamp="2026-02-17 15:48:47 +0000 UTC" firstStartedPulling="2026-02-17 15:48:48.882079574 +0000 UTC m=+1990.773803392" lastFinishedPulling="2026-02-17 15:49:01.598711269 +0000 UTC m=+2003.490435087" observedRunningTime="2026-02-17 15:49:02.51112681 +0000 UTC m=+2004.402850628" watchObservedRunningTime="2026-02-17 15:49:02.522344839 +0000 UTC m=+2004.414068647"
Feb 17 15:49:02.983283 master-0 kubenswrapper[26425]: I0217 15:49:02.983191 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 15:49:02.983283 master-0 kubenswrapper[26425]: I0217 15:49:02.983245 26425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 15:49:02.983753 master-0 kubenswrapper[26425]: I0217 15:49:02.983298 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerStarted","Data":"7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1"}
Feb 17 15:49:03.179781 master-0 kubenswrapper[26425]: I0217 15:49:03.179707 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-7b9c2-default-internal-api-0"
Feb 17 15:49:03.181165 master-0 kubenswrapper[26425]: I0217 15:49:03.181128 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-7b9c2-default-internal-api-0"
Feb 17 15:49:04.011331 master-0 kubenswrapper[26425]: I0217 15:49:04.011208 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerStarted","Data":"359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09"}
Feb 17 15:49:05.032139 master-0 kubenswrapper[26425]: I0217 15:49:05.032047 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 17 15:49:05.033274 master-0 kubenswrapper[26425]: I0217 15:49:05.033232 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 17 15:49:06.052866 master-0 kubenswrapper[26425]: I0217 15:49:06.052765 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Feb 17 15:49:06.090433 master-0 kubenswrapper[26425]: I0217 15:49:06.090345 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Feb 17 15:49:06.488479 master-0 kubenswrapper[26425]: I0217 15:49:06.488327 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=19.488302359 podStartE2EDuration="19.488302359s" podCreationTimestamp="2026-02-17 15:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:49:06.478159226 +0000 UTC m=+2008.369883064" watchObservedRunningTime="2026-02-17 15:49:06.488302359 +0000 UTC m=+2008.380026187"
Feb 17 15:49:07.938375 master-0 kubenswrapper[26425]: I0217 15:49:07.938320 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Feb 17 15:49:07.938375 master-0 kubenswrapper[26425]: I0217 15:49:07.938369 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 17 15:49:07.938375 master-0 kubenswrapper[26425]: I0217 15:49:07.938384 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Feb 17 15:49:07.939083 master-0 kubenswrapper[26425]: I0217 15:49:07.938420 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 17 15:49:07.975236 master-0 kubenswrapper[26425]: I0217 15:49:07.975163 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0"
Feb 17 15:49:07.979219 master-0 kubenswrapper[26425]: I0217 15:49:07.979170 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0"
Feb 17 15:49:08.089588 master-0 kubenswrapper[26425]: I0217 15:49:08.089529 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Feb 17 15:49:08.091766 master-0 kubenswrapper[26425]: I0217 15:49:08.091733 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Feb 17 15:49:15.083869 master-0 kubenswrapper[26425]: I0217 15:49:15.083776 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 17 15:49:15.084720 master-0 kubenswrapper[26425]: I0217 15:49:15.084135 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="ironic-inspector-httpd" containerID="cri-o://5a3b8e1e2542f10041b6fc51a2644cc53ab2356141237a3a40731f12bd9ee0b2" gracePeriod=60
Feb 17 15:49:15.084720 master-0 kubenswrapper[26425]: I0217 15:49:15.084255 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="inspector-dnsmasq" containerID="cri-o://359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09" gracePeriod=60
Feb 17 15:49:15.084720 master-0 kubenswrapper[26425]: I0217 15:49:15.084293 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="ironic-inspector" containerID="cri-o://6d197d03d437337854d389c5e3d5ac912eac3a3cce4b0c2e7cd94bc934018d7f" gracePeriod=60
Feb 17 15:49:15.084720 master-0 kubenswrapper[26425]: I0217 15:49:15.084364 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="inspector-httpboot" containerID="cri-o://d23f76f22afdbd342efe0fde2d541c665a8ec8086058be0f335cc973aff81362" gracePeriod=60
Feb 17 15:49:15.126899 master-0 kubenswrapper[26425]: I0217 15:49:15.126809 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="ramdisk-logs" containerID="cri-o://7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1" gracePeriod=60
Feb 17 15:49:15.210694 master-0 kubenswrapper[26425]: I0217 15:49:15.210549 26425 generic.go:334] "Generic (PLEG): container finished" podID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerID="7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1" exitCode=0
Feb 17 15:49:15.210694 master-0 kubenswrapper[26425]: I0217 15:49:15.210609 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerDied","Data":"7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1"}
Feb 17 15:49:15.217905 master-0 kubenswrapper[26425]: E0217 15:49:15.217843 26425 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8717e84_c9b5_4eff_9221_13fb96fac595.slice/crio-359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8717e84_c9b5_4eff_9221_13fb96fac595.slice/crio-conmon-359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8717e84_c9b5_4eff_9221_13fb96fac595.slice/crio-7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 15:49:15.218080 master-0 kubenswrapper[26425]: E0217 15:49:15.217990 26425 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8717e84_c9b5_4eff_9221_13fb96fac595.slice/crio-conmon-7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8717e84_c9b5_4eff_9221_13fb96fac595.slice/crio-conmon-359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 15:49:16.119753 master-0 kubenswrapper[26425]: I0217 15:49:16.119635 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5c5cd8d-bjbtl" podUID="3d5a2ac6-930f-43d0-873f-3bd2cc9df572" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.128.0.231:9696/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:49:16.230597 master-0 kubenswrapper[26425]: I0217 15:49:16.230536 26425 generic.go:334] "Generic (PLEG): container finished" podID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerID="359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09" exitCode=0
Feb 17 15:49:16.230597 master-0 kubenswrapper[26425]: I0217 15:49:16.230581 26425 generic.go:334] "Generic (PLEG): container finished" podID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerID="d23f76f22afdbd342efe0fde2d541c665a8ec8086058be0f335cc973aff81362" exitCode=0
Feb 17 15:49:16.230597 master-0 kubenswrapper[26425]: I0217 15:49:16.230590 26425 generic.go:334] "Generic (PLEG): container finished" podID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerID="5a3b8e1e2542f10041b6fc51a2644cc53ab2356141237a3a40731f12bd9ee0b2" exitCode=0
Feb 17 15:49:16.230898 master-0 kubenswrapper[26425]: I0217 15:49:16.230641 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerDied","Data":"359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09"}
Feb 17 15:49:16.230898 master-0 kubenswrapper[26425]: I0217 15:49:16.230740 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerDied","Data":"d23f76f22afdbd342efe0fde2d541c665a8ec8086058be0f335cc973aff81362"}
Feb 17 15:49:16.230898 master-0 kubenswrapper[26425]: I0217 15:49:16.230760 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerDied","Data":"5a3b8e1e2542f10041b6fc51a2644cc53ab2356141237a3a40731f12bd9ee0b2"}
Feb 17 15:49:17.023271 master-0 kubenswrapper[26425]: I0217 15:49:17.023209 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Feb 17 15:49:17.129434 master-0 kubenswrapper[26425]: I0217 15:49:17.129371 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzkwv\" (UniqueName: \"kubernetes.io/projected/f8717e84-c9b5-4eff-9221-13fb96fac595-kube-api-access-kzkwv\") pod \"f8717e84-c9b5-4eff-9221-13fb96fac595\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") "
Feb 17 15:49:17.129434 master-0 kubenswrapper[26425]: I0217 15:49:17.129431 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f8717e84-c9b5-4eff-9221-13fb96fac595-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"f8717e84-c9b5-4eff-9221-13fb96fac595\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") "
Feb 17 15:49:17.130088 master-0 kubenswrapper[26425]: I0217 15:49:17.129469 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f8717e84-c9b5-4eff-9221-13fb96fac595-etc-podinfo\") pod \"f8717e84-c9b5-4eff-9221-13fb96fac595\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") "
Feb 17 15:49:17.130088 master-0 kubenswrapper[26425]: I0217 15:49:17.129508 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-combined-ca-bundle\") pod \"f8717e84-c9b5-4eff-9221-13fb96fac595\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") "
Feb 17 15:49:17.130088 master-0 kubenswrapper[26425]: I0217 15:49:17.129625 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f8717e84-c9b5-4eff-9221-13fb96fac595-var-lib-ironic\") pod \"f8717e84-c9b5-4eff-9221-13fb96fac595\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") "
Feb 17 15:49:17.130088 master-0 kubenswrapper[26425]: I0217 15:49:17.129762 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-config\") pod \"f8717e84-c9b5-4eff-9221-13fb96fac595\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") "
Feb 17 15:49:17.130088 master-0 kubenswrapper[26425]: I0217 15:49:17.129897 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-scripts\") pod \"f8717e84-c9b5-4eff-9221-13fb96fac595\" (UID: \"f8717e84-c9b5-4eff-9221-13fb96fac595\") "
Feb 17 15:49:17.130088 master-0 kubenswrapper[26425]: I0217 15:49:17.130046 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8717e84-c9b5-4eff-9221-13fb96fac595-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "f8717e84-c9b5-4eff-9221-13fb96fac595" (UID: "f8717e84-c9b5-4eff-9221-13fb96fac595"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:49:17.130682 master-0 kubenswrapper[26425]: I0217 15:49:17.130627 26425 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f8717e84-c9b5-4eff-9221-13fb96fac595-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:17.132480 master-0 kubenswrapper[26425]: I0217 15:49:17.132411 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8717e84-c9b5-4eff-9221-13fb96fac595-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "f8717e84-c9b5-4eff-9221-13fb96fac595" (UID: "f8717e84-c9b5-4eff-9221-13fb96fac595"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:49:17.133075 master-0 kubenswrapper[26425]: I0217 15:49:17.133036 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f8717e84-c9b5-4eff-9221-13fb96fac595-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "f8717e84-c9b5-4eff-9221-13fb96fac595" (UID: "f8717e84-c9b5-4eff-9221-13fb96fac595"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 17 15:49:17.135292 master-0 kubenswrapper[26425]: I0217 15:49:17.135246 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-scripts" (OuterVolumeSpecName: "scripts") pod "f8717e84-c9b5-4eff-9221-13fb96fac595" (UID: "f8717e84-c9b5-4eff-9221-13fb96fac595"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:49:17.153544 master-0 kubenswrapper[26425]: I0217 15:49:17.153471 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8717e84-c9b5-4eff-9221-13fb96fac595-kube-api-access-kzkwv" (OuterVolumeSpecName: "kube-api-access-kzkwv") pod "f8717e84-c9b5-4eff-9221-13fb96fac595" (UID: "f8717e84-c9b5-4eff-9221-13fb96fac595"). InnerVolumeSpecName "kube-api-access-kzkwv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:49:17.236524 master-0 kubenswrapper[26425]: I0217 15:49:17.235255 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-scripts\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:17.236524 master-0 kubenswrapper[26425]: I0217 15:49:17.235314 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzkwv\" (UniqueName: \"kubernetes.io/projected/f8717e84-c9b5-4eff-9221-13fb96fac595-kube-api-access-kzkwv\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:17.236524 master-0 kubenswrapper[26425]: I0217 15:49:17.235333 26425 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f8717e84-c9b5-4eff-9221-13fb96fac595-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:17.236524 master-0 kubenswrapper[26425]: I0217 15:49:17.235346 26425 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f8717e84-c9b5-4eff-9221-13fb96fac595-var-lib-ironic\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:17.265712 master-0 kubenswrapper[26425]: I0217 15:49:17.257774 26425 generic.go:334] "Generic (PLEG): container finished" podID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerID="6d197d03d437337854d389c5e3d5ac912eac3a3cce4b0c2e7cd94bc934018d7f" exitCode=0
Feb 17 15:49:17.265712 master-0 kubenswrapper[26425]: I0217 15:49:17.257877 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerDied","Data":"6d197d03d437337854d389c5e3d5ac912eac3a3cce4b0c2e7cd94bc934018d7f"}
Feb 17 15:49:17.265712 master-0 kubenswrapper[26425]: I0217 15:49:17.257938 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f8717e84-c9b5-4eff-9221-13fb96fac595","Type":"ContainerDied","Data":"060c08053f0ce8ad282bc67713854928358ded3f51ec3f835383286295b68192"}
Feb 17 15:49:17.265712 master-0 kubenswrapper[26425]: I0217 15:49:17.257965 26425 scope.go:117] "RemoveContainer" containerID="359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09"
Feb 17 15:49:17.265712 master-0 kubenswrapper[26425]: I0217 15:49:17.258278 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Feb 17 15:49:17.292938 master-0 kubenswrapper[26425]: I0217 15:49:17.292880 26425 scope.go:117] "RemoveContainer" containerID="7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1"
Feb 17 15:49:17.325537 master-0 kubenswrapper[26425]: I0217 15:49:17.325472 26425 scope.go:117] "RemoveContainer" containerID="d23f76f22afdbd342efe0fde2d541c665a8ec8086058be0f335cc973aff81362"
Feb 17 15:49:17.346745 master-0 kubenswrapper[26425]: I0217 15:49:17.346672 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-config" (OuterVolumeSpecName: "config") pod "f8717e84-c9b5-4eff-9221-13fb96fac595" (UID: "f8717e84-c9b5-4eff-9221-13fb96fac595"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:49:17.356684 master-0 kubenswrapper[26425]: I0217 15:49:17.356607 26425 scope.go:117] "RemoveContainer" containerID="6d197d03d437337854d389c5e3d5ac912eac3a3cce4b0c2e7cd94bc934018d7f"
Feb 17 15:49:17.382793 master-0 kubenswrapper[26425]: I0217 15:49:17.382636 26425 scope.go:117] "RemoveContainer" containerID="5a3b8e1e2542f10041b6fc51a2644cc53ab2356141237a3a40731f12bd9ee0b2"
Feb 17 15:49:17.400797 master-0 kubenswrapper[26425]: I0217 15:49:17.400735 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8717e84-c9b5-4eff-9221-13fb96fac595" (UID: "f8717e84-c9b5-4eff-9221-13fb96fac595"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:49:17.409115 master-0 kubenswrapper[26425]: I0217 15:49:17.408993 26425 scope.go:117] "RemoveContainer" containerID="ba49ab38278e044c6e411df83a4248cf404c880159f709711dee928409e505a4"
Feb 17 15:49:17.443093 master-0 kubenswrapper[26425]: I0217 15:49:17.442949 26425 scope.go:117] "RemoveContainer" containerID="36525ec258724fc1fb388c70c42115ae6d8db2e0a0b7bbaadcf803ac9a68e7a5"
Feb 17 15:49:17.443405 master-0 kubenswrapper[26425]: I0217 15:49:17.443107 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:17.446723 master-0 kubenswrapper[26425]: I0217 15:49:17.443482 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8717e84-c9b5-4eff-9221-13fb96fac595-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:17.500632 master-0 kubenswrapper[26425]: I0217 15:49:17.499518 26425 scope.go:117] "RemoveContainer" containerID="359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09"
Feb 17 15:49:17.500632 master-0 kubenswrapper[26425]: E0217 15:49:17.500049 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09\": container with ID starting with 359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09 not found: ID does not exist" containerID="359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09"
Feb 17 15:49:17.500632 master-0 kubenswrapper[26425]: I0217 15:49:17.500111 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09"} err="failed to get container status \"359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09\": rpc error: code = NotFound desc = could not find container \"359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09\": container with ID starting with 359d9a74e030e5fda8065c0568c1ff3d2c29b2962a9dcfb3de85a1973a678a09 not found: ID does not exist"
Feb 17 15:49:17.500632 master-0 kubenswrapper[26425]: I0217 15:49:17.500145 26425 scope.go:117] "RemoveContainer" containerID="7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1"
Feb 17 15:49:17.500632 master-0 kubenswrapper[26425]: E0217 15:49:17.500491 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1\": container with ID starting with 7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1 not found: ID does not exist" containerID="7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1"
Feb 17 15:49:17.500632 master-0 kubenswrapper[26425]: I0217 15:49:17.500526 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1"} err="failed to get container status \"7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1\": rpc error: code = NotFound desc = could not find container \"7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1\": container with ID starting with 7541796015f7c5871019af7753c35e1a724d06ca1176c0d56f3bc20af06df2a1 not found: ID does not exist"
Feb 17 15:49:17.500632 master-0 kubenswrapper[26425]: I0217 15:49:17.500547 26425 scope.go:117] "RemoveContainer" containerID="d23f76f22afdbd342efe0fde2d541c665a8ec8086058be0f335cc973aff81362"
Feb 17 15:49:17.500940 master-0 kubenswrapper[26425]: E0217 15:49:17.500809 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d23f76f22afdbd342efe0fde2d541c665a8ec8086058be0f335cc973aff81362\": container with ID starting with d23f76f22afdbd342efe0fde2d541c665a8ec8086058be0f335cc973aff81362 not found: ID does not exist" containerID="d23f76f22afdbd342efe0fde2d541c665a8ec8086058be0f335cc973aff81362"
Feb 17 15:49:17.500940 master-0 kubenswrapper[26425]: I0217 15:49:17.500832 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d23f76f22afdbd342efe0fde2d541c665a8ec8086058be0f335cc973aff81362"} err="failed to get container status \"d23f76f22afdbd342efe0fde2d541c665a8ec8086058be0f335cc973aff81362\": rpc error: code = NotFound desc = could not find container \"d23f76f22afdbd342efe0fde2d541c665a8ec8086058be0f335cc973aff81362\": container with ID starting with d23f76f22afdbd342efe0fde2d541c665a8ec8086058be0f335cc973aff81362 not found: ID does not exist"
Feb 17 15:49:17.500940 master-0 kubenswrapper[26425]: I0217 15:49:17.500848 26425 scope.go:117] "RemoveContainer" containerID="6d197d03d437337854d389c5e3d5ac912eac3a3cce4b0c2e7cd94bc934018d7f"
Feb 17 15:49:17.501095 master-0 kubenswrapper[26425]: E0217 15:49:17.501047 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d197d03d437337854d389c5e3d5ac912eac3a3cce4b0c2e7cd94bc934018d7f\": container with ID starting with 6d197d03d437337854d389c5e3d5ac912eac3a3cce4b0c2e7cd94bc934018d7f not found: ID does not exist" containerID="6d197d03d437337854d389c5e3d5ac912eac3a3cce4b0c2e7cd94bc934018d7f"
Feb 17 15:49:17.501095 master-0 kubenswrapper[26425]: I0217 15:49:17.501076 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d197d03d437337854d389c5e3d5ac912eac3a3cce4b0c2e7cd94bc934018d7f"} err="failed to get container status \"6d197d03d437337854d389c5e3d5ac912eac3a3cce4b0c2e7cd94bc934018d7f\": rpc error: code = NotFound desc = could not find container \"6d197d03d437337854d389c5e3d5ac912eac3a3cce4b0c2e7cd94bc934018d7f\": container with ID starting with 6d197d03d437337854d389c5e3d5ac912eac3a3cce4b0c2e7cd94bc934018d7f not found: ID does not exist"
Feb 17 15:49:17.501095 master-0 kubenswrapper[26425]: I0217 15:49:17.501088 26425 scope.go:117] "RemoveContainer" containerID="5a3b8e1e2542f10041b6fc51a2644cc53ab2356141237a3a40731f12bd9ee0b2"
Feb 17 15:49:17.501587 master-0 kubenswrapper[26425]: E0217 15:49:17.501383 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a3b8e1e2542f10041b6fc51a2644cc53ab2356141237a3a40731f12bd9ee0b2\": container with ID starting with 5a3b8e1e2542f10041b6fc51a2644cc53ab2356141237a3a40731f12bd9ee0b2 not found: ID does not exist" containerID="5a3b8e1e2542f10041b6fc51a2644cc53ab2356141237a3a40731f12bd9ee0b2"
Feb 17 15:49:17.501587 master-0 kubenswrapper[26425]: I0217 15:49:17.501433 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a3b8e1e2542f10041b6fc51a2644cc53ab2356141237a3a40731f12bd9ee0b2"} err="failed to get container status \"5a3b8e1e2542f10041b6fc51a2644cc53ab2356141237a3a40731f12bd9ee0b2\": rpc error: code = NotFound desc = could not find container \"5a3b8e1e2542f10041b6fc51a2644cc53ab2356141237a3a40731f12bd9ee0b2\": container with ID starting with 5a3b8e1e2542f10041b6fc51a2644cc53ab2356141237a3a40731f12bd9ee0b2 not found: ID does not exist"
Feb 17 15:49:17.501587 master-0 kubenswrapper[26425]: I0217 15:49:17.501488 26425 scope.go:117] "RemoveContainer" containerID="ba49ab38278e044c6e411df83a4248cf404c880159f709711dee928409e505a4"
Feb 17 15:49:17.501748 master-0 kubenswrapper[26425]: E0217 15:49:17.501722 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba49ab38278e044c6e411df83a4248cf404c880159f709711dee928409e505a4\": container with ID starting with ba49ab38278e044c6e411df83a4248cf404c880159f709711dee928409e505a4 not found: ID does not exist" containerID="ba49ab38278e044c6e411df83a4248cf404c880159f709711dee928409e505a4"
Feb 17 15:49:17.501796 master-0 kubenswrapper[26425]: I0217 15:49:17.501749 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba49ab38278e044c6e411df83a4248cf404c880159f709711dee928409e505a4"} err="failed to get container status \"ba49ab38278e044c6e411df83a4248cf404c880159f709711dee928409e505a4\": rpc error: code = NotFound desc = could not find container \"ba49ab38278e044c6e411df83a4248cf404c880159f709711dee928409e505a4\": container with ID starting with ba49ab38278e044c6e411df83a4248cf404c880159f709711dee928409e505a4 not found: ID does not exist"
Feb 17 15:49:17.501796 master-0 kubenswrapper[26425]: I0217 15:49:17.501764 26425 scope.go:117] "RemoveContainer" containerID="36525ec258724fc1fb388c70c42115ae6d8db2e0a0b7bbaadcf803ac9a68e7a5"
Feb 17 15:49:17.501993 master-0 kubenswrapper[26425]: E0217 15:49:17.501955 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36525ec258724fc1fb388c70c42115ae6d8db2e0a0b7bbaadcf803ac9a68e7a5\": container with ID starting with 36525ec258724fc1fb388c70c42115ae6d8db2e0a0b7bbaadcf803ac9a68e7a5 not found: ID does not exist" containerID="36525ec258724fc1fb388c70c42115ae6d8db2e0a0b7bbaadcf803ac9a68e7a5"
Feb 17 15:49:17.502043 master-0 kubenswrapper[26425]: I0217 15:49:17.501990 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36525ec258724fc1fb388c70c42115ae6d8db2e0a0b7bbaadcf803ac9a68e7a5"} err="failed to get container status \"36525ec258724fc1fb388c70c42115ae6d8db2e0a0b7bbaadcf803ac9a68e7a5\": rpc error: code = NotFound desc = could not find container \"36525ec258724fc1fb388c70c42115ae6d8db2e0a0b7bbaadcf803ac9a68e7a5\": container with ID starting with 36525ec258724fc1fb388c70c42115ae6d8db2e0a0b7bbaadcf803ac9a68e7a5 not found: ID does not exist"
Feb 17 15:49:17.605147 master-0 kubenswrapper[26425]: I0217 15:49:17.605042 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 17 15:49:17.615389 master-0 kubenswrapper[26425]: I0217 15:49:17.615241 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 17 15:49:17.666264 master-0 kubenswrapper[26425]: I0217 15:49:17.666124 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"]
Feb 17 15:49:17.667022 master-0 kubenswrapper[26425]: E0217 15:49:17.666981 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af5f023-f51c-448d-9df7-d4e9ec69ca7e" containerName="init"
Feb 17 15:49:17.667022 master-0 kubenswrapper[26425]: I0217 15:49:17.667011 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af5f023-f51c-448d-9df7-d4e9ec69ca7e" containerName="init"
Feb 17 15:49:17.667153 master-0 kubenswrapper[26425]: E0217 15:49:17.667051 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="inspector-pxe-init"
Feb 17 15:49:17.667153 master-0 kubenswrapper[26425]: I0217 15:49:17.667062 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="inspector-pxe-init"
Feb 17 15:49:17.667153 master-0 kubenswrapper[26425]: E0217 15:49:17.667087 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="inspector-httpboot"
Feb 17 15:49:17.667153 master-0 kubenswrapper[26425]: I0217 15:49:17.667096 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="inspector-httpboot"
Feb 17 15:49:17.667153 master-0 kubenswrapper[26425]: E0217 15:49:17.667114 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="ramdisk-logs"
Feb 17 15:49:17.667153 master-0 kubenswrapper[26425]: I0217 15:49:17.667125 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="ramdisk-logs"
Feb 17 15:49:17.667406 master-0 kubenswrapper[26425]: E0217 15:49:17.667152 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="ironic-inspector"
Feb 17 15:49:17.667406 master-0 kubenswrapper[26425]: I0217 15:49:17.667181 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="ironic-inspector"
Feb 17 15:49:17.667406 master-0 kubenswrapper[26425]: E0217 15:49:17.667208 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="ironic-inspector-httpd"
Feb 17 15:49:17.667406 master-0 kubenswrapper[26425]: I0217 15:49:17.667218 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="ironic-inspector-httpd"
Feb 17 15:49:17.667406 master-0 kubenswrapper[26425]: E0217 15:49:17.667230 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="inspector-dnsmasq"
Feb 17 15:49:17.667406 master-0 kubenswrapper[26425]: I0217 15:49:17.667239 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="inspector-dnsmasq"
Feb 17 15:49:17.667406 master-0 kubenswrapper[26425]: E0217 15:49:17.667260 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="ironic-python-agent-init"
Feb 17 15:49:17.667406 master-0 kubenswrapper[26425]: I0217 15:49:17.667269 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="ironic-python-agent-init"
Feb 17 15:49:17.667406 master-0 kubenswrapper[26425]: E0217 15:49:17.667302 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af5f023-f51c-448d-9df7-d4e9ec69ca7e" containerName="dnsmasq-dns"
Feb 17 15:49:17.667406 master-0 kubenswrapper[26425]: I0217 15:49:17.667310 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af5f023-f51c-448d-9df7-d4e9ec69ca7e" containerName="dnsmasq-dns"
Feb 17 15:49:17.667741 master-0 kubenswrapper[26425]: I0217 15:49:17.667647 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af5f023-f51c-448d-9df7-d4e9ec69ca7e" containerName="dnsmasq-dns"
Feb 17 15:49:17.667741 master-0 kubenswrapper[26425]: I0217 15:49:17.667738 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="ramdisk-logs"
Feb 17 15:49:17.667809 master-0 kubenswrapper[26425]: I0217 15:49:17.667775 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="inspector-httpboot"
Feb 17 15:49:17.667839 master-0 kubenswrapper[26425]: I0217 15:49:17.667812 26425
memory_manager.go:354] "RemoveStaleState removing state" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="ironic-inspector-httpd" Feb 17 15:49:17.667884 master-0 kubenswrapper[26425]: I0217 15:49:17.667863 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="ironic-inspector" Feb 17 15:49:17.667923 master-0 kubenswrapper[26425]: I0217 15:49:17.667891 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" containerName="inspector-dnsmasq" Feb 17 15:49:17.672580 master-0 kubenswrapper[26425]: I0217 15:49:17.672526 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 17 15:49:17.677809 master-0 kubenswrapper[26425]: I0217 15:49:17.677762 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 17 15:49:17.677976 master-0 kubenswrapper[26425]: I0217 15:49:17.677939 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Feb 17 15:49:17.678013 master-0 kubenswrapper[26425]: I0217 15:49:17.677982 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 17 15:49:17.678107 master-0 kubenswrapper[26425]: I0217 15:49:17.678049 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 17 15:49:17.678154 master-0 kubenswrapper[26425]: I0217 15:49:17.678130 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Feb 17 15:49:17.697943 master-0 kubenswrapper[26425]: I0217 15:49:17.697258 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 17 15:49:17.750854 master-0 kubenswrapper[26425]: I0217 15:49:17.750795 26425 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.751106 master-0 kubenswrapper[26425]: I0217 15:49:17.750982 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-scripts\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.751106 master-0 kubenswrapper[26425]: I0217 15:49:17.751061 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/5ca87f26-e544-4259-a902-8a8fb4834a4e-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.751229 master-0 kubenswrapper[26425]: I0217 15:49:17.751143 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-config\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.751229 master-0 kubenswrapper[26425]: I0217 15:49:17.751214 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5ca87f26-e544-4259-a902-8a8fb4834a4e-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.751342 master-0 kubenswrapper[26425]: 
I0217 15:49:17.751260 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgvbq\" (UniqueName: \"kubernetes.io/projected/5ca87f26-e544-4259-a902-8a8fb4834a4e-kube-api-access-dgvbq\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.751439 master-0 kubenswrapper[26425]: I0217 15:49:17.751391 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.751525 master-0 kubenswrapper[26425]: I0217 15:49:17.751464 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.751576 master-0 kubenswrapper[26425]: I0217 15:49:17.751531 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/5ca87f26-e544-4259-a902-8a8fb4834a4e-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.853358 master-0 kubenswrapper[26425]: I0217 15:49:17.853290 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/5ca87f26-e544-4259-a902-8a8fb4834a4e-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.853627 master-0 
kubenswrapper[26425]: I0217 15:49:17.853454 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.853627 master-0 kubenswrapper[26425]: I0217 15:49:17.853544 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-scripts\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.853627 master-0 kubenswrapper[26425]: I0217 15:49:17.853578 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/5ca87f26-e544-4259-a902-8a8fb4834a4e-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.853627 master-0 kubenswrapper[26425]: I0217 15:49:17.853610 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-config\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.853765 master-0 kubenswrapper[26425]: I0217 15:49:17.853641 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5ca87f26-e544-4259-a902-8a8fb4834a4e-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.853765 master-0 kubenswrapper[26425]: I0217 15:49:17.853668 26425 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgvbq\" (UniqueName: \"kubernetes.io/projected/5ca87f26-e544-4259-a902-8a8fb4834a4e-kube-api-access-dgvbq\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.853765 master-0 kubenswrapper[26425]: I0217 15:49:17.853697 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.853765 master-0 kubenswrapper[26425]: I0217 15:49:17.853720 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.856966 master-0 kubenswrapper[26425]: I0217 15:49:17.856913 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/5ca87f26-e544-4259-a902-8a8fb4834a4e-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.859417 master-0 kubenswrapper[26425]: I0217 15:49:17.859305 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.859533 master-0 kubenswrapper[26425]: I0217 15:49:17.859507 26425 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/5ca87f26-e544-4259-a902-8a8fb4834a4e-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.859776 master-0 kubenswrapper[26425]: I0217 15:49:17.859739 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5ca87f26-e544-4259-a902-8a8fb4834a4e-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.859985 master-0 kubenswrapper[26425]: I0217 15:49:17.859948 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.860047 master-0 kubenswrapper[26425]: I0217 15:49:17.859962 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-scripts\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.862444 master-0 kubenswrapper[26425]: I0217 15:49:17.861763 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.862444 master-0 kubenswrapper[26425]: I0217 15:49:17.862407 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ca87f26-e544-4259-a902-8a8fb4834a4e-config\") 
pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:17.883086 master-0 kubenswrapper[26425]: I0217 15:49:17.883026 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgvbq\" (UniqueName: \"kubernetes.io/projected/5ca87f26-e544-4259-a902-8a8fb4834a4e-kube-api-access-dgvbq\") pod \"ironic-inspector-0\" (UID: \"5ca87f26-e544-4259-a902-8a8fb4834a4e\") " pod="openstack/ironic-inspector-0" Feb 17 15:49:18.009237 master-0 kubenswrapper[26425]: I0217 15:49:18.009108 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 17 15:49:18.408446 master-0 kubenswrapper[26425]: I0217 15:49:18.408385 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8717e84-c9b5-4eff-9221-13fb96fac595" path="/var/lib/kubelet/pods/f8717e84-c9b5-4eff-9221-13fb96fac595/volumes" Feb 17 15:49:18.558319 master-0 kubenswrapper[26425]: I0217 15:49:18.558252 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 17 15:49:19.318851 master-0 kubenswrapper[26425]: I0217 15:49:19.318687 26425 generic.go:334] "Generic (PLEG): container finished" podID="5ca87f26-e544-4259-a902-8a8fb4834a4e" containerID="eaeafdedc620f4b263dcc31444f2c3272bb60cec52ba86894f8ac2fed23bd0d7" exitCode=0 Feb 17 15:49:19.318851 master-0 kubenswrapper[26425]: I0217 15:49:19.318748 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5ca87f26-e544-4259-a902-8a8fb4834a4e","Type":"ContainerDied","Data":"eaeafdedc620f4b263dcc31444f2c3272bb60cec52ba86894f8ac2fed23bd0d7"} Feb 17 15:49:19.318851 master-0 kubenswrapper[26425]: I0217 15:49:19.318780 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" 
event={"ID":"5ca87f26-e544-4259-a902-8a8fb4834a4e","Type":"ContainerStarted","Data":"2f59ac0b6ffa8d07bfe805a0125e93fc92499703159938b706a038fcec0b27cb"} Feb 17 15:49:21.342925 master-0 kubenswrapper[26425]: I0217 15:49:21.342864 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5ca87f26-e544-4259-a902-8a8fb4834a4e","Type":"ContainerStarted","Data":"9d1552f9f958272c83f9523dfe3c441038968b53244ce6c195e257d1cb5fe05d"} Feb 17 15:49:22.368686 master-0 kubenswrapper[26425]: I0217 15:49:22.366150 26425 generic.go:334] "Generic (PLEG): container finished" podID="5ca87f26-e544-4259-a902-8a8fb4834a4e" containerID="9d1552f9f958272c83f9523dfe3c441038968b53244ce6c195e257d1cb5fe05d" exitCode=0 Feb 17 15:49:22.368686 master-0 kubenswrapper[26425]: I0217 15:49:22.366212 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5ca87f26-e544-4259-a902-8a8fb4834a4e","Type":"ContainerDied","Data":"9d1552f9f958272c83f9523dfe3c441038968b53244ce6c195e257d1cb5fe05d"} Feb 17 15:49:23.390152 master-0 kubenswrapper[26425]: I0217 15:49:23.390001 26425 generic.go:334] "Generic (PLEG): container finished" podID="62ef05a2-b338-4a41-9b86-147f7dd1e242" containerID="2178fd764ff761480f789a27e868dfa3b752f5be6338e55bf282759a55624309" exitCode=0 Feb 17 15:49:23.390679 master-0 kubenswrapper[26425]: I0217 15:49:23.390146 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8gbxf" event={"ID":"62ef05a2-b338-4a41-9b86-147f7dd1e242","Type":"ContainerDied","Data":"2178fd764ff761480f789a27e868dfa3b752f5be6338e55bf282759a55624309"} Feb 17 15:49:23.396366 master-0 kubenswrapper[26425]: I0217 15:49:23.396303 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5ca87f26-e544-4259-a902-8a8fb4834a4e","Type":"ContainerStarted","Data":"d5fc3a60a0593c75eb8af176373ededc96912293c5c49d72d7edfa7f03d15b6b"} Feb 17 
15:49:23.396543 master-0 kubenswrapper[26425]: I0217 15:49:23.396373 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5ca87f26-e544-4259-a902-8a8fb4834a4e","Type":"ContainerStarted","Data":"8e03d47666af57ec064bba9aef6d56a36850a7756bb673e6ba685426ad1b6d33"} Feb 17 15:49:24.457377 master-0 kubenswrapper[26425]: I0217 15:49:24.456727 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5ca87f26-e544-4259-a902-8a8fb4834a4e","Type":"ContainerStarted","Data":"d774d7243a0fe0954f18eaa003a4223c69be75b89647f271e608b1c4ea156475"} Feb 17 15:49:24.457377 master-0 kubenswrapper[26425]: I0217 15:49:24.457018 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5ca87f26-e544-4259-a902-8a8fb4834a4e","Type":"ContainerStarted","Data":"2e5c8643feeb66eb0ac993bb60994796adb876c0877d710d04915d4efcc572d7"} Feb 17 15:49:24.918787 master-0 kubenswrapper[26425]: I0217 15:49:24.918151 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:49:25.056874 master-0 kubenswrapper[26425]: I0217 15:49:25.056749 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-config-data\") pod \"62ef05a2-b338-4a41-9b86-147f7dd1e242\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " Feb 17 15:49:25.056874 master-0 kubenswrapper[26425]: I0217 15:49:25.056811 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wdvf\" (UniqueName: \"kubernetes.io/projected/62ef05a2-b338-4a41-9b86-147f7dd1e242-kube-api-access-9wdvf\") pod \"62ef05a2-b338-4a41-9b86-147f7dd1e242\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " Feb 17 15:49:25.056874 master-0 kubenswrapper[26425]: I0217 15:49:25.056852 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-combined-ca-bundle\") pod \"62ef05a2-b338-4a41-9b86-147f7dd1e242\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " Feb 17 15:49:25.059861 master-0 kubenswrapper[26425]: I0217 15:49:25.057137 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-scripts\") pod \"62ef05a2-b338-4a41-9b86-147f7dd1e242\" (UID: \"62ef05a2-b338-4a41-9b86-147f7dd1e242\") " Feb 17 15:49:25.060552 master-0 kubenswrapper[26425]: I0217 15:49:25.060354 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ef05a2-b338-4a41-9b86-147f7dd1e242-kube-api-access-9wdvf" (OuterVolumeSpecName: "kube-api-access-9wdvf") pod "62ef05a2-b338-4a41-9b86-147f7dd1e242" (UID: "62ef05a2-b338-4a41-9b86-147f7dd1e242"). InnerVolumeSpecName "kube-api-access-9wdvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:49:25.070357 master-0 kubenswrapper[26425]: I0217 15:49:25.070270 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-scripts" (OuterVolumeSpecName: "scripts") pod "62ef05a2-b338-4a41-9b86-147f7dd1e242" (UID: "62ef05a2-b338-4a41-9b86-147f7dd1e242"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:49:25.092242 master-0 kubenswrapper[26425]: I0217 15:49:25.092150 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62ef05a2-b338-4a41-9b86-147f7dd1e242" (UID: "62ef05a2-b338-4a41-9b86-147f7dd1e242"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:49:25.100420 master-0 kubenswrapper[26425]: I0217 15:49:25.100361 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-config-data" (OuterVolumeSpecName: "config-data") pod "62ef05a2-b338-4a41-9b86-147f7dd1e242" (UID: "62ef05a2-b338-4a41-9b86-147f7dd1e242"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:49:25.159719 master-0 kubenswrapper[26425]: I0217 15:49:25.159642 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:49:25.159719 master-0 kubenswrapper[26425]: I0217 15:49:25.159716 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wdvf\" (UniqueName: \"kubernetes.io/projected/62ef05a2-b338-4a41-9b86-147f7dd1e242-kube-api-access-9wdvf\") on node \"master-0\" DevicePath \"\"" Feb 17 15:49:25.159719 master-0 kubenswrapper[26425]: I0217 15:49:25.159727 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:49:25.159719 master-0 kubenswrapper[26425]: I0217 15:49:25.159736 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ef05a2-b338-4a41-9b86-147f7dd1e242-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:49:25.482731 master-0 kubenswrapper[26425]: I0217 15:49:25.482639 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8gbxf" event={"ID":"62ef05a2-b338-4a41-9b86-147f7dd1e242","Type":"ContainerDied","Data":"7c18aca93825cc0aa6be7c292cdb8cb6f8be41d52805e4fd5aba218a85c587bc"} Feb 17 15:49:25.482731 master-0 kubenswrapper[26425]: I0217 15:49:25.482689 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c18aca93825cc0aa6be7c292cdb8cb6f8be41d52805e4fd5aba218a85c587bc" Feb 17 15:49:25.484023 master-0 kubenswrapper[26425]: I0217 15:49:25.482738 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8gbxf" Feb 17 15:49:25.491481 master-0 kubenswrapper[26425]: I0217 15:49:25.490661 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"5ca87f26-e544-4259-a902-8a8fb4834a4e","Type":"ContainerStarted","Data":"0962a714162ed0f1c75060a0b6da2511617a6517c8718a8b33f6038037ea711a"} Feb 17 15:49:25.491481 master-0 kubenswrapper[26425]: I0217 15:49:25.490930 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 17 15:49:25.551971 master-0 kubenswrapper[26425]: I0217 15:49:25.549938 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=8.549915396 podStartE2EDuration="8.549915396s" podCreationTimestamp="2026-02-17 15:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:49:25.5392561 +0000 UTC m=+2027.430979938" watchObservedRunningTime="2026-02-17 15:49:25.549915396 +0000 UTC m=+2027.441639234" Feb 17 15:49:25.596665 master-0 kubenswrapper[26425]: I0217 15:49:25.596246 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 15:49:25.602779 master-0 kubenswrapper[26425]: E0217 15:49:25.602700 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ef05a2-b338-4a41-9b86-147f7dd1e242" containerName="nova-cell0-conductor-db-sync" Feb 17 15:49:25.602779 master-0 kubenswrapper[26425]: I0217 15:49:25.602765 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ef05a2-b338-4a41-9b86-147f7dd1e242" containerName="nova-cell0-conductor-db-sync" Feb 17 15:49:25.603449 master-0 kubenswrapper[26425]: I0217 15:49:25.603407 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ef05a2-b338-4a41-9b86-147f7dd1e242" containerName="nova-cell0-conductor-db-sync" 
Feb 17 15:49:25.604995 master-0 kubenswrapper[26425]: I0217 15:49:25.604934 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 15:49:25.617698 master-0 kubenswrapper[26425]: I0217 15:49:25.614272 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 15:49:25.643098 master-0 kubenswrapper[26425]: I0217 15:49:25.643039 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 15:49:25.692957 master-0 kubenswrapper[26425]: I0217 15:49:25.692880 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495829-05d5-428e-a6fc-22a24b955f2a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"96495829-05d5-428e-a6fc-22a24b955f2a\") " pod="openstack/nova-cell0-conductor-0" Feb 17 15:49:25.693332 master-0 kubenswrapper[26425]: I0217 15:49:25.693305 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96495829-05d5-428e-a6fc-22a24b955f2a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"96495829-05d5-428e-a6fc-22a24b955f2a\") " pod="openstack/nova-cell0-conductor-0" Feb 17 15:49:25.693509 master-0 kubenswrapper[26425]: I0217 15:49:25.693487 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86bbs\" (UniqueName: \"kubernetes.io/projected/96495829-05d5-428e-a6fc-22a24b955f2a-kube-api-access-86bbs\") pod \"nova-cell0-conductor-0\" (UID: \"96495829-05d5-428e-a6fc-22a24b955f2a\") " pod="openstack/nova-cell0-conductor-0" Feb 17 15:49:25.799809 master-0 kubenswrapper[26425]: I0217 15:49:25.796173 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/96495829-05d5-428e-a6fc-22a24b955f2a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"96495829-05d5-428e-a6fc-22a24b955f2a\") " pod="openstack/nova-cell0-conductor-0" Feb 17 15:49:25.799809 master-0 kubenswrapper[26425]: I0217 15:49:25.796250 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96495829-05d5-428e-a6fc-22a24b955f2a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"96495829-05d5-428e-a6fc-22a24b955f2a\") " pod="openstack/nova-cell0-conductor-0" Feb 17 15:49:25.799809 master-0 kubenswrapper[26425]: I0217 15:49:25.796281 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86bbs\" (UniqueName: \"kubernetes.io/projected/96495829-05d5-428e-a6fc-22a24b955f2a-kube-api-access-86bbs\") pod \"nova-cell0-conductor-0\" (UID: \"96495829-05d5-428e-a6fc-22a24b955f2a\") " pod="openstack/nova-cell0-conductor-0" Feb 17 15:49:25.813480 master-0 kubenswrapper[26425]: I0217 15:49:25.804246 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495829-05d5-428e-a6fc-22a24b955f2a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"96495829-05d5-428e-a6fc-22a24b955f2a\") " pod="openstack/nova-cell0-conductor-0" Feb 17 15:49:25.824805 master-0 kubenswrapper[26425]: I0217 15:49:25.824167 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96495829-05d5-428e-a6fc-22a24b955f2a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"96495829-05d5-428e-a6fc-22a24b955f2a\") " pod="openstack/nova-cell0-conductor-0" Feb 17 15:49:25.831050 master-0 kubenswrapper[26425]: I0217 15:49:25.830983 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86bbs\" (UniqueName: 
\"kubernetes.io/projected/96495829-05d5-428e-a6fc-22a24b955f2a-kube-api-access-86bbs\") pod \"nova-cell0-conductor-0\" (UID: \"96495829-05d5-428e-a6fc-22a24b955f2a\") " pod="openstack/nova-cell0-conductor-0" Feb 17 15:49:25.937551 master-0 kubenswrapper[26425]: I0217 15:49:25.936016 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 15:49:26.422562 master-0 kubenswrapper[26425]: I0217 15:49:26.422509 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 15:49:26.429045 master-0 kubenswrapper[26425]: W0217 15:49:26.429009 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96495829_05d5_428e_a6fc_22a24b955f2a.slice/crio-82224b0e145ef73d6184b9b28f615a0a471f2a573dbc4a1579e52b266bb35cb2 WatchSource:0}: Error finding container 82224b0e145ef73d6184b9b28f615a0a471f2a573dbc4a1579e52b266bb35cb2: Status 404 returned error can't find the container with id 82224b0e145ef73d6184b9b28f615a0a471f2a573dbc4a1579e52b266bb35cb2 Feb 17 15:49:26.500806 master-0 kubenswrapper[26425]: I0217 15:49:26.500715 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"96495829-05d5-428e-a6fc-22a24b955f2a","Type":"ContainerStarted","Data":"82224b0e145ef73d6184b9b28f615a0a471f2a573dbc4a1579e52b266bb35cb2"} Feb 17 15:49:26.501320 master-0 kubenswrapper[26425]: I0217 15:49:26.501256 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 17 15:49:27.516750 master-0 kubenswrapper[26425]: I0217 15:49:27.516669 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"96495829-05d5-428e-a6fc-22a24b955f2a","Type":"ContainerStarted","Data":"ece026ff8be5d77ca81295e08ee53d9bc9d1de44659f29a7c7f8ee24f9437be5"} Feb 17 15:49:27.558079 master-0 
kubenswrapper[26425]: I0217 15:49:27.557960 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 17 15:49:27.558694 master-0 kubenswrapper[26425]: I0217 15:49:27.558256 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.5582358689999998 podStartE2EDuration="2.558235869s" podCreationTimestamp="2026-02-17 15:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:49:27.551923128 +0000 UTC m=+2029.443646956" watchObservedRunningTime="2026-02-17 15:49:27.558235869 +0000 UTC m=+2029.449959677" Feb 17 15:49:28.009406 master-0 kubenswrapper[26425]: I0217 15:49:28.009340 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 17 15:49:28.009873 master-0 kubenswrapper[26425]: I0217 15:49:28.009438 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 17 15:49:28.009873 master-0 kubenswrapper[26425]: I0217 15:49:28.009500 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 17 15:49:28.009873 master-0 kubenswrapper[26425]: I0217 15:49:28.009514 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 17 15:49:28.030288 master-0 kubenswrapper[26425]: I0217 15:49:28.030063 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 17 15:49:28.033273 master-0 kubenswrapper[26425]: I0217 15:49:28.033159 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 17 15:49:28.529615 master-0 kubenswrapper[26425]: I0217 15:49:28.529532 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell0-conductor-0" Feb 17 15:49:28.530668 master-0 kubenswrapper[26425]: I0217 15:49:28.530630 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 17 15:49:28.537952 master-0 kubenswrapper[26425]: I0217 15:49:28.537791 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 17 15:49:28.541676 master-0 kubenswrapper[26425]: I0217 15:49:28.541643 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 17 15:49:35.971494 master-0 kubenswrapper[26425]: I0217 15:49:35.971417 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 17 15:49:36.496483 master-0 kubenswrapper[26425]: I0217 15:49:36.490915 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-9btmx"] Feb 17 15:49:36.496483 master-0 kubenswrapper[26425]: I0217 15:49:36.494580 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:36.507431 master-0 kubenswrapper[26425]: I0217 15:49:36.499387 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 17 15:49:36.507431 master-0 kubenswrapper[26425]: I0217 15:49:36.500121 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 17 15:49:36.536391 master-0 kubenswrapper[26425]: I0217 15:49:36.530414 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9btmx"] Feb 17 15:49:36.597741 master-0 kubenswrapper[26425]: I0217 15:49:36.597676 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9btmx\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:36.597966 master-0 kubenswrapper[26425]: I0217 15:49:36.597770 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-scripts\") pod \"nova-cell0-cell-mapping-9btmx\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:36.597966 master-0 kubenswrapper[26425]: I0217 15:49:36.597795 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-config-data\") pod \"nova-cell0-cell-mapping-9btmx\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:36.597966 master-0 kubenswrapper[26425]: I0217 15:49:36.597941 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs2r2\" (UniqueName: \"kubernetes.io/projected/a5964ec6-84ef-4164-8701-252638ec2109-kube-api-access-bs2r2\") pod \"nova-cell0-cell-mapping-9btmx\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:36.627121 master-0 kubenswrapper[26425]: I0217 15:49:36.627042 26425 generic.go:334] "Generic (PLEG): container finished" podID="1c26c340-473b-49c9-a62f-1915fac7b655" containerID="8a9973a7ff3bb106d664be46a709e58a5e474e75a7df17f0390455d398f7d950" exitCode=0 Feb 17 15:49:36.627121 master-0 kubenswrapper[26425]: I0217 15:49:36.627093 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c26c340-473b-49c9-a62f-1915fac7b655","Type":"ContainerDied","Data":"8a9973a7ff3bb106d664be46a709e58a5e474e75a7df17f0390455d398f7d950"} Feb 17 15:49:36.718974 master-0 kubenswrapper[26425]: I0217 15:49:36.718137 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9btmx\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:36.718974 master-0 kubenswrapper[26425]: I0217 15:49:36.718255 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-scripts\") pod \"nova-cell0-cell-mapping-9btmx\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:36.719242 master-0 kubenswrapper[26425]: I0217 15:49:36.718293 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-config-data\") pod 
\"nova-cell0-cell-mapping-9btmx\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:36.719784 master-0 kubenswrapper[26425]: I0217 15:49:36.719329 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs2r2\" (UniqueName: \"kubernetes.io/projected/a5964ec6-84ef-4164-8701-252638ec2109-kube-api-access-bs2r2\") pod \"nova-cell0-cell-mapping-9btmx\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:36.724589 master-0 kubenswrapper[26425]: I0217 15:49:36.721666 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 17 15:49:36.724589 master-0 kubenswrapper[26425]: I0217 15:49:36.724110 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 17 15:49:36.730577 master-0 kubenswrapper[26425]: I0217 15:49:36.727704 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data" Feb 17 15:49:36.731810 master-0 kubenswrapper[26425]: I0217 15:49:36.731747 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-config-data\") pod \"nova-cell0-cell-mapping-9btmx\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:36.735874 master-0 kubenswrapper[26425]: I0217 15:49:36.735601 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-scripts\") pod \"nova-cell0-cell-mapping-9btmx\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:36.739769 master-0 kubenswrapper[26425]: I0217 15:49:36.739677 26425 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 17 15:49:36.795599 master-0 kubenswrapper[26425]: I0217 15:49:36.794900 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs2r2\" (UniqueName: \"kubernetes.io/projected/a5964ec6-84ef-4164-8701-252638ec2109-kube-api-access-bs2r2\") pod \"nova-cell0-cell-mapping-9btmx\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:36.799509 master-0 kubenswrapper[26425]: I0217 15:49:36.798249 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9btmx\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:36.831476 master-0 kubenswrapper[26425]: I0217 15:49:36.826212 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 15:49:36.831476 master-0 kubenswrapper[26425]: I0217 15:49:36.828917 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 15:49:36.847474 master-0 kubenswrapper[26425]: I0217 15:49:36.840008 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 15:49:36.847474 master-0 kubenswrapper[26425]: I0217 15:49:36.841250 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jpn4\" (UniqueName: \"kubernetes.io/projected/04ef5635-e3f9-48a4-a474-2927a611d808-kube-api-access-4jpn4\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"04ef5635-e3f9-48a4-a474-2927a611d808\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 17 15:49:36.847474 master-0 kubenswrapper[26425]: I0217 15:49:36.841399 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04ef5635-e3f9-48a4-a474-2927a611d808-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"04ef5635-e3f9-48a4-a474-2927a611d808\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 17 15:49:36.847474 master-0 kubenswrapper[26425]: I0217 15:49:36.843784 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ef5635-e3f9-48a4-a474-2927a611d808-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"04ef5635-e3f9-48a4-a474-2927a611d808\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 17 15:49:36.847474 master-0 kubenswrapper[26425]: I0217 15:49:36.845022 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:36.870472 master-0 kubenswrapper[26425]: I0217 15:49:36.869664 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 15:49:36.886470 master-0 kubenswrapper[26425]: I0217 15:49:36.886048 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 15:49:36.892473 master-0 kubenswrapper[26425]: I0217 15:49:36.887398 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:49:36.909477 master-0 kubenswrapper[26425]: I0217 15:49:36.903260 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 15:49:36.947474 master-0 kubenswrapper[26425]: I0217 15:49:36.946433 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ef5635-e3f9-48a4-a474-2927a611d808-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"04ef5635-e3f9-48a4-a474-2927a611d808\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 17 15:49:36.947474 master-0 kubenswrapper[26425]: I0217 15:49:36.946557 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jpn4\" (UniqueName: \"kubernetes.io/projected/04ef5635-e3f9-48a4-a474-2927a611d808-kube-api-access-4jpn4\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"04ef5635-e3f9-48a4-a474-2927a611d808\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 17 15:49:36.947474 master-0 kubenswrapper[26425]: I0217 15:49:36.947103 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad573e8-9c29-4564-b2ea-2c75467ff750-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " pod="openstack/nova-api-0" Feb 17 15:49:36.947474 master-0 kubenswrapper[26425]: I0217 15:49:36.947138 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f1003dc-30ca-4cd8-9489-c37262a5f45e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f1003dc-30ca-4cd8-9489-c37262a5f45e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:49:36.947474 master-0 kubenswrapper[26425]: I0217 15:49:36.947159 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04ef5635-e3f9-48a4-a474-2927a611d808-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"04ef5635-e3f9-48a4-a474-2927a611d808\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 17 15:49:36.947474 master-0 kubenswrapper[26425]: I0217 15:49:36.947174 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hss2\" (UniqueName: \"kubernetes.io/projected/8ad573e8-9c29-4564-b2ea-2c75467ff750-kube-api-access-6hss2\") pod \"nova-api-0\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " pod="openstack/nova-api-0" Feb 17 15:49:36.947474 master-0 kubenswrapper[26425]: I0217 15:49:36.947203 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad573e8-9c29-4564-b2ea-2c75467ff750-config-data\") pod \"nova-api-0\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " pod="openstack/nova-api-0" Feb 17 15:49:36.947474 master-0 kubenswrapper[26425]: I0217 15:49:36.947240 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1003dc-30ca-4cd8-9489-c37262a5f45e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"9f1003dc-30ca-4cd8-9489-c37262a5f45e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:49:36.959329 master-0 kubenswrapper[26425]: I0217 15:49:36.948053 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwpvf\" (UniqueName: \"kubernetes.io/projected/9f1003dc-30ca-4cd8-9489-c37262a5f45e-kube-api-access-vwpvf\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f1003dc-30ca-4cd8-9489-c37262a5f45e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:49:36.959329 master-0 kubenswrapper[26425]: I0217 15:49:36.948105 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ad573e8-9c29-4564-b2ea-2c75467ff750-logs\") pod \"nova-api-0\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " pod="openstack/nova-api-0" Feb 17 15:49:36.959329 master-0 kubenswrapper[26425]: I0217 15:49:36.954235 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ef5635-e3f9-48a4-a474-2927a611d808-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"04ef5635-e3f9-48a4-a474-2927a611d808\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 17 15:49:36.977486 master-0 kubenswrapper[26425]: I0217 15:49:36.971248 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 15:49:36.977486 master-0 kubenswrapper[26425]: I0217 15:49:36.973559 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 15:49:36.977486 master-0 kubenswrapper[26425]: I0217 15:49:36.977010 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 15:49:36.977486 master-0 kubenswrapper[26425]: I0217 15:49:36.977220 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04ef5635-e3f9-48a4-a474-2927a611d808-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"04ef5635-e3f9-48a4-a474-2927a611d808\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 17 15:49:36.989485 master-0 kubenswrapper[26425]: I0217 15:49:36.987278 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 15:49:37.004896 master-0 kubenswrapper[26425]: I0217 15:49:36.997218 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jpn4\" (UniqueName: \"kubernetes.io/projected/04ef5635-e3f9-48a4-a474-2927a611d808-kube-api-access-4jpn4\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"04ef5635-e3f9-48a4-a474-2927a611d808\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 17 15:49:37.020479 master-0 kubenswrapper[26425]: I0217 15:49:37.019041 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 15:49:37.076749 master-0 kubenswrapper[26425]: I0217 15:49:37.076541 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:49:37.081348 master-0 kubenswrapper[26425]: I0217 15:49:37.078729 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 15:49:37.081348 master-0 kubenswrapper[26425]: I0217 15:49:37.079524 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ad573e8-9c29-4564-b2ea-2c75467ff750-logs\") pod \"nova-api-0\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " pod="openstack/nova-api-0" Feb 17 15:49:37.081348 master-0 kubenswrapper[26425]: I0217 15:49:37.080049 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1192e109-48bb-4d67-a347-33ca457d8368-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1192e109-48bb-4d67-a347-33ca457d8368\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:37.081348 master-0 kubenswrapper[26425]: I0217 15:49:37.080093 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad573e8-9c29-4564-b2ea-2c75467ff750-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " pod="openstack/nova-api-0" Feb 17 15:49:37.081348 master-0 kubenswrapper[26425]: I0217 15:49:37.080139 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hss2\" (UniqueName: \"kubernetes.io/projected/8ad573e8-9c29-4564-b2ea-2c75467ff750-kube-api-access-6hss2\") pod \"nova-api-0\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " pod="openstack/nova-api-0" Feb 17 15:49:37.081348 master-0 kubenswrapper[26425]: I0217 15:49:37.080155 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f1003dc-30ca-4cd8-9489-c37262a5f45e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f1003dc-30ca-4cd8-9489-c37262a5f45e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:49:37.081348 master-0 kubenswrapper[26425]: 
I0217 15:49:37.080349 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1192e109-48bb-4d67-a347-33ca457d8368-config-data\") pod \"nova-scheduler-0\" (UID: \"1192e109-48bb-4d67-a347-33ca457d8368\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:37.081348 master-0 kubenswrapper[26425]: I0217 15:49:37.080381 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvdk9\" (UniqueName: \"kubernetes.io/projected/1192e109-48bb-4d67-a347-33ca457d8368-kube-api-access-fvdk9\") pod \"nova-scheduler-0\" (UID: \"1192e109-48bb-4d67-a347-33ca457d8368\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:37.081957 master-0 kubenswrapper[26425]: I0217 15:49:37.081648 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad573e8-9c29-4564-b2ea-2c75467ff750-config-data\") pod \"nova-api-0\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " pod="openstack/nova-api-0" Feb 17 15:49:37.081957 master-0 kubenswrapper[26425]: I0217 15:49:37.081705 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1003dc-30ca-4cd8-9489-c37262a5f45e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f1003dc-30ca-4cd8-9489-c37262a5f45e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:49:37.081957 master-0 kubenswrapper[26425]: I0217 15:49:37.081779 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwpvf\" (UniqueName: \"kubernetes.io/projected/9f1003dc-30ca-4cd8-9489-c37262a5f45e-kube-api-access-vwpvf\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f1003dc-30ca-4cd8-9489-c37262a5f45e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:49:37.085897 master-0 kubenswrapper[26425]: I0217 15:49:37.085827 26425 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 15:49:37.088360 master-0 kubenswrapper[26425]: I0217 15:49:37.088290 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:49:37.092650 master-0 kubenswrapper[26425]: I0217 15:49:37.091365 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1003dc-30ca-4cd8-9489-c37262a5f45e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f1003dc-30ca-4cd8-9489-c37262a5f45e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:49:37.123004 master-0 kubenswrapper[26425]: I0217 15:49:37.122437 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ad573e8-9c29-4564-b2ea-2c75467ff750-logs\") pod \"nova-api-0\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " pod="openstack/nova-api-0" Feb 17 15:49:37.124423 master-0 kubenswrapper[26425]: I0217 15:49:37.124187 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad573e8-9c29-4564-b2ea-2c75467ff750-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " pod="openstack/nova-api-0" Feb 17 15:49:37.134054 master-0 kubenswrapper[26425]: I0217 15:49:37.134006 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hss2\" (UniqueName: \"kubernetes.io/projected/8ad573e8-9c29-4564-b2ea-2c75467ff750-kube-api-access-6hss2\") pod \"nova-api-0\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " pod="openstack/nova-api-0" Feb 17 15:49:37.137544 master-0 kubenswrapper[26425]: I0217 15:49:37.137131 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad573e8-9c29-4564-b2ea-2c75467ff750-config-data\") pod 
\"nova-api-0\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " pod="openstack/nova-api-0" Feb 17 15:49:37.159763 master-0 kubenswrapper[26425]: I0217 15:49:37.159712 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f1003dc-30ca-4cd8-9489-c37262a5f45e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f1003dc-30ca-4cd8-9489-c37262a5f45e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:49:37.162209 master-0 kubenswrapper[26425]: I0217 15:49:37.157091 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwpvf\" (UniqueName: \"kubernetes.io/projected/9f1003dc-30ca-4cd8-9489-c37262a5f45e-kube-api-access-vwpvf\") pod \"nova-cell1-novncproxy-0\" (UID: \"9f1003dc-30ca-4cd8-9489-c37262a5f45e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:49:37.181928 master-0 kubenswrapper[26425]: I0217 15:49:37.181773 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78d5d45447-bfqg5"] Feb 17 15:49:37.184043 master-0 kubenswrapper[26425]: I0217 15:49:37.183986 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l54lc\" (UniqueName: \"kubernetes.io/projected/40b9844a-bf3d-41db-9b63-8507498cb925-kube-api-access-l54lc\") pod \"nova-metadata-0\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " pod="openstack/nova-metadata-0" Feb 17 15:49:37.184216 master-0 kubenswrapper[26425]: I0217 15:49:37.184058 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.184216 master-0 kubenswrapper[26425]: I0217 15:49:37.184083 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40b9844a-bf3d-41db-9b63-8507498cb925-logs\") pod \"nova-metadata-0\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " pod="openstack/nova-metadata-0" Feb 17 15:49:37.184415 master-0 kubenswrapper[26425]: I0217 15:49:37.184377 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1192e109-48bb-4d67-a347-33ca457d8368-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1192e109-48bb-4d67-a347-33ca457d8368\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:37.184958 master-0 kubenswrapper[26425]: I0217 15:49:37.184535 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1192e109-48bb-4d67-a347-33ca457d8368-config-data\") pod \"nova-scheduler-0\" (UID: \"1192e109-48bb-4d67-a347-33ca457d8368\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:37.184958 master-0 kubenswrapper[26425]: I0217 15:49:37.184575 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvdk9\" (UniqueName: \"kubernetes.io/projected/1192e109-48bb-4d67-a347-33ca457d8368-kube-api-access-fvdk9\") pod \"nova-scheduler-0\" (UID: \"1192e109-48bb-4d67-a347-33ca457d8368\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:37.185138 master-0 kubenswrapper[26425]: I0217 15:49:37.185067 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b9844a-bf3d-41db-9b63-8507498cb925-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " pod="openstack/nova-metadata-0" Feb 
17 15:49:37.185269 master-0 kubenswrapper[26425]: I0217 15:49:37.185224 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b9844a-bf3d-41db-9b63-8507498cb925-config-data\") pod \"nova-metadata-0\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " pod="openstack/nova-metadata-0" Feb 17 15:49:37.189890 master-0 kubenswrapper[26425]: I0217 15:49:37.189841 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1192e109-48bb-4d67-a347-33ca457d8368-config-data\") pod \"nova-scheduler-0\" (UID: \"1192e109-48bb-4d67-a347-33ca457d8368\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:37.190339 master-0 kubenswrapper[26425]: I0217 15:49:37.190307 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1192e109-48bb-4d67-a347-33ca457d8368-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1192e109-48bb-4d67-a347-33ca457d8368\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:37.209328 master-0 kubenswrapper[26425]: I0217 15:49:37.209273 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvdk9\" (UniqueName: \"kubernetes.io/projected/1192e109-48bb-4d67-a347-33ca457d8368-kube-api-access-fvdk9\") pod \"nova-scheduler-0\" (UID: \"1192e109-48bb-4d67-a347-33ca457d8368\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:37.263937 master-0 kubenswrapper[26425]: I0217 15:49:37.263869 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78d5d45447-bfqg5"] Feb 17 15:49:37.287120 master-0 kubenswrapper[26425]: I0217 15:49:37.287050 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-config\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" 
(UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.287340 master-0 kubenswrapper[26425]: I0217 15:49:37.287179 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nv5k\" (UniqueName: \"kubernetes.io/projected/b02a0a47-ae20-4062-bd49-80724d6f70fd-kube-api-access-6nv5k\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.287340 master-0 kubenswrapper[26425]: I0217 15:49:37.287209 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b9844a-bf3d-41db-9b63-8507498cb925-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " pod="openstack/nova-metadata-0" Feb 17 15:49:37.287340 master-0 kubenswrapper[26425]: I0217 15:49:37.287311 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-ovsdbserver-sb\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.287586 master-0 kubenswrapper[26425]: I0217 15:49:37.287556 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-dns-svc\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.287671 master-0 kubenswrapper[26425]: I0217 15:49:37.287649 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/40b9844a-bf3d-41db-9b63-8507498cb925-config-data\") pod \"nova-metadata-0\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " pod="openstack/nova-metadata-0" Feb 17 15:49:37.287746 master-0 kubenswrapper[26425]: I0217 15:49:37.287724 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-ovsdbserver-nb\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.287855 master-0 kubenswrapper[26425]: I0217 15:49:37.287826 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l54lc\" (UniqueName: \"kubernetes.io/projected/40b9844a-bf3d-41db-9b63-8507498cb925-kube-api-access-l54lc\") pod \"nova-metadata-0\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " pod="openstack/nova-metadata-0" Feb 17 15:49:37.287981 master-0 kubenswrapper[26425]: I0217 15:49:37.287957 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40b9844a-bf3d-41db-9b63-8507498cb925-logs\") pod \"nova-metadata-0\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " pod="openstack/nova-metadata-0" Feb 17 15:49:37.289614 master-0 kubenswrapper[26425]: I0217 15:49:37.289585 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-dns-swift-storage-0\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.290774 master-0 kubenswrapper[26425]: I0217 15:49:37.290741 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 17 15:49:37.291479 master-0 kubenswrapper[26425]: I0217 15:49:37.291431 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40b9844a-bf3d-41db-9b63-8507498cb925-logs\") pod \"nova-metadata-0\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " pod="openstack/nova-metadata-0" Feb 17 15:49:37.294490 master-0 kubenswrapper[26425]: I0217 15:49:37.294362 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b9844a-bf3d-41db-9b63-8507498cb925-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " pod="openstack/nova-metadata-0" Feb 17 15:49:37.303377 master-0 kubenswrapper[26425]: I0217 15:49:37.303239 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b9844a-bf3d-41db-9b63-8507498cb925-config-data\") pod \"nova-metadata-0\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " pod="openstack/nova-metadata-0" Feb 17 15:49:37.309569 master-0 kubenswrapper[26425]: I0217 15:49:37.309537 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l54lc\" (UniqueName: \"kubernetes.io/projected/40b9844a-bf3d-41db-9b63-8507498cb925-kube-api-access-l54lc\") pod \"nova-metadata-0\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " pod="openstack/nova-metadata-0" Feb 17 15:49:37.403569 master-0 kubenswrapper[26425]: I0217 15:49:37.396807 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-dns-swift-storage-0\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.403569 master-0 
kubenswrapper[26425]: I0217 15:49:37.396954 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-config\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.403569 master-0 kubenswrapper[26425]: I0217 15:49:37.397041 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nv5k\" (UniqueName: \"kubernetes.io/projected/b02a0a47-ae20-4062-bd49-80724d6f70fd-kube-api-access-6nv5k\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.403569 master-0 kubenswrapper[26425]: I0217 15:49:37.397070 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-ovsdbserver-sb\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.403569 master-0 kubenswrapper[26425]: I0217 15:49:37.397129 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-dns-svc\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.403569 master-0 kubenswrapper[26425]: I0217 15:49:37.397199 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-ovsdbserver-nb\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.403569 master-0 
kubenswrapper[26425]: I0217 15:49:37.398527 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-ovsdbserver-nb\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.403569 master-0 kubenswrapper[26425]: I0217 15:49:37.399827 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-dns-swift-storage-0\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.403569 master-0 kubenswrapper[26425]: I0217 15:49:37.400176 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:49:37.403569 master-0 kubenswrapper[26425]: I0217 15:49:37.400519 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-ovsdbserver-sb\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.403569 master-0 kubenswrapper[26425]: I0217 15:49:37.401660 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-config\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.403569 master-0 kubenswrapper[26425]: I0217 15:49:37.401981 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-dns-svc\") pod 
\"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.413298 master-0 kubenswrapper[26425]: I0217 15:49:37.413045 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 15:49:37.417321 master-0 kubenswrapper[26425]: I0217 15:49:37.417254 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nv5k\" (UniqueName: \"kubernetes.io/projected/b02a0a47-ae20-4062-bd49-80724d6f70fd-kube-api-access-6nv5k\") pod \"dnsmasq-dns-78d5d45447-bfqg5\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.460240 master-0 kubenswrapper[26425]: I0217 15:49:37.458728 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 15:49:37.466103 master-0 kubenswrapper[26425]: I0217 15:49:37.466051 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 15:49:37.514983 master-0 kubenswrapper[26425]: I0217 15:49:37.514910 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:37.549239 master-0 kubenswrapper[26425]: I0217 15:49:37.549170 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9btmx"] Feb 17 15:49:37.552632 master-0 kubenswrapper[26425]: W0217 15:49:37.552583 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5964ec6_84ef_4164_8701_252638ec2109.slice/crio-27cf30dd0cd27f8835c9324fada66fe831b98cbf342f4411692b263931c5d57f WatchSource:0}: Error finding container 27cf30dd0cd27f8835c9324fada66fe831b98cbf342f4411692b263931c5d57f: Status 404 returned error can't find the container with id 27cf30dd0cd27f8835c9324fada66fe831b98cbf342f4411692b263931c5d57f Feb 17 15:49:37.697310 master-0 kubenswrapper[26425]: I0217 15:49:37.697264 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9btmx" event={"ID":"a5964ec6-84ef-4164-8701-252638ec2109","Type":"ContainerStarted","Data":"27cf30dd0cd27f8835c9324fada66fe831b98cbf342f4411692b263931c5d57f"} Feb 17 15:49:37.740314 master-0 kubenswrapper[26425]: I0217 15:49:37.740251 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4vxwz"] Feb 17 15:49:37.763076 master-0 kubenswrapper[26425]: I0217 15:49:37.762950 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c26c340-473b-49c9-a62f-1915fac7b655","Type":"ContainerStarted","Data":"af522c52292a0e92c9c755b7a196b75ba6f01dbf8c7a589288611ff864073eb2"} Feb 17 15:49:37.763076 master-0 kubenswrapper[26425]: I0217 15:49:37.763072 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4vxwz" Feb 17 15:49:37.766201 master-0 kubenswrapper[26425]: I0217 15:49:37.765812 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 17 15:49:37.766303 master-0 kubenswrapper[26425]: I0217 15:49:37.766247 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 15:49:37.767757 master-0 kubenswrapper[26425]: I0217 15:49:37.767683 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4vxwz"] Feb 17 15:49:37.811068 master-0 kubenswrapper[26425]: I0217 15:49:37.810498 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-scripts\") pod \"nova-cell1-conductor-db-sync-4vxwz\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") " pod="openstack/nova-cell1-conductor-db-sync-4vxwz" Feb 17 15:49:37.811068 master-0 kubenswrapper[26425]: I0217 15:49:37.810849 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djlb8\" (UniqueName: \"kubernetes.io/projected/0e34b203-c823-4193-99ca-d9d8f89c1c41-kube-api-access-djlb8\") pod \"nova-cell1-conductor-db-sync-4vxwz\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") " pod="openstack/nova-cell1-conductor-db-sync-4vxwz" Feb 17 15:49:37.811068 master-0 kubenswrapper[26425]: I0217 15:49:37.810884 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4vxwz\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") " pod="openstack/nova-cell1-conductor-db-sync-4vxwz" Feb 17 15:49:37.811068 master-0 kubenswrapper[26425]: 
I0217 15:49:37.810908 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-config-data\") pod \"nova-cell1-conductor-db-sync-4vxwz\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") " pod="openstack/nova-cell1-conductor-db-sync-4vxwz" Feb 17 15:49:37.914935 master-0 kubenswrapper[26425]: I0217 15:49:37.914332 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djlb8\" (UniqueName: \"kubernetes.io/projected/0e34b203-c823-4193-99ca-d9d8f89c1c41-kube-api-access-djlb8\") pod \"nova-cell1-conductor-db-sync-4vxwz\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") " pod="openstack/nova-cell1-conductor-db-sync-4vxwz" Feb 17 15:49:37.914935 master-0 kubenswrapper[26425]: I0217 15:49:37.914389 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4vxwz\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") " pod="openstack/nova-cell1-conductor-db-sync-4vxwz" Feb 17 15:49:37.914935 master-0 kubenswrapper[26425]: I0217 15:49:37.914412 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-config-data\") pod \"nova-cell1-conductor-db-sync-4vxwz\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") " pod="openstack/nova-cell1-conductor-db-sync-4vxwz" Feb 17 15:49:37.914935 master-0 kubenswrapper[26425]: I0217 15:49:37.914474 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-scripts\") pod \"nova-cell1-conductor-db-sync-4vxwz\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") " 
pod="openstack/nova-cell1-conductor-db-sync-4vxwz" Feb 17 15:49:37.920357 master-0 kubenswrapper[26425]: I0217 15:49:37.920302 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 17 15:49:37.920729 master-0 kubenswrapper[26425]: I0217 15:49:37.920683 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4vxwz\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") " pod="openstack/nova-cell1-conductor-db-sync-4vxwz" Feb 17 15:49:37.921830 master-0 kubenswrapper[26425]: I0217 15:49:37.921781 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-scripts\") pod \"nova-cell1-conductor-db-sync-4vxwz\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") " pod="openstack/nova-cell1-conductor-db-sync-4vxwz" Feb 17 15:49:37.925398 master-0 kubenswrapper[26425]: I0217 15:49:37.925372 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-config-data\") pod \"nova-cell1-conductor-db-sync-4vxwz\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") " pod="openstack/nova-cell1-conductor-db-sync-4vxwz" Feb 17 15:49:37.933252 master-0 kubenswrapper[26425]: I0217 15:49:37.933193 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djlb8\" (UniqueName: \"kubernetes.io/projected/0e34b203-c823-4193-99ca-d9d8f89c1c41-kube-api-access-djlb8\") pod \"nova-cell1-conductor-db-sync-4vxwz\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") " pod="openstack/nova-cell1-conductor-db-sync-4vxwz" Feb 17 15:49:38.013363 master-0 kubenswrapper[26425]: I0217 15:49:38.013314 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4vxwz" Feb 17 15:49:38.251064 master-0 kubenswrapper[26425]: I0217 15:49:38.250996 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 15:49:38.254581 master-0 kubenswrapper[26425]: W0217 15:49:38.254243 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad573e8_9c29_4564_b2ea_2c75467ff750.slice/crio-47b4994a346f74950a88465427ca52b22a09ceac6910c45ddbdc885939e9627e WatchSource:0}: Error finding container 47b4994a346f74950a88465427ca52b22a09ceac6910c45ddbdc885939e9627e: Status 404 returned error can't find the container with id 47b4994a346f74950a88465427ca52b22a09ceac6910c45ddbdc885939e9627e Feb 17 15:49:38.614318 master-0 kubenswrapper[26425]: I0217 15:49:38.613299 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 15:49:38.657423 master-0 kubenswrapper[26425]: I0217 15:49:38.657347 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 15:49:38.669065 master-0 kubenswrapper[26425]: W0217 15:49:38.669006 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb02a0a47_ae20_4062_bd49_80724d6f70fd.slice/crio-a838040a28e948990b469a3af8ac3a1a8bdecaef357e1b55f9e07ca1aa70b8db WatchSource:0}: Error finding container a838040a28e948990b469a3af8ac3a1a8bdecaef357e1b55f9e07ca1aa70b8db: Status 404 returned error can't find the container with id a838040a28e948990b469a3af8ac3a1a8bdecaef357e1b55f9e07ca1aa70b8db Feb 17 15:49:38.671127 master-0 kubenswrapper[26425]: I0217 15:49:38.671082 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:49:38.687568 master-0 kubenswrapper[26425]: I0217 15:49:38.687501 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-78d5d45447-bfqg5"] Feb 17 15:49:38.783617 master-0 kubenswrapper[26425]: I0217 15:49:38.783535 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" event={"ID":"b02a0a47-ae20-4062-bd49-80724d6f70fd","Type":"ContainerStarted","Data":"a838040a28e948990b469a3af8ac3a1a8bdecaef357e1b55f9e07ca1aa70b8db"} Feb 17 15:49:38.784996 master-0 kubenswrapper[26425]: I0217 15:49:38.784964 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9btmx" event={"ID":"a5964ec6-84ef-4164-8701-252638ec2109","Type":"ContainerStarted","Data":"0137005ba5d60d8db1168f17cfcc284062553c4866d6b7f97c77299378990b3e"} Feb 17 15:49:38.786058 master-0 kubenswrapper[26425]: I0217 15:49:38.786027 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ad573e8-9c29-4564-b2ea-2c75467ff750","Type":"ContainerStarted","Data":"47b4994a346f74950a88465427ca52b22a09ceac6910c45ddbdc885939e9627e"} Feb 17 15:49:38.787323 master-0 kubenswrapper[26425]: I0217 15:49:38.787290 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9f1003dc-30ca-4cd8-9489-c37262a5f45e","Type":"ContainerStarted","Data":"95929ff144f08112eef61dcf4eb00cf2b61ba4630ea152364bb75c434594b156"} Feb 17 15:49:38.788908 master-0 kubenswrapper[26425]: I0217 15:49:38.788822 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40b9844a-bf3d-41db-9b63-8507498cb925","Type":"ContainerStarted","Data":"194fe25db6f24662cd8862b7b7ff7e155361cbb7b0a9ea5bd61572b49c57faee"} Feb 17 15:49:38.791329 master-0 kubenswrapper[26425]: I0217 15:49:38.791273 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4vxwz"] Feb 17 15:49:38.793591 master-0 kubenswrapper[26425]: I0217 15:49:38.793544 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ironic-conductor-0" event={"ID":"1c26c340-473b-49c9-a62f-1915fac7b655","Type":"ContainerStarted","Data":"323e825df45e26297ca165b0b2da30e8beee577cfd1e7c1ba9f62a0c8455c9c7"} Feb 17 15:49:38.795430 master-0 kubenswrapper[26425]: I0217 15:49:38.795394 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"04ef5635-e3f9-48a4-a474-2927a611d808","Type":"ContainerStarted","Data":"452a09993396d7d0129cda5068b873f70d6964c702c2a78632f57dd715a92138"} Feb 17 15:49:38.796637 master-0 kubenswrapper[26425]: I0217 15:49:38.796585 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1192e109-48bb-4d67-a347-33ca457d8368","Type":"ContainerStarted","Data":"f601b30e563447c7d9de7c43b5a95e1766e2449174e3bd70825d80ba33951174"} Feb 17 15:49:38.836556 master-0 kubenswrapper[26425]: I0217 15:49:38.836406 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-9btmx" podStartSLOduration=2.836384616 podStartE2EDuration="2.836384616s" podCreationTimestamp="2026-02-17 15:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:49:38.806754996 +0000 UTC m=+2040.698478834" watchObservedRunningTime="2026-02-17 15:49:38.836384616 +0000 UTC m=+2040.728108434" Feb 17 15:49:39.812854 master-0 kubenswrapper[26425]: I0217 15:49:39.812787 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c26c340-473b-49c9-a62f-1915fac7b655","Type":"ContainerStarted","Data":"fd63d0bff994e4d23b311ec988948a643d4ffcfb3bca5c65c6f79b3d90dc616a"} Feb 17 15:49:39.814405 master-0 kubenswrapper[26425]: I0217 15:49:39.814366 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Feb 17 15:49:39.814494 master-0 kubenswrapper[26425]: I0217 
15:49:39.814419 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Feb 17 15:49:39.816572 master-0 kubenswrapper[26425]: I0217 15:49:39.816519 26425 generic.go:334] "Generic (PLEG): container finished" podID="b02a0a47-ae20-4062-bd49-80724d6f70fd" containerID="723aef3295e4a703c4c0db3247739a020d285be26b92771a83b705d7fe87188e" exitCode=0 Feb 17 15:49:39.816626 master-0 kubenswrapper[26425]: I0217 15:49:39.816588 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" event={"ID":"b02a0a47-ae20-4062-bd49-80724d6f70fd","Type":"ContainerDied","Data":"723aef3295e4a703c4c0db3247739a020d285be26b92771a83b705d7fe87188e"} Feb 17 15:49:39.820196 master-0 kubenswrapper[26425]: I0217 15:49:39.820152 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4vxwz" event={"ID":"0e34b203-c823-4193-99ca-d9d8f89c1c41","Type":"ContainerStarted","Data":"59c85c1e1341b23cfffb0a683db7dd911aa6d9778114ba9881eae9d12be587a1"} Feb 17 15:49:39.820258 master-0 kubenswrapper[26425]: I0217 15:49:39.820198 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4vxwz" event={"ID":"0e34b203-c823-4193-99ca-d9d8f89c1c41","Type":"ContainerStarted","Data":"251ac77deefcfe4349b97d3186b39c586efb49c1a60823368bb2bc020fb9e56b"} Feb 17 15:49:40.024489 master-0 kubenswrapper[26425]: I0217 15:49:40.019481 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-4vxwz" podStartSLOduration=3.019448898 podStartE2EDuration="3.019448898s" podCreationTimestamp="2026-02-17 15:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:49:39.96320384 +0000 UTC m=+2041.854927668" watchObservedRunningTime="2026-02-17 15:49:40.019448898 +0000 UTC m=+2041.911172716" Feb 17 15:49:40.024489 
master-0 kubenswrapper[26425]: I0217 15:49:40.023514 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=63.917810238 podStartE2EDuration="1m45.023480875s" podCreationTimestamp="2026-02-17 15:47:55 +0000 UTC" firstStartedPulling="2026-02-17 15:48:06.176331134 +0000 UTC m=+1948.068054952" lastFinishedPulling="2026-02-17 15:48:47.282001781 +0000 UTC m=+1989.173725589" observedRunningTime="2026-02-17 15:49:39.937967994 +0000 UTC m=+2041.829691832" watchObservedRunningTime="2026-02-17 15:49:40.023480875 +0000 UTC m=+2041.915204703" Feb 17 15:49:40.085770 master-0 kubenswrapper[26425]: I0217 15:49:40.085630 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Feb 17 15:49:40.769773 master-0 kubenswrapper[26425]: I0217 15:49:40.769641 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:49:40.783702 master-0 kubenswrapper[26425]: I0217 15:49:40.783640 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 15:49:41.562250 master-0 kubenswrapper[26425]: I0217 15:49:41.562191 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Feb 17 15:49:41.857104 master-0 kubenswrapper[26425]: I0217 15:49:41.856911 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" event={"ID":"b02a0a47-ae20-4062-bd49-80724d6f70fd","Type":"ContainerStarted","Data":"1cae0dcde166c1253a4174b944e897efb4cfda61044b831b5015242542209a17"} Feb 17 15:49:42.050630 master-0 kubenswrapper[26425]: I0217 15:49:42.050547 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" podStartSLOduration=5.050527557 podStartE2EDuration="5.050527557s" podCreationTimestamp="2026-02-17 15:49:37 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:49:42.027405092 +0000 UTC m=+2043.919128970" watchObservedRunningTime="2026-02-17 15:49:42.050527557 +0000 UTC m=+2043.942251375" Feb 17 15:49:42.517299 master-0 kubenswrapper[26425]: I0217 15:49:42.517244 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:42.877410 master-0 kubenswrapper[26425]: I0217 15:49:42.877331 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ad573e8-9c29-4564-b2ea-2c75467ff750","Type":"ContainerStarted","Data":"8bce62ef7063b7aeb5f260c70ccf751511f855e17a05739162958f01667d0392"} Feb 17 15:49:42.877410 master-0 kubenswrapper[26425]: I0217 15:49:42.877392 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ad573e8-9c29-4564-b2ea-2c75467ff750","Type":"ContainerStarted","Data":"d3c3325609f39969e133f7b53bf2ab23521e96c739f1f3a578ef14799fe77229"} Feb 17 15:49:42.881982 master-0 kubenswrapper[26425]: I0217 15:49:42.881914 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9f1003dc-30ca-4cd8-9489-c37262a5f45e","Type":"ContainerStarted","Data":"28f4ef24e165b5b4e6c5a50daf6d19006c0c4e7e4e77151382f16e6ebff52e8c"} Feb 17 15:49:42.882144 master-0 kubenswrapper[26425]: I0217 15:49:42.882026 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="9f1003dc-30ca-4cd8-9489-c37262a5f45e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://28f4ef24e165b5b4e6c5a50daf6d19006c0c4e7e4e77151382f16e6ebff52e8c" gracePeriod=30 Feb 17 15:49:42.886327 master-0 kubenswrapper[26425]: I0217 15:49:42.886264 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"40b9844a-bf3d-41db-9b63-8507498cb925","Type":"ContainerStarted","Data":"f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084"} Feb 17 15:49:42.886327 master-0 kubenswrapper[26425]: I0217 15:49:42.886317 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40b9844a-bf3d-41db-9b63-8507498cb925","Type":"ContainerStarted","Data":"394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1"} Feb 17 15:49:42.886517 master-0 kubenswrapper[26425]: I0217 15:49:42.886439 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="40b9844a-bf3d-41db-9b63-8507498cb925" containerName="nova-metadata-log" containerID="cri-o://394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1" gracePeriod=30 Feb 17 15:49:42.886582 master-0 kubenswrapper[26425]: I0217 15:49:42.886555 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="40b9844a-bf3d-41db-9b63-8507498cb925" containerName="nova-metadata-metadata" containerID="cri-o://f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084" gracePeriod=30 Feb 17 15:49:42.889357 master-0 kubenswrapper[26425]: I0217 15:49:42.889310 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1192e109-48bb-4d67-a347-33ca457d8368","Type":"ContainerStarted","Data":"1b84f340054057f5f2c0c7db209143f6bdaf6592bbbff4f4331a44c22649ad9f"} Feb 17 15:49:42.914706 master-0 kubenswrapper[26425]: I0217 15:49:42.913549 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.437133403 podStartE2EDuration="6.913529713s" podCreationTimestamp="2026-02-17 15:49:36 +0000 UTC" firstStartedPulling="2026-02-17 15:49:38.263909147 +0000 UTC m=+2040.155632965" lastFinishedPulling="2026-02-17 15:49:41.740305457 +0000 UTC m=+2043.632029275" 
observedRunningTime="2026-02-17 15:49:42.898074673 +0000 UTC m=+2044.789798491" watchObservedRunningTime="2026-02-17 15:49:42.913529713 +0000 UTC m=+2044.805253531" Feb 17 15:49:42.948029 master-0 kubenswrapper[26425]: I0217 15:49:42.947910 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.841960282 podStartE2EDuration="6.947883897s" podCreationTimestamp="2026-02-17 15:49:36 +0000 UTC" firstStartedPulling="2026-02-17 15:49:38.637847336 +0000 UTC m=+2040.529571154" lastFinishedPulling="2026-02-17 15:49:41.743770941 +0000 UTC m=+2043.635494769" observedRunningTime="2026-02-17 15:49:42.925203344 +0000 UTC m=+2044.816927182" watchObservedRunningTime="2026-02-17 15:49:42.947883897 +0000 UTC m=+2044.839607715" Feb 17 15:49:42.952815 master-0 kubenswrapper[26425]: I0217 15:49:42.952516 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.840854066 podStartE2EDuration="6.952490128s" podCreationTimestamp="2026-02-17 15:49:36 +0000 UTC" firstStartedPulling="2026-02-17 15:49:38.63385275 +0000 UTC m=+2040.525576568" lastFinishedPulling="2026-02-17 15:49:41.745488812 +0000 UTC m=+2043.637212630" observedRunningTime="2026-02-17 15:49:42.942124079 +0000 UTC m=+2044.833847927" watchObservedRunningTime="2026-02-17 15:49:42.952490128 +0000 UTC m=+2044.844213956" Feb 17 15:49:42.960135 master-0 kubenswrapper[26425]: I0217 15:49:42.960078 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 17 15:49:43.020410 master-0 kubenswrapper[26425]: I0217 15:49:43.017622 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.885581788 podStartE2EDuration="7.017597299s" podCreationTimestamp="2026-02-17 15:49:36 +0000 UTC" firstStartedPulling="2026-02-17 15:49:38.613362138 +0000 UTC m=+2040.505085946" 
lastFinishedPulling="2026-02-17 15:49:41.745377639 +0000 UTC m=+2043.637101457" observedRunningTime="2026-02-17 15:49:42.965556661 +0000 UTC m=+2044.857280479" watchObservedRunningTime="2026-02-17 15:49:43.017597299 +0000 UTC m=+2044.909321117" Feb 17 15:49:43.610231 master-0 kubenswrapper[26425]: I0217 15:49:43.610186 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 15:49:43.717326 master-0 kubenswrapper[26425]: I0217 15:49:43.712196 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b9844a-bf3d-41db-9b63-8507498cb925-config-data\") pod \"40b9844a-bf3d-41db-9b63-8507498cb925\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " Feb 17 15:49:43.717326 master-0 kubenswrapper[26425]: I0217 15:49:43.712290 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l54lc\" (UniqueName: \"kubernetes.io/projected/40b9844a-bf3d-41db-9b63-8507498cb925-kube-api-access-l54lc\") pod \"40b9844a-bf3d-41db-9b63-8507498cb925\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " Feb 17 15:49:43.717326 master-0 kubenswrapper[26425]: I0217 15:49:43.712495 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40b9844a-bf3d-41db-9b63-8507498cb925-logs\") pod \"40b9844a-bf3d-41db-9b63-8507498cb925\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " Feb 17 15:49:43.717326 master-0 kubenswrapper[26425]: I0217 15:49:43.712588 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b9844a-bf3d-41db-9b63-8507498cb925-combined-ca-bundle\") pod \"40b9844a-bf3d-41db-9b63-8507498cb925\" (UID: \"40b9844a-bf3d-41db-9b63-8507498cb925\") " Feb 17 15:49:43.717326 master-0 kubenswrapper[26425]: I0217 15:49:43.713143 26425 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40b9844a-bf3d-41db-9b63-8507498cb925-logs" (OuterVolumeSpecName: "logs") pod "40b9844a-bf3d-41db-9b63-8507498cb925" (UID: "40b9844a-bf3d-41db-9b63-8507498cb925"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:49:43.717326 master-0 kubenswrapper[26425]: I0217 15:49:43.713520 26425 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40b9844a-bf3d-41db-9b63-8507498cb925-logs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:49:43.717851 master-0 kubenswrapper[26425]: I0217 15:49:43.717441 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b9844a-bf3d-41db-9b63-8507498cb925-kube-api-access-l54lc" (OuterVolumeSpecName: "kube-api-access-l54lc") pod "40b9844a-bf3d-41db-9b63-8507498cb925" (UID: "40b9844a-bf3d-41db-9b63-8507498cb925"). InnerVolumeSpecName "kube-api-access-l54lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:49:43.765106 master-0 kubenswrapper[26425]: I0217 15:49:43.764979 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b9844a-bf3d-41db-9b63-8507498cb925-config-data" (OuterVolumeSpecName: "config-data") pod "40b9844a-bf3d-41db-9b63-8507498cb925" (UID: "40b9844a-bf3d-41db-9b63-8507498cb925"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:49:43.806736 master-0 kubenswrapper[26425]: I0217 15:49:43.806673 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b9844a-bf3d-41db-9b63-8507498cb925-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40b9844a-bf3d-41db-9b63-8507498cb925" (UID: "40b9844a-bf3d-41db-9b63-8507498cb925"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:49:43.815794 master-0 kubenswrapper[26425]: I0217 15:49:43.815742 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b9844a-bf3d-41db-9b63-8507498cb925-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:49:43.815794 master-0 kubenswrapper[26425]: I0217 15:49:43.815785 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b9844a-bf3d-41db-9b63-8507498cb925-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:49:43.815958 master-0 kubenswrapper[26425]: I0217 15:49:43.815798 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l54lc\" (UniqueName: \"kubernetes.io/projected/40b9844a-bf3d-41db-9b63-8507498cb925-kube-api-access-l54lc\") on node \"master-0\" DevicePath \"\"" Feb 17 15:49:43.905991 master-0 kubenswrapper[26425]: I0217 15:49:43.905923 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 15:49:43.906575 master-0 kubenswrapper[26425]: I0217 15:49:43.906487 26425 generic.go:334] "Generic (PLEG): container finished" podID="40b9844a-bf3d-41db-9b63-8507498cb925" containerID="f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084" exitCode=0 Feb 17 15:49:43.906575 master-0 kubenswrapper[26425]: I0217 15:49:43.906543 26425 generic.go:334] "Generic (PLEG): container finished" podID="40b9844a-bf3d-41db-9b63-8507498cb925" containerID="394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1" exitCode=143 Feb 17 15:49:43.906730 master-0 kubenswrapper[26425]: I0217 15:49:43.906682 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40b9844a-bf3d-41db-9b63-8507498cb925","Type":"ContainerDied","Data":"f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084"} Feb 17 15:49:43.906789 master-0 kubenswrapper[26425]: I0217 15:49:43.906733 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40b9844a-bf3d-41db-9b63-8507498cb925","Type":"ContainerDied","Data":"394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1"} Feb 17 15:49:43.906789 master-0 kubenswrapper[26425]: I0217 15:49:43.906750 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40b9844a-bf3d-41db-9b63-8507498cb925","Type":"ContainerDied","Data":"194fe25db6f24662cd8862b7b7ff7e155361cbb7b0a9ea5bd61572b49c57faee"} Feb 17 15:49:43.906789 master-0 kubenswrapper[26425]: I0217 15:49:43.906768 26425 scope.go:117] "RemoveContainer" containerID="f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084" Feb 17 15:49:43.910563 master-0 kubenswrapper[26425]: I0217 15:49:43.910506 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 17 15:49:43.934948 master-0 kubenswrapper[26425]: I0217 15:49:43.934894 26425 
scope.go:117] "RemoveContainer" containerID="394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1" Feb 17 15:49:44.029589 master-0 kubenswrapper[26425]: I0217 15:49:44.029141 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:49:44.042046 master-0 kubenswrapper[26425]: I0217 15:49:44.036948 26425 scope.go:117] "RemoveContainer" containerID="f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084" Feb 17 15:49:44.042046 master-0 kubenswrapper[26425]: E0217 15:49:44.037784 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084\": container with ID starting with f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084 not found: ID does not exist" containerID="f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084" Feb 17 15:49:44.042046 master-0 kubenswrapper[26425]: I0217 15:49:44.037837 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084"} err="failed to get container status \"f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084\": rpc error: code = NotFound desc = could not find container \"f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084\": container with ID starting with f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084 not found: ID does not exist" Feb 17 15:49:44.042046 master-0 kubenswrapper[26425]: I0217 15:49:44.037876 26425 scope.go:117] "RemoveContainer" containerID="394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1" Feb 17 15:49:44.042046 master-0 kubenswrapper[26425]: E0217 15:49:44.038206 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1\": container with ID starting with 394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1 not found: ID does not exist" containerID="394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1" Feb 17 15:49:44.042046 master-0 kubenswrapper[26425]: I0217 15:49:44.038246 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1"} err="failed to get container status \"394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1\": rpc error: code = NotFound desc = could not find container \"394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1\": container with ID starting with 394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1 not found: ID does not exist" Feb 17 15:49:44.042046 master-0 kubenswrapper[26425]: I0217 15:49:44.038269 26425 scope.go:117] "RemoveContainer" containerID="f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084" Feb 17 15:49:44.042046 master-0 kubenswrapper[26425]: I0217 15:49:44.038712 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084"} err="failed to get container status \"f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084\": rpc error: code = NotFound desc = could not find container \"f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084\": container with ID starting with f7c936a4f8c11fc82b53da145fc6bed551984eabc98cfeb15dfb0cdbedc2c084 not found: ID does not exist" Feb 17 15:49:44.042046 master-0 kubenswrapper[26425]: I0217 15:49:44.038755 26425 scope.go:117] "RemoveContainer" containerID="394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1" Feb 17 15:49:44.042046 master-0 kubenswrapper[26425]: I0217 15:49:44.039068 26425 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1"} err="failed to get container status \"394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1\": rpc error: code = NotFound desc = could not find container \"394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1\": container with ID starting with 394da39edfc5a16545b3796480acdb9be51a8840fe8da2340f090f33f25be2f1 not found: ID does not exist" Feb 17 15:49:44.044778 master-0 kubenswrapper[26425]: I0217 15:49:44.043298 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:49:44.066980 master-0 kubenswrapper[26425]: I0217 15:49:44.066908 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:49:44.067950 master-0 kubenswrapper[26425]: E0217 15:49:44.067904 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b9844a-bf3d-41db-9b63-8507498cb925" containerName="nova-metadata-log" Feb 17 15:49:44.068014 master-0 kubenswrapper[26425]: I0217 15:49:44.067949 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b9844a-bf3d-41db-9b63-8507498cb925" containerName="nova-metadata-log" Feb 17 15:49:44.068014 master-0 kubenswrapper[26425]: E0217 15:49:44.067993 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b9844a-bf3d-41db-9b63-8507498cb925" containerName="nova-metadata-metadata" Feb 17 15:49:44.068014 master-0 kubenswrapper[26425]: I0217 15:49:44.068007 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b9844a-bf3d-41db-9b63-8507498cb925" containerName="nova-metadata-metadata" Feb 17 15:49:44.068545 master-0 kubenswrapper[26425]: I0217 15:49:44.068507 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="40b9844a-bf3d-41db-9b63-8507498cb925" containerName="nova-metadata-log" Feb 17 15:49:44.068599 master-0 kubenswrapper[26425]: I0217 15:49:44.068571 26425 
memory_manager.go:354] "RemoveStaleState removing state" podUID="40b9844a-bf3d-41db-9b63-8507498cb925" containerName="nova-metadata-metadata" Feb 17 15:49:44.071060 master-0 kubenswrapper[26425]: I0217 15:49:44.071015 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 15:49:44.075010 master-0 kubenswrapper[26425]: I0217 15:49:44.074969 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 15:49:44.075807 master-0 kubenswrapper[26425]: I0217 15:49:44.075769 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 15:49:44.082365 master-0 kubenswrapper[26425]: I0217 15:49:44.082309 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:49:44.127670 master-0 kubenswrapper[26425]: I0217 15:49:44.124418 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") " pod="openstack/nova-metadata-0" Feb 17 15:49:44.127670 master-0 kubenswrapper[26425]: I0217 15:49:44.124516 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-config-data\") pod \"nova-metadata-0\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") " pod="openstack/nova-metadata-0" Feb 17 15:49:44.127670 master-0 kubenswrapper[26425]: I0217 15:49:44.124733 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc05d31-d186-4020-bb1c-612ec6e9266d-logs\") pod \"nova-metadata-0\" (UID: 
\"edc05d31-d186-4020-bb1c-612ec6e9266d\") " pod="openstack/nova-metadata-0" Feb 17 15:49:44.127670 master-0 kubenswrapper[26425]: I0217 15:49:44.125505 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hdg8\" (UniqueName: \"kubernetes.io/projected/edc05d31-d186-4020-bb1c-612ec6e9266d-kube-api-access-7hdg8\") pod \"nova-metadata-0\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") " pod="openstack/nova-metadata-0" Feb 17 15:49:44.127670 master-0 kubenswrapper[26425]: I0217 15:49:44.125661 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") " pod="openstack/nova-metadata-0" Feb 17 15:49:44.230022 master-0 kubenswrapper[26425]: I0217 15:49:44.229950 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") " pod="openstack/nova-metadata-0" Feb 17 15:49:44.230249 master-0 kubenswrapper[26425]: I0217 15:49:44.230157 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") " pod="openstack/nova-metadata-0" Feb 17 15:49:44.230249 master-0 kubenswrapper[26425]: I0217 15:49:44.230218 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-config-data\") pod \"nova-metadata-0\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") " 
pod="openstack/nova-metadata-0" Feb 17 15:49:44.230980 master-0 kubenswrapper[26425]: I0217 15:49:44.230909 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc05d31-d186-4020-bb1c-612ec6e9266d-logs\") pod \"nova-metadata-0\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") " pod="openstack/nova-metadata-0" Feb 17 15:49:44.231228 master-0 kubenswrapper[26425]: I0217 15:49:44.231180 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hdg8\" (UniqueName: \"kubernetes.io/projected/edc05d31-d186-4020-bb1c-612ec6e9266d-kube-api-access-7hdg8\") pod \"nova-metadata-0\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") " pod="openstack/nova-metadata-0" Feb 17 15:49:44.234114 master-0 kubenswrapper[26425]: I0217 15:49:44.234052 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") " pod="openstack/nova-metadata-0" Feb 17 15:49:44.234267 master-0 kubenswrapper[26425]: I0217 15:49:44.234214 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") " pod="openstack/nova-metadata-0" Feb 17 15:49:44.235107 master-0 kubenswrapper[26425]: I0217 15:49:44.235066 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-config-data\") pod \"nova-metadata-0\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") " pod="openstack/nova-metadata-0" Feb 17 15:49:44.235270 master-0 kubenswrapper[26425]: I0217 15:49:44.235218 26425 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc05d31-d186-4020-bb1c-612ec6e9266d-logs\") pod \"nova-metadata-0\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") " pod="openstack/nova-metadata-0" Feb 17 15:49:44.250398 master-0 kubenswrapper[26425]: I0217 15:49:44.250323 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hdg8\" (UniqueName: \"kubernetes.io/projected/edc05d31-d186-4020-bb1c-612ec6e9266d-kube-api-access-7hdg8\") pod \"nova-metadata-0\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") " pod="openstack/nova-metadata-0" Feb 17 15:49:44.411232 master-0 kubenswrapper[26425]: I0217 15:49:44.411160 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 15:49:44.413005 master-0 kubenswrapper[26425]: I0217 15:49:44.412932 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40b9844a-bf3d-41db-9b63-8507498cb925" path="/var/lib/kubelet/pods/40b9844a-bf3d-41db-9b63-8507498cb925/volumes" Feb 17 15:49:44.934564 master-0 kubenswrapper[26425]: W0217 15:49:44.934142 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedc05d31_d186_4020_bb1c_612ec6e9266d.slice/crio-207ddc30a5aca4c26911d3f25a6e8fe4c6e0b73db56c05bd39cea94bc48efbdd WatchSource:0}: Error finding container 207ddc30a5aca4c26911d3f25a6e8fe4c6e0b73db56c05bd39cea94bc48efbdd: Status 404 returned error can't find the container with id 207ddc30a5aca4c26911d3f25a6e8fe4c6e0b73db56c05bd39cea94bc48efbdd Feb 17 15:49:44.938234 master-0 kubenswrapper[26425]: I0217 15:49:44.937328 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:49:45.948879 master-0 kubenswrapper[26425]: I0217 15:49:45.948771 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"edc05d31-d186-4020-bb1c-612ec6e9266d","Type":"ContainerStarted","Data":"d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68"} Feb 17 15:49:45.948879 master-0 kubenswrapper[26425]: I0217 15:49:45.948825 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edc05d31-d186-4020-bb1c-612ec6e9266d","Type":"ContainerStarted","Data":"207ddc30a5aca4c26911d3f25a6e8fe4c6e0b73db56c05bd39cea94bc48efbdd"} Feb 17 15:49:46.966205 master-0 kubenswrapper[26425]: I0217 15:49:46.966138 26425 generic.go:334] "Generic (PLEG): container finished" podID="a5964ec6-84ef-4164-8701-252638ec2109" containerID="0137005ba5d60d8db1168f17cfcc284062553c4866d6b7f97c77299378990b3e" exitCode=0 Feb 17 15:49:46.966205 master-0 kubenswrapper[26425]: I0217 15:49:46.966204 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9btmx" event={"ID":"a5964ec6-84ef-4164-8701-252638ec2109","Type":"ContainerDied","Data":"0137005ba5d60d8db1168f17cfcc284062553c4866d6b7f97c77299378990b3e"} Feb 17 15:49:47.400891 master-0 kubenswrapper[26425]: I0217 15:49:47.400816 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:49:47.414861 master-0 kubenswrapper[26425]: I0217 15:49:47.414790 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 15:49:47.414861 master-0 kubenswrapper[26425]: I0217 15:49:47.414856 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 15:49:47.465497 master-0 kubenswrapper[26425]: I0217 15:49:47.462221 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 15:49:47.465497 master-0 kubenswrapper[26425]: I0217 15:49:47.462297 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 15:49:47.500483 
master-0 kubenswrapper[26425]: I0217 15:49:47.499721 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 15:49:47.517483 master-0 kubenswrapper[26425]: I0217 15:49:47.517201 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:49:48.044955 master-0 kubenswrapper[26425]: I0217 15:49:48.044889 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 15:49:48.498755 master-0 kubenswrapper[26425]: I0217 15:49:48.498678 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8ad573e8-9c29-4564-b2ea-2c75467ff750" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.6:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 15:49:48.501427 master-0 kubenswrapper[26425]: I0217 15:49:48.498892 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8ad573e8-9c29-4564-b2ea-2c75467ff750" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.6:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 15:49:50.625733 master-0 kubenswrapper[26425]: I0217 15:49:50.625632 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"] Feb 17 15:49:50.626371 master-0 kubenswrapper[26425]: I0217 15:49:50.625964 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" podUID="644f188a-ec83-482d-8c99-4da13cfc19e3" containerName="dnsmasq-dns" containerID="cri-o://5cadc85396378e7071c4b32c991e870e2e58a70e19c2efe0f7a4d45097ed21e7" gracePeriod=10 Feb 17 15:49:51.038151 master-0 kubenswrapper[26425]: I0217 15:49:51.038073 26425 generic.go:334] "Generic (PLEG): container finished" 
podID="644f188a-ec83-482d-8c99-4da13cfc19e3" containerID="5cadc85396378e7071c4b32c991e870e2e58a70e19c2efe0f7a4d45097ed21e7" exitCode=0 Feb 17 15:49:51.038151 master-0 kubenswrapper[26425]: I0217 15:49:51.038123 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" event={"ID":"644f188a-ec83-482d-8c99-4da13cfc19e3","Type":"ContainerDied","Data":"5cadc85396378e7071c4b32c991e870e2e58a70e19c2efe0f7a4d45097ed21e7"} Feb 17 15:49:51.350253 master-0 kubenswrapper[26425]: I0217 15:49:51.348681 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9btmx" Feb 17 15:49:51.459840 master-0 kubenswrapper[26425]: I0217 15:49:51.459784 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-combined-ca-bundle\") pod \"a5964ec6-84ef-4164-8701-252638ec2109\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " Feb 17 15:49:51.460082 master-0 kubenswrapper[26425]: I0217 15:49:51.459882 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-scripts\") pod \"a5964ec6-84ef-4164-8701-252638ec2109\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " Feb 17 15:49:51.460082 master-0 kubenswrapper[26425]: I0217 15:49:51.460010 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-config-data\") pod \"a5964ec6-84ef-4164-8701-252638ec2109\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " Feb 17 15:49:51.460643 master-0 kubenswrapper[26425]: I0217 15:49:51.460133 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs2r2\" (UniqueName: 
\"kubernetes.io/projected/a5964ec6-84ef-4164-8701-252638ec2109-kube-api-access-bs2r2\") pod \"a5964ec6-84ef-4164-8701-252638ec2109\" (UID: \"a5964ec6-84ef-4164-8701-252638ec2109\") " Feb 17 15:49:51.555798 master-0 kubenswrapper[26425]: I0217 15:49:51.555383 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5964ec6-84ef-4164-8701-252638ec2109-kube-api-access-bs2r2" (OuterVolumeSpecName: "kube-api-access-bs2r2") pod "a5964ec6-84ef-4164-8701-252638ec2109" (UID: "a5964ec6-84ef-4164-8701-252638ec2109"). InnerVolumeSpecName "kube-api-access-bs2r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:49:51.556180 master-0 kubenswrapper[26425]: I0217 15:49:51.556119 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-scripts" (OuterVolumeSpecName: "scripts") pod "a5964ec6-84ef-4164-8701-252638ec2109" (UID: "a5964ec6-84ef-4164-8701-252638ec2109"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:49:51.563057 master-0 kubenswrapper[26425]: I0217 15:49:51.563006 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs2r2\" (UniqueName: \"kubernetes.io/projected/a5964ec6-84ef-4164-8701-252638ec2109-kube-api-access-bs2r2\") on node \"master-0\" DevicePath \"\"" Feb 17 15:49:51.563057 master-0 kubenswrapper[26425]: I0217 15:49:51.563050 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-scripts\") on node \"master-0\" DevicePath \"\"" Feb 17 15:49:51.564904 master-0 kubenswrapper[26425]: I0217 15:49:51.564853 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-config-data" (OuterVolumeSpecName: "config-data") pod "a5964ec6-84ef-4164-8701-252638ec2109" (UID: "a5964ec6-84ef-4164-8701-252638ec2109"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:49:51.565059 master-0 kubenswrapper[26425]: I0217 15:49:51.564977 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5964ec6-84ef-4164-8701-252638ec2109" (UID: "a5964ec6-84ef-4164-8701-252638ec2109"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:49:51.664792 master-0 kubenswrapper[26425]: I0217 15:49:51.664731 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:51.664792 master-0 kubenswrapper[26425]: I0217 15:49:51.664766 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5964ec6-84ef-4164-8701-252638ec2109-config-data\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:51.808068 master-0 kubenswrapper[26425]: I0217 15:49:51.808026 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"
Feb 17 15:49:51.880584 master-0 kubenswrapper[26425]: I0217 15:49:51.868845 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmh2m\" (UniqueName: \"kubernetes.io/projected/644f188a-ec83-482d-8c99-4da13cfc19e3-kube-api-access-nmh2m\") pod \"644f188a-ec83-482d-8c99-4da13cfc19e3\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") "
Feb 17 15:49:51.880584 master-0 kubenswrapper[26425]: I0217 15:49:51.868921 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-ovsdbserver-nb\") pod \"644f188a-ec83-482d-8c99-4da13cfc19e3\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") "
Feb 17 15:49:51.880584 master-0 kubenswrapper[26425]: I0217 15:49:51.869116 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-ovsdbserver-sb\") pod \"644f188a-ec83-482d-8c99-4da13cfc19e3\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") "
Feb 17 15:49:51.880584 master-0 kubenswrapper[26425]: I0217 15:49:51.869220 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-dns-swift-storage-0\") pod \"644f188a-ec83-482d-8c99-4da13cfc19e3\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") "
Feb 17 15:49:51.880584 master-0 kubenswrapper[26425]: I0217 15:49:51.869280 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-dns-svc\") pod \"644f188a-ec83-482d-8c99-4da13cfc19e3\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") "
Feb 17 15:49:51.880584 master-0 kubenswrapper[26425]: I0217 15:49:51.869418 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-config\") pod \"644f188a-ec83-482d-8c99-4da13cfc19e3\" (UID: \"644f188a-ec83-482d-8c99-4da13cfc19e3\") "
Feb 17 15:49:51.887342 master-0 kubenswrapper[26425]: I0217 15:49:51.885377 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644f188a-ec83-482d-8c99-4da13cfc19e3-kube-api-access-nmh2m" (OuterVolumeSpecName: "kube-api-access-nmh2m") pod "644f188a-ec83-482d-8c99-4da13cfc19e3" (UID: "644f188a-ec83-482d-8c99-4da13cfc19e3"). InnerVolumeSpecName "kube-api-access-nmh2m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:49:51.977555 master-0 kubenswrapper[26425]: I0217 15:49:51.974595 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmh2m\" (UniqueName: \"kubernetes.io/projected/644f188a-ec83-482d-8c99-4da13cfc19e3-kube-api-access-nmh2m\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:51.999480 master-0 kubenswrapper[26425]: I0217 15:49:51.998786 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "644f188a-ec83-482d-8c99-4da13cfc19e3" (UID: "644f188a-ec83-482d-8c99-4da13cfc19e3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:49:52.003481 master-0 kubenswrapper[26425]: I0217 15:49:52.000959 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "644f188a-ec83-482d-8c99-4da13cfc19e3" (UID: "644f188a-ec83-482d-8c99-4da13cfc19e3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:49:52.085261 master-0 kubenswrapper[26425]: I0217 15:49:52.085177 26425 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:52.085261 master-0 kubenswrapper[26425]: I0217 15:49:52.085228 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:52.095625 master-0 kubenswrapper[26425]: I0217 15:49:52.093781 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"04ef5635-e3f9-48a4-a474-2927a611d808","Type":"ContainerStarted","Data":"84a7db002ad24d9508968e68ef0dd60222a257f70d30c53bb368ab4881189d08"}
Feb 17 15:49:52.095625 master-0 kubenswrapper[26425]: I0217 15:49:52.094502 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 17 15:49:52.100496 master-0 kubenswrapper[26425]: I0217 15:49:52.097799 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m" event={"ID":"644f188a-ec83-482d-8c99-4da13cfc19e3","Type":"ContainerDied","Data":"806b82ee893ed3b8f67577203f56cf9cb8d57e73a1c9392d332c05514581de4d"}
Feb 17 15:49:52.100496 master-0 kubenswrapper[26425]: I0217 15:49:52.097848 26425 scope.go:117] "RemoveContainer" containerID="5cadc85396378e7071c4b32c991e870e2e58a70e19c2efe0f7a4d45097ed21e7"
Feb 17 15:49:52.100496 master-0 kubenswrapper[26425]: I0217 15:49:52.098004 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"
Feb 17 15:49:52.104620 master-0 kubenswrapper[26425]: I0217 15:49:52.103693 26425 generic.go:334] "Generic (PLEG): container finished" podID="0e34b203-c823-4193-99ca-d9d8f89c1c41" containerID="59c85c1e1341b23cfffb0a683db7dd911aa6d9778114ba9881eae9d12be587a1" exitCode=0
Feb 17 15:49:52.104620 master-0 kubenswrapper[26425]: I0217 15:49:52.103762 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4vxwz" event={"ID":"0e34b203-c823-4193-99ca-d9d8f89c1c41","Type":"ContainerDied","Data":"59c85c1e1341b23cfffb0a683db7dd911aa6d9778114ba9881eae9d12be587a1"}
Feb 17 15:49:52.116984 master-0 kubenswrapper[26425]: I0217 15:49:52.112179 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9btmx" event={"ID":"a5964ec6-84ef-4164-8701-252638ec2109","Type":"ContainerDied","Data":"27cf30dd0cd27f8835c9324fada66fe831b98cbf342f4411692b263931c5d57f"}
Feb 17 15:49:52.116984 master-0 kubenswrapper[26425]: I0217 15:49:52.112228 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27cf30dd0cd27f8835c9324fada66fe831b98cbf342f4411692b263931c5d57f"
Feb 17 15:49:52.116984 master-0 kubenswrapper[26425]: I0217 15:49:52.112298 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9btmx"
Feb 17 15:49:52.131082 master-0 kubenswrapper[26425]: I0217 15:49:52.126533 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "644f188a-ec83-482d-8c99-4da13cfc19e3" (UID: "644f188a-ec83-482d-8c99-4da13cfc19e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:49:52.131082 master-0 kubenswrapper[26425]: I0217 15:49:52.130783 26425 scope.go:117] "RemoveContainer" containerID="92178e5b6355c9585ba7d5edef43f9ac99142846a8de5573fe5ce0e0658e94e3"
Feb 17 15:49:52.138707 master-0 kubenswrapper[26425]: I0217 15:49:52.132268 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edc05d31-d186-4020-bb1c-612ec6e9266d","Type":"ContainerStarted","Data":"2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319"}
Feb 17 15:49:52.138707 master-0 kubenswrapper[26425]: I0217 15:49:52.137387 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=2.6060106320000003 podStartE2EDuration="16.137348046s" podCreationTimestamp="2026-02-17 15:49:36 +0000 UTC" firstStartedPulling="2026-02-17 15:49:37.900538704 +0000 UTC m=+2039.792262522" lastFinishedPulling="2026-02-17 15:49:51.431876078 +0000 UTC m=+2053.323599936" observedRunningTime="2026-02-17 15:49:52.126671248 +0000 UTC m=+2054.018395076" watchObservedRunningTime="2026-02-17 15:49:52.137348046 +0000 UTC m=+2054.029071864"
Feb 17 15:49:52.141725 master-0 kubenswrapper[26425]: I0217 15:49:52.141690 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 17 15:49:52.164169 master-0 kubenswrapper[26425]: I0217 15:49:52.164096 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "644f188a-ec83-482d-8c99-4da13cfc19e3" (UID: "644f188a-ec83-482d-8c99-4da13cfc19e3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:49:52.176162 master-0 kubenswrapper[26425]: I0217 15:49:52.176099 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-config" (OuterVolumeSpecName: "config") pod "644f188a-ec83-482d-8c99-4da13cfc19e3" (UID: "644f188a-ec83-482d-8c99-4da13cfc19e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:49:52.190773 master-0 kubenswrapper[26425]: I0217 15:49:52.190708 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:52.190962 master-0 kubenswrapper[26425]: I0217 15:49:52.190789 26425 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:52.190962 master-0 kubenswrapper[26425]: I0217 15:49:52.190804 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644f188a-ec83-482d-8c99-4da13cfc19e3-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:52.230469 master-0 kubenswrapper[26425]: I0217 15:49:52.230271 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=9.230249087 podStartE2EDuration="9.230249087s" podCreationTimestamp="2026-02-17 15:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:49:52.216138507 +0000 UTC m=+2054.107862345" watchObservedRunningTime="2026-02-17 15:49:52.230249087 +0000 UTC m=+2054.121972905"
Feb 17 15:49:52.480759 master-0 kubenswrapper[26425]: I0217 15:49:52.480614 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"]
Feb 17 15:49:52.496539 master-0 kubenswrapper[26425]: I0217 15:49:52.496373 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m"]
Feb 17 15:49:52.558702 master-0 kubenswrapper[26425]: I0217 15:49:52.557584 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 15:49:52.558702 master-0 kubenswrapper[26425]: I0217 15:49:52.557946 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8ad573e8-9c29-4564-b2ea-2c75467ff750" containerName="nova-api-log" containerID="cri-o://d3c3325609f39969e133f7b53bf2ab23521e96c739f1f3a578ef14799fe77229" gracePeriod=30
Feb 17 15:49:52.558702 master-0 kubenswrapper[26425]: I0217 15:49:52.558531 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8ad573e8-9c29-4564-b2ea-2c75467ff750" containerName="nova-api-api" containerID="cri-o://8bce62ef7063b7aeb5f260c70ccf751511f855e17a05739162958f01667d0392" gracePeriod=30
Feb 17 15:49:52.578498 master-0 kubenswrapper[26425]: I0217 15:49:52.578181 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 15:49:52.578498 master-0 kubenswrapper[26425]: I0217 15:49:52.578414 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1192e109-48bb-4d67-a347-33ca457d8368" containerName="nova-scheduler-scheduler" containerID="cri-o://1b84f340054057f5f2c0c7db209143f6bdaf6592bbbff4f4331a44c22649ad9f" gracePeriod=30
Feb 17 15:49:52.590726 master-0 kubenswrapper[26425]: I0217 15:49:52.590642 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 15:49:53.148075 master-0 kubenswrapper[26425]: I0217 15:49:53.147989 26425 generic.go:334] "Generic (PLEG): container finished" podID="8ad573e8-9c29-4564-b2ea-2c75467ff750" containerID="d3c3325609f39969e133f7b53bf2ab23521e96c739f1f3a578ef14799fe77229" exitCode=143
Feb 17 15:49:53.148701 master-0 kubenswrapper[26425]: I0217 15:49:53.148139 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ad573e8-9c29-4564-b2ea-2c75467ff750","Type":"ContainerDied","Data":"d3c3325609f39969e133f7b53bf2ab23521e96c739f1f3a578ef14799fe77229"}
Feb 17 15:49:53.675478 master-0 kubenswrapper[26425]: I0217 15:49:53.675112 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4vxwz"
Feb 17 15:49:53.728399 master-0 kubenswrapper[26425]: I0217 15:49:53.728298 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-config-data\") pod \"0e34b203-c823-4193-99ca-d9d8f89c1c41\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") "
Feb 17 15:49:53.728651 master-0 kubenswrapper[26425]: I0217 15:49:53.728532 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-scripts\") pod \"0e34b203-c823-4193-99ca-d9d8f89c1c41\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") "
Feb 17 15:49:53.728651 master-0 kubenswrapper[26425]: I0217 15:49:53.728614 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djlb8\" (UniqueName: \"kubernetes.io/projected/0e34b203-c823-4193-99ca-d9d8f89c1c41-kube-api-access-djlb8\") pod \"0e34b203-c823-4193-99ca-d9d8f89c1c41\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") "
Feb 17 15:49:53.728752 master-0 kubenswrapper[26425]: I0217 15:49:53.728728 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-combined-ca-bundle\") pod \"0e34b203-c823-4193-99ca-d9d8f89c1c41\" (UID: \"0e34b203-c823-4193-99ca-d9d8f89c1c41\") "
Feb 17 15:49:53.738844 master-0 kubenswrapper[26425]: I0217 15:49:53.738710 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-scripts" (OuterVolumeSpecName: "scripts") pod "0e34b203-c823-4193-99ca-d9d8f89c1c41" (UID: "0e34b203-c823-4193-99ca-d9d8f89c1c41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:49:53.738844 master-0 kubenswrapper[26425]: I0217 15:49:53.738745 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e34b203-c823-4193-99ca-d9d8f89c1c41-kube-api-access-djlb8" (OuterVolumeSpecName: "kube-api-access-djlb8") pod "0e34b203-c823-4193-99ca-d9d8f89c1c41" (UID: "0e34b203-c823-4193-99ca-d9d8f89c1c41"). InnerVolumeSpecName "kube-api-access-djlb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:49:53.780483 master-0 kubenswrapper[26425]: I0217 15:49:53.779440 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e34b203-c823-4193-99ca-d9d8f89c1c41" (UID: "0e34b203-c823-4193-99ca-d9d8f89c1c41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:49:53.784489 master-0 kubenswrapper[26425]: I0217 15:49:53.783344 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-config-data" (OuterVolumeSpecName: "config-data") pod "0e34b203-c823-4193-99ca-d9d8f89c1c41" (UID: "0e34b203-c823-4193-99ca-d9d8f89c1c41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:49:53.837557 master-0 kubenswrapper[26425]: I0217 15:49:53.837280 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-scripts\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:53.837557 master-0 kubenswrapper[26425]: I0217 15:49:53.837334 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djlb8\" (UniqueName: \"kubernetes.io/projected/0e34b203-c823-4193-99ca-d9d8f89c1c41-kube-api-access-djlb8\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:53.837557 master-0 kubenswrapper[26425]: I0217 15:49:53.837344 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:53.837557 master-0 kubenswrapper[26425]: I0217 15:49:53.837353 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e34b203-c823-4193-99ca-d9d8f89c1c41-config-data\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:54.223338 master-0 kubenswrapper[26425]: I0217 15:49:54.218785 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4vxwz"
Feb 17 15:49:54.223338 master-0 kubenswrapper[26425]: I0217 15:49:54.219259 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4vxwz" event={"ID":"0e34b203-c823-4193-99ca-d9d8f89c1c41","Type":"ContainerDied","Data":"251ac77deefcfe4349b97d3186b39c586efb49c1a60823368bb2bc020fb9e56b"}
Feb 17 15:49:54.223338 master-0 kubenswrapper[26425]: I0217 15:49:54.219356 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251ac77deefcfe4349b97d3186b39c586efb49c1a60823368bb2bc020fb9e56b"
Feb 17 15:49:54.229987 master-0 kubenswrapper[26425]: I0217 15:49:54.229925 26425 generic.go:334] "Generic (PLEG): container finished" podID="1192e109-48bb-4d67-a347-33ca457d8368" containerID="1b84f340054057f5f2c0c7db209143f6bdaf6592bbbff4f4331a44c22649ad9f" exitCode=0
Feb 17 15:49:54.230230 master-0 kubenswrapper[26425]: I0217 15:49:54.230197 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="edc05d31-d186-4020-bb1c-612ec6e9266d" containerName="nova-metadata-log" containerID="cri-o://d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68" gracePeriod=30
Feb 17 15:49:54.230385 master-0 kubenswrapper[26425]: I0217 15:49:54.230306 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="edc05d31-d186-4020-bb1c-612ec6e9266d" containerName="nova-metadata-metadata" containerID="cri-o://2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319" gracePeriod=30
Feb 17 15:49:54.230736 master-0 kubenswrapper[26425]: I0217 15:49:54.230605 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1192e109-48bb-4d67-a347-33ca457d8368","Type":"ContainerDied","Data":"1b84f340054057f5f2c0c7db209143f6bdaf6592bbbff4f4331a44c22649ad9f"}
Feb 17 15:49:54.411216 master-0 kubenswrapper[26425]: I0217 15:49:54.411157 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644f188a-ec83-482d-8c99-4da13cfc19e3" path="/var/lib/kubelet/pods/644f188a-ec83-482d-8c99-4da13cfc19e3/volumes"
Feb 17 15:49:54.411822 master-0 kubenswrapper[26425]: I0217 15:49:54.411793 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 17 15:49:54.411822 master-0 kubenswrapper[26425]: I0217 15:49:54.411819 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 17 15:49:54.477690 master-0 kubenswrapper[26425]: I0217 15:49:54.477437 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 15:49:54.560542 master-0 kubenswrapper[26425]: I0217 15:49:54.560440 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1192e109-48bb-4d67-a347-33ca457d8368-combined-ca-bundle\") pod \"1192e109-48bb-4d67-a347-33ca457d8368\" (UID: \"1192e109-48bb-4d67-a347-33ca457d8368\") "
Feb 17 15:49:54.560767 master-0 kubenswrapper[26425]: I0217 15:49:54.560572 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvdk9\" (UniqueName: \"kubernetes.io/projected/1192e109-48bb-4d67-a347-33ca457d8368-kube-api-access-fvdk9\") pod \"1192e109-48bb-4d67-a347-33ca457d8368\" (UID: \"1192e109-48bb-4d67-a347-33ca457d8368\") "
Feb 17 15:49:54.560767 master-0 kubenswrapper[26425]: I0217 15:49:54.560649 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1192e109-48bb-4d67-a347-33ca457d8368-config-data\") pod \"1192e109-48bb-4d67-a347-33ca457d8368\" (UID: \"1192e109-48bb-4d67-a347-33ca457d8368\") "
Feb 17 15:49:54.568098 master-0 kubenswrapper[26425]: I0217 15:49:54.568007 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1192e109-48bb-4d67-a347-33ca457d8368-kube-api-access-fvdk9" (OuterVolumeSpecName: "kube-api-access-fvdk9") pod "1192e109-48bb-4d67-a347-33ca457d8368" (UID: "1192e109-48bb-4d67-a347-33ca457d8368"). InnerVolumeSpecName "kube-api-access-fvdk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:49:54.624619 master-0 kubenswrapper[26425]: I0217 15:49:54.624507 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1192e109-48bb-4d67-a347-33ca457d8368-config-data" (OuterVolumeSpecName: "config-data") pod "1192e109-48bb-4d67-a347-33ca457d8368" (UID: "1192e109-48bb-4d67-a347-33ca457d8368"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:49:54.625532 master-0 kubenswrapper[26425]: I0217 15:49:54.625430 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1192e109-48bb-4d67-a347-33ca457d8368-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1192e109-48bb-4d67-a347-33ca457d8368" (UID: "1192e109-48bb-4d67-a347-33ca457d8368"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:49:54.664553 master-0 kubenswrapper[26425]: I0217 15:49:54.664451 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvdk9\" (UniqueName: \"kubernetes.io/projected/1192e109-48bb-4d67-a347-33ca457d8368-kube-api-access-fvdk9\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:54.664553 master-0 kubenswrapper[26425]: I0217 15:49:54.664536 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1192e109-48bb-4d67-a347-33ca457d8368-config-data\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:54.664553 master-0 kubenswrapper[26425]: I0217 15:49:54.664550 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1192e109-48bb-4d67-a347-33ca457d8368-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:54.889955 master-0 kubenswrapper[26425]: I0217 15:49:54.889636 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: E0217 15:49:54.890713 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e34b203-c823-4193-99ca-d9d8f89c1c41" containerName="nova-cell1-conductor-db-sync"
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: I0217 15:49:54.890777 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e34b203-c823-4193-99ca-d9d8f89c1c41" containerName="nova-cell1-conductor-db-sync"
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: E0217 15:49:54.890805 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1192e109-48bb-4d67-a347-33ca457d8368" containerName="nova-scheduler-scheduler"
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: I0217 15:49:54.890814 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="1192e109-48bb-4d67-a347-33ca457d8368" containerName="nova-scheduler-scheduler"
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: E0217 15:49:54.890851 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644f188a-ec83-482d-8c99-4da13cfc19e3" containerName="init"
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: I0217 15:49:54.890861 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="644f188a-ec83-482d-8c99-4da13cfc19e3" containerName="init"
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: E0217 15:49:54.890898 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5964ec6-84ef-4164-8701-252638ec2109" containerName="nova-manage"
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: I0217 15:49:54.890907 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5964ec6-84ef-4164-8701-252638ec2109" containerName="nova-manage"
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: E0217 15:49:54.890944 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644f188a-ec83-482d-8c99-4da13cfc19e3" containerName="dnsmasq-dns"
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: I0217 15:49:54.890952 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="644f188a-ec83-482d-8c99-4da13cfc19e3" containerName="dnsmasq-dns"
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: I0217 15:49:54.891239 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e34b203-c823-4193-99ca-d9d8f89c1c41" containerName="nova-cell1-conductor-db-sync"
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: I0217 15:49:54.891265 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5964ec6-84ef-4164-8701-252638ec2109" containerName="nova-manage"
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: I0217 15:49:54.891304 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="644f188a-ec83-482d-8c99-4da13cfc19e3" containerName="dnsmasq-dns"
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: I0217 15:49:54.891331 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="1192e109-48bb-4d67-a347-33ca457d8368" containerName="nova-scheduler-scheduler"
Feb 17 15:49:54.893773 master-0 kubenswrapper[26425]: I0217 15:49:54.893619 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 17 15:49:54.896373 master-0 kubenswrapper[26425]: I0217 15:49:54.895424 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 17 15:49:54.938854 master-0 kubenswrapper[26425]: I0217 15:49:54.938793 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 17 15:49:54.957764 master-0 kubenswrapper[26425]: I0217 15:49:54.957703 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 15:49:54.983428 master-0 kubenswrapper[26425]: I0217 15:49:54.982682 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjjcc\" (UniqueName: \"kubernetes.io/projected/f7a865af-9e86-49f7-86d0-a96be69dec85-kube-api-access-qjjcc\") pod \"nova-cell1-conductor-0\" (UID: \"f7a865af-9e86-49f7-86d0-a96be69dec85\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 15:49:54.983428 master-0 kubenswrapper[26425]: I0217 15:49:54.982945 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a865af-9e86-49f7-86d0-a96be69dec85-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7a865af-9e86-49f7-86d0-a96be69dec85\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 15:49:54.983428 master-0 kubenswrapper[26425]: I0217 15:49:54.983222 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a865af-9e86-49f7-86d0-a96be69dec85-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7a865af-9e86-49f7-86d0-a96be69dec85\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 15:49:55.086475 master-0 kubenswrapper[26425]: I0217 15:49:55.084221 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-nova-metadata-tls-certs\") pod \"edc05d31-d186-4020-bb1c-612ec6e9266d\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") "
Feb 17 15:49:55.086475 master-0 kubenswrapper[26425]: I0217 15:49:55.084282 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-config-data\") pod \"edc05d31-d186-4020-bb1c-612ec6e9266d\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") "
Feb 17 15:49:55.086475 master-0 kubenswrapper[26425]: I0217 15:49:55.084363 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hdg8\" (UniqueName: \"kubernetes.io/projected/edc05d31-d186-4020-bb1c-612ec6e9266d-kube-api-access-7hdg8\") pod \"edc05d31-d186-4020-bb1c-612ec6e9266d\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") "
Feb 17 15:49:55.086475 master-0 kubenswrapper[26425]: I0217 15:49:55.084395 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-combined-ca-bundle\") pod \"edc05d31-d186-4020-bb1c-612ec6e9266d\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") "
Feb 17 15:49:55.086475 master-0 kubenswrapper[26425]: I0217 15:49:55.084442 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc05d31-d186-4020-bb1c-612ec6e9266d-logs\") pod \"edc05d31-d186-4020-bb1c-612ec6e9266d\" (UID: \"edc05d31-d186-4020-bb1c-612ec6e9266d\") "
Feb 17 15:49:55.086475 master-0 kubenswrapper[26425]: I0217 15:49:55.084958 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc05d31-d186-4020-bb1c-612ec6e9266d-logs" (OuterVolumeSpecName: "logs") pod "edc05d31-d186-4020-bb1c-612ec6e9266d" (UID: "edc05d31-d186-4020-bb1c-612ec6e9266d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:49:55.086475 master-0 kubenswrapper[26425]: I0217 15:49:55.085022 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a865af-9e86-49f7-86d0-a96be69dec85-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7a865af-9e86-49f7-86d0-a96be69dec85\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 15:49:55.086475 master-0 kubenswrapper[26425]: I0217 15:49:55.085183 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a865af-9e86-49f7-86d0-a96be69dec85-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7a865af-9e86-49f7-86d0-a96be69dec85\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 15:49:55.086475 master-0 kubenswrapper[26425]: I0217 15:49:55.085269 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjjcc\" (UniqueName: \"kubernetes.io/projected/f7a865af-9e86-49f7-86d0-a96be69dec85-kube-api-access-qjjcc\") pod \"nova-cell1-conductor-0\" (UID: \"f7a865af-9e86-49f7-86d0-a96be69dec85\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 15:49:55.086475 master-0 kubenswrapper[26425]: I0217 15:49:55.085417 26425 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc05d31-d186-4020-bb1c-612ec6e9266d-logs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:55.088507 master-0 kubenswrapper[26425]: I0217 15:49:55.088386 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc05d31-d186-4020-bb1c-612ec6e9266d-kube-api-access-7hdg8" (OuterVolumeSpecName: "kube-api-access-7hdg8") pod "edc05d31-d186-4020-bb1c-612ec6e9266d" (UID: "edc05d31-d186-4020-bb1c-612ec6e9266d"). InnerVolumeSpecName "kube-api-access-7hdg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:49:55.088917 master-0 kubenswrapper[26425]: I0217 15:49:55.088871 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a865af-9e86-49f7-86d0-a96be69dec85-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7a865af-9e86-49f7-86d0-a96be69dec85\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 15:49:55.090582 master-0 kubenswrapper[26425]: I0217 15:49:55.090527 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a865af-9e86-49f7-86d0-a96be69dec85-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7a865af-9e86-49f7-86d0-a96be69dec85\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 15:49:55.115746 master-0 kubenswrapper[26425]: I0217 15:49:55.115677 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-config-data" (OuterVolumeSpecName: "config-data") pod "edc05d31-d186-4020-bb1c-612ec6e9266d" (UID: "edc05d31-d186-4020-bb1c-612ec6e9266d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:49:55.116987 master-0 kubenswrapper[26425]: I0217 15:49:55.116929 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edc05d31-d186-4020-bb1c-612ec6e9266d" (UID: "edc05d31-d186-4020-bb1c-612ec6e9266d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:49:55.141250 master-0 kubenswrapper[26425]: I0217 15:49:55.141191 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjjcc\" (UniqueName: \"kubernetes.io/projected/f7a865af-9e86-49f7-86d0-a96be69dec85-kube-api-access-qjjcc\") pod \"nova-cell1-conductor-0\" (UID: \"f7a865af-9e86-49f7-86d0-a96be69dec85\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 15:49:55.166223 master-0 kubenswrapper[26425]: I0217 15:49:55.166122 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "edc05d31-d186-4020-bb1c-612ec6e9266d" (UID: "edc05d31-d186-4020-bb1c-612ec6e9266d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:49:55.188384 master-0 kubenswrapper[26425]: I0217 15:49:55.188283 26425 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:55.188384 master-0 kubenswrapper[26425]: I0217 15:49:55.188350 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-config-data\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:55.188384 master-0 kubenswrapper[26425]: I0217 15:49:55.188369 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hdg8\" (UniqueName: \"kubernetes.io/projected/edc05d31-d186-4020-bb1c-612ec6e9266d-kube-api-access-7hdg8\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:55.188737 master-0 kubenswrapper[26425]: I0217 15:49:55.188443 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc05d31-d186-4020-bb1c-612ec6e9266d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:49:55.263643 master-0 kubenswrapper[26425]: I0217 15:49:55.263552 26425 generic.go:334] "Generic (PLEG): container finished" podID="edc05d31-d186-4020-bb1c-612ec6e9266d" containerID="2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319" exitCode=0
Feb 17 15:49:55.263643 master-0 kubenswrapper[26425]: I0217 15:49:55.263610 26425 generic.go:334] "Generic (PLEG): container finished" podID="edc05d31-d186-4020-bb1c-612ec6e9266d" containerID="d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68" exitCode=143
Feb 17 15:49:55.264179 master-0 kubenswrapper[26425]: I0217 15:49:55.263703 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edc05d31-d186-4020-bb1c-612ec6e9266d","Type":"ContainerDied","Data":"2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319"}
Feb 17 15:49:55.264179 master-0 kubenswrapper[26425]: I0217 15:49:55.263745 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edc05d31-d186-4020-bb1c-612ec6e9266d","Type":"ContainerDied","Data":"d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68"}
Feb 17 15:49:55.264179 master-0 kubenswrapper[26425]: I0217 15:49:55.263766 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edc05d31-d186-4020-bb1c-612ec6e9266d","Type":"ContainerDied","Data":"207ddc30a5aca4c26911d3f25a6e8fe4c6e0b73db56c05bd39cea94bc48efbdd"}
Feb 17 15:49:55.264179 master-0 kubenswrapper[26425]: I0217 15:49:55.263791 26425 scope.go:117] "RemoveContainer" containerID="2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319"
Feb 17 15:49:55.264179 master-0 kubenswrapper[26425]: I0217 15:49:55.264121 26425 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 15:49:55.269031 master-0 kubenswrapper[26425]: I0217 15:49:55.268965 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 15:49:55.271137 master-0 kubenswrapper[26425]: I0217 15:49:55.271087 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1192e109-48bb-4d67-a347-33ca457d8368","Type":"ContainerDied","Data":"f601b30e563447c7d9de7c43b5a95e1766e2449174e3bd70825d80ba33951174"} Feb 17 15:49:55.271224 master-0 kubenswrapper[26425]: I0217 15:49:55.271199 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 15:49:55.317521 master-0 kubenswrapper[26425]: I0217 15:49:55.317440 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:49:55.371143 master-0 kubenswrapper[26425]: I0217 15:49:55.370827 26425 scope.go:117] "RemoveContainer" containerID="d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68" Feb 17 15:49:55.391643 master-0 kubenswrapper[26425]: I0217 15:49:55.385077 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:49:55.410557 master-0 kubenswrapper[26425]: I0217 15:49:55.409127 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 15:49:55.411984 master-0 kubenswrapper[26425]: I0217 15:49:55.411955 26425 scope.go:117] "RemoveContainer" containerID="2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319" Feb 17 15:49:55.412524 master-0 kubenswrapper[26425]: E0217 15:49:55.412449 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319\": container with ID starting with 
2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319 not found: ID does not exist" containerID="2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319" Feb 17 15:49:55.412603 master-0 kubenswrapper[26425]: I0217 15:49:55.412528 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319"} err="failed to get container status \"2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319\": rpc error: code = NotFound desc = could not find container \"2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319\": container with ID starting with 2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319 not found: ID does not exist" Feb 17 15:49:55.412603 master-0 kubenswrapper[26425]: I0217 15:49:55.412563 26425 scope.go:117] "RemoveContainer" containerID="d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68" Feb 17 15:49:55.412964 master-0 kubenswrapper[26425]: E0217 15:49:55.412937 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68\": container with ID starting with d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68 not found: ID does not exist" containerID="d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68" Feb 17 15:49:55.413104 master-0 kubenswrapper[26425]: I0217 15:49:55.413065 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68"} err="failed to get container status \"d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68\": rpc error: code = NotFound desc = could not find container \"d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68\": container with ID starting with 
d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68 not found: ID does not exist" Feb 17 15:49:55.413201 master-0 kubenswrapper[26425]: I0217 15:49:55.413187 26425 scope.go:117] "RemoveContainer" containerID="2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319" Feb 17 15:49:55.413617 master-0 kubenswrapper[26425]: I0217 15:49:55.413584 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319"} err="failed to get container status \"2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319\": rpc error: code = NotFound desc = could not find container \"2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319\": container with ID starting with 2347ed2d0466ba2b54eee04417dc29c880a355c6eeb98bea78677fc9c8e4e319 not found: ID does not exist" Feb 17 15:49:55.413690 master-0 kubenswrapper[26425]: I0217 15:49:55.413617 26425 scope.go:117] "RemoveContainer" containerID="d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68" Feb 17 15:49:55.413916 master-0 kubenswrapper[26425]: I0217 15:49:55.413894 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68"} err="failed to get container status \"d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68\": rpc error: code = NotFound desc = could not find container \"d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68\": container with ID starting with d09991b95af51a03e8cb1cf954ad0f3e76a20e86b02db469f82f6a29e9c1af68 not found: ID does not exist" Feb 17 15:49:55.414032 master-0 kubenswrapper[26425]: I0217 15:49:55.414019 26425 scope.go:117] "RemoveContainer" containerID="1b84f340054057f5f2c0c7db209143f6bdaf6592bbbff4f4331a44c22649ad9f" Feb 17 15:49:55.424433 master-0 kubenswrapper[26425]: I0217 15:49:55.424384 26425 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:49:55.425147 master-0 kubenswrapper[26425]: E0217 15:49:55.425114 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc05d31-d186-4020-bb1c-612ec6e9266d" containerName="nova-metadata-metadata" Feb 17 15:49:55.425147 master-0 kubenswrapper[26425]: I0217 15:49:55.425142 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc05d31-d186-4020-bb1c-612ec6e9266d" containerName="nova-metadata-metadata" Feb 17 15:49:55.425245 master-0 kubenswrapper[26425]: E0217 15:49:55.425193 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc05d31-d186-4020-bb1c-612ec6e9266d" containerName="nova-metadata-log" Feb 17 15:49:55.425245 master-0 kubenswrapper[26425]: I0217 15:49:55.425203 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc05d31-d186-4020-bb1c-612ec6e9266d" containerName="nova-metadata-log" Feb 17 15:49:55.425544 master-0 kubenswrapper[26425]: I0217 15:49:55.425520 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc05d31-d186-4020-bb1c-612ec6e9266d" containerName="nova-metadata-metadata" Feb 17 15:49:55.425606 master-0 kubenswrapper[26425]: I0217 15:49:55.425548 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc05d31-d186-4020-bb1c-612ec6e9266d" containerName="nova-metadata-log" Feb 17 15:49:55.429366 master-0 kubenswrapper[26425]: I0217 15:49:55.429331 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 15:49:55.430288 master-0 kubenswrapper[26425]: I0217 15:49:55.430237 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 15:49:55.435227 master-0 kubenswrapper[26425]: I0217 15:49:55.433106 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 15:49:55.435227 master-0 kubenswrapper[26425]: I0217 15:49:55.434669 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 15:49:55.444532 master-0 kubenswrapper[26425]: I0217 15:49:55.443335 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:49:55.451931 master-0 kubenswrapper[26425]: I0217 15:49:55.451872 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 15:49:55.453357 master-0 kubenswrapper[26425]: I0217 15:49:55.453331 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 15:49:55.457186 master-0 kubenswrapper[26425]: I0217 15:49:55.457046 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 15:49:55.534989 master-0 kubenswrapper[26425]: I0217 15:49:55.534398 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 15:49:55.600290 master-0 kubenswrapper[26425]: I0217 15:49:55.600189 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3b7ad4-549f-4608-8119-6be98f4eace1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b3b7ad4-549f-4608-8119-6be98f4eace1\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:55.600290 master-0 kubenswrapper[26425]: I0217 15:49:55.600287 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " pod="openstack/nova-metadata-0" Feb 17 15:49:55.600491 master-0 kubenswrapper[26425]: I0217 15:49:55.600362 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx9js\" (UniqueName: \"kubernetes.io/projected/8a2748df-f1f1-44e8-a85d-856492a2af41-kube-api-access-rx9js\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " pod="openstack/nova-metadata-0" Feb 17 15:49:55.600589 master-0 kubenswrapper[26425]: I0217 15:49:55.600534 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-config-data\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " 
pod="openstack/nova-metadata-0" Feb 17 15:49:55.600632 master-0 kubenswrapper[26425]: I0217 15:49:55.600600 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2748df-f1f1-44e8-a85d-856492a2af41-logs\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " pod="openstack/nova-metadata-0" Feb 17 15:49:55.600735 master-0 kubenswrapper[26425]: I0217 15:49:55.600691 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2zgk\" (UniqueName: \"kubernetes.io/projected/1b3b7ad4-549f-4608-8119-6be98f4eace1-kube-api-access-s2zgk\") pod \"nova-scheduler-0\" (UID: \"1b3b7ad4-549f-4608-8119-6be98f4eace1\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:55.600795 master-0 kubenswrapper[26425]: I0217 15:49:55.600755 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3b7ad4-549f-4608-8119-6be98f4eace1-config-data\") pod \"nova-scheduler-0\" (UID: \"1b3b7ad4-549f-4608-8119-6be98f4eace1\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:55.600860 master-0 kubenswrapper[26425]: I0217 15:49:55.600842 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " pod="openstack/nova-metadata-0" Feb 17 15:49:55.704737 master-0 kubenswrapper[26425]: I0217 15:49:55.704656 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2zgk\" (UniqueName: \"kubernetes.io/projected/1b3b7ad4-549f-4608-8119-6be98f4eace1-kube-api-access-s2zgk\") pod \"nova-scheduler-0\" (UID: \"1b3b7ad4-549f-4608-8119-6be98f4eace1\") " 
pod="openstack/nova-scheduler-0" Feb 17 15:49:55.704918 master-0 kubenswrapper[26425]: I0217 15:49:55.704869 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3b7ad4-549f-4608-8119-6be98f4eace1-config-data\") pod \"nova-scheduler-0\" (UID: \"1b3b7ad4-549f-4608-8119-6be98f4eace1\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:55.705110 master-0 kubenswrapper[26425]: I0217 15:49:55.705066 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " pod="openstack/nova-metadata-0" Feb 17 15:49:55.705302 master-0 kubenswrapper[26425]: I0217 15:49:55.705258 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3b7ad4-549f-4608-8119-6be98f4eace1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b3b7ad4-549f-4608-8119-6be98f4eace1\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:55.705372 master-0 kubenswrapper[26425]: I0217 15:49:55.705315 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " pod="openstack/nova-metadata-0" Feb 17 15:49:55.705497 master-0 kubenswrapper[26425]: I0217 15:49:55.705439 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx9js\" (UniqueName: \"kubernetes.io/projected/8a2748df-f1f1-44e8-a85d-856492a2af41-kube-api-access-rx9js\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " pod="openstack/nova-metadata-0" Feb 17 15:49:55.706659 master-0 
kubenswrapper[26425]: I0217 15:49:55.706549 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-config-data\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " pod="openstack/nova-metadata-0" Feb 17 15:49:55.706819 master-0 kubenswrapper[26425]: I0217 15:49:55.706664 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2748df-f1f1-44e8-a85d-856492a2af41-logs\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " pod="openstack/nova-metadata-0" Feb 17 15:49:55.711603 master-0 kubenswrapper[26425]: I0217 15:49:55.711558 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-config-data\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " pod="openstack/nova-metadata-0" Feb 17 15:49:55.711603 master-0 kubenswrapper[26425]: I0217 15:49:55.711575 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3b7ad4-549f-4608-8119-6be98f4eace1-config-data\") pod \"nova-scheduler-0\" (UID: \"1b3b7ad4-549f-4608-8119-6be98f4eace1\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:55.711989 master-0 kubenswrapper[26425]: I0217 15:49:55.711936 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " pod="openstack/nova-metadata-0" Feb 17 15:49:55.713616 master-0 kubenswrapper[26425]: I0217 15:49:55.713560 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1b3b7ad4-549f-4608-8119-6be98f4eace1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b3b7ad4-549f-4608-8119-6be98f4eace1\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:55.714274 master-0 kubenswrapper[26425]: I0217 15:49:55.714173 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2748df-f1f1-44e8-a85d-856492a2af41-logs\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " pod="openstack/nova-metadata-0" Feb 17 15:49:55.719211 master-0 kubenswrapper[26425]: I0217 15:49:55.719170 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " pod="openstack/nova-metadata-0" Feb 17 15:49:55.724732 master-0 kubenswrapper[26425]: I0217 15:49:55.724686 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2zgk\" (UniqueName: \"kubernetes.io/projected/1b3b7ad4-549f-4608-8119-6be98f4eace1-kube-api-access-s2zgk\") pod \"nova-scheduler-0\" (UID: \"1b3b7ad4-549f-4608-8119-6be98f4eace1\") " pod="openstack/nova-scheduler-0" Feb 17 15:49:55.730393 master-0 kubenswrapper[26425]: I0217 15:49:55.730366 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx9js\" (UniqueName: \"kubernetes.io/projected/8a2748df-f1f1-44e8-a85d-856492a2af41-kube-api-access-rx9js\") pod \"nova-metadata-0\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " pod="openstack/nova-metadata-0" Feb 17 15:49:55.749867 master-0 kubenswrapper[26425]: I0217 15:49:55.749808 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 15:49:55.769159 master-0 kubenswrapper[26425]: I0217 15:49:55.769112 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 15:49:55.868492 master-0 kubenswrapper[26425]: I0217 15:49:55.868219 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 15:49:55.876069 master-0 kubenswrapper[26425]: W0217 15:49:55.876001 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7a865af_9e86_49f7_86d0_a96be69dec85.slice/crio-bc10ca0602f58ec01b1377a7e9fb9d77a7150788a37331e2850f19d32cf876ae WatchSource:0}: Error finding container bc10ca0602f58ec01b1377a7e9fb9d77a7150788a37331e2850f19d32cf876ae: Status 404 returned error can't find the container with id bc10ca0602f58ec01b1377a7e9fb9d77a7150788a37331e2850f19d32cf876ae Feb 17 15:49:56.229728 master-0 kubenswrapper[26425]: I0217 15:49:56.229669 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 15:49:56.288356 master-0 kubenswrapper[26425]: I0217 15:49:56.288291 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7a865af-9e86-49f7-86d0-a96be69dec85","Type":"ContainerStarted","Data":"bc10ca0602f58ec01b1377a7e9fb9d77a7150788a37331e2850f19d32cf876ae"} Feb 17 15:49:56.290346 master-0 kubenswrapper[26425]: I0217 15:49:56.290310 26425 generic.go:334] "Generic (PLEG): container finished" podID="8ad573e8-9c29-4564-b2ea-2c75467ff750" containerID="8bce62ef7063b7aeb5f260c70ccf751511f855e17a05739162958f01667d0392" exitCode=0 Feb 17 15:49:56.290346 master-0 kubenswrapper[26425]: I0217 15:49:56.290362 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ad573e8-9c29-4564-b2ea-2c75467ff750","Type":"ContainerDied","Data":"8bce62ef7063b7aeb5f260c70ccf751511f855e17a05739162958f01667d0392"} Feb 17 15:49:56.290580 master-0 kubenswrapper[26425]: I0217 15:49:56.290384 26425 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-api-0" event={"ID":"8ad573e8-9c29-4564-b2ea-2c75467ff750","Type":"ContainerDied","Data":"47b4994a346f74950a88465427ca52b22a09ceac6910c45ddbdc885939e9627e"} Feb 17 15:49:56.290580 master-0 kubenswrapper[26425]: I0217 15:49:56.290403 26425 scope.go:117] "RemoveContainer" containerID="8bce62ef7063b7aeb5f260c70ccf751511f855e17a05739162958f01667d0392" Feb 17 15:49:56.290580 master-0 kubenswrapper[26425]: I0217 15:49:56.290543 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 15:49:56.313008 master-0 kubenswrapper[26425]: I0217 15:49:56.312942 26425 scope.go:117] "RemoveContainer" containerID="d3c3325609f39969e133f7b53bf2ab23521e96c739f1f3a578ef14799fe77229" Feb 17 15:49:56.330824 master-0 kubenswrapper[26425]: I0217 15:49:56.330756 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hss2\" (UniqueName: \"kubernetes.io/projected/8ad573e8-9c29-4564-b2ea-2c75467ff750-kube-api-access-6hss2\") pod \"8ad573e8-9c29-4564-b2ea-2c75467ff750\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " Feb 17 15:49:56.331021 master-0 kubenswrapper[26425]: I0217 15:49:56.331001 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad573e8-9c29-4564-b2ea-2c75467ff750-combined-ca-bundle\") pod \"8ad573e8-9c29-4564-b2ea-2c75467ff750\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " Feb 17 15:49:56.331116 master-0 kubenswrapper[26425]: I0217 15:49:56.331078 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ad573e8-9c29-4564-b2ea-2c75467ff750-logs\") pod \"8ad573e8-9c29-4564-b2ea-2c75467ff750\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " Feb 17 15:49:56.331177 master-0 kubenswrapper[26425]: I0217 15:49:56.331120 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad573e8-9c29-4564-b2ea-2c75467ff750-config-data\") pod \"8ad573e8-9c29-4564-b2ea-2c75467ff750\" (UID: \"8ad573e8-9c29-4564-b2ea-2c75467ff750\") " Feb 17 15:49:56.332236 master-0 kubenswrapper[26425]: I0217 15:49:56.332170 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ad573e8-9c29-4564-b2ea-2c75467ff750-logs" (OuterVolumeSpecName: "logs") pod "8ad573e8-9c29-4564-b2ea-2c75467ff750" (UID: "8ad573e8-9c29-4564-b2ea-2c75467ff750"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:49:56.335479 master-0 kubenswrapper[26425]: I0217 15:49:56.335414 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ad573e8-9c29-4564-b2ea-2c75467ff750-kube-api-access-6hss2" (OuterVolumeSpecName: "kube-api-access-6hss2") pod "8ad573e8-9c29-4564-b2ea-2c75467ff750" (UID: "8ad573e8-9c29-4564-b2ea-2c75467ff750"). InnerVolumeSpecName "kube-api-access-6hss2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:49:56.348752 master-0 kubenswrapper[26425]: I0217 15:49:56.348709 26425 scope.go:117] "RemoveContainer" containerID="8bce62ef7063b7aeb5f260c70ccf751511f855e17a05739162958f01667d0392" Feb 17 15:49:56.351196 master-0 kubenswrapper[26425]: E0217 15:49:56.350869 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bce62ef7063b7aeb5f260c70ccf751511f855e17a05739162958f01667d0392\": container with ID starting with 8bce62ef7063b7aeb5f260c70ccf751511f855e17a05739162958f01667d0392 not found: ID does not exist" containerID="8bce62ef7063b7aeb5f260c70ccf751511f855e17a05739162958f01667d0392" Feb 17 15:49:56.351196 master-0 kubenswrapper[26425]: I0217 15:49:56.350908 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bce62ef7063b7aeb5f260c70ccf751511f855e17a05739162958f01667d0392"} err="failed to get container status \"8bce62ef7063b7aeb5f260c70ccf751511f855e17a05739162958f01667d0392\": rpc error: code = NotFound desc = could not find container \"8bce62ef7063b7aeb5f260c70ccf751511f855e17a05739162958f01667d0392\": container with ID starting with 8bce62ef7063b7aeb5f260c70ccf751511f855e17a05739162958f01667d0392 not found: ID does not exist" Feb 17 15:49:56.351196 master-0 kubenswrapper[26425]: I0217 15:49:56.350930 26425 scope.go:117] "RemoveContainer" containerID="d3c3325609f39969e133f7b53bf2ab23521e96c739f1f3a578ef14799fe77229" Feb 17 15:49:56.352047 master-0 kubenswrapper[26425]: E0217 15:49:56.352003 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c3325609f39969e133f7b53bf2ab23521e96c739f1f3a578ef14799fe77229\": container with ID starting with d3c3325609f39969e133f7b53bf2ab23521e96c739f1f3a578ef14799fe77229 not found: ID does not exist" 
containerID="d3c3325609f39969e133f7b53bf2ab23521e96c739f1f3a578ef14799fe77229" Feb 17 15:49:56.352047 master-0 kubenswrapper[26425]: I0217 15:49:56.352036 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c3325609f39969e133f7b53bf2ab23521e96c739f1f3a578ef14799fe77229"} err="failed to get container status \"d3c3325609f39969e133f7b53bf2ab23521e96c739f1f3a578ef14799fe77229\": rpc error: code = NotFound desc = could not find container \"d3c3325609f39969e133f7b53bf2ab23521e96c739f1f3a578ef14799fe77229\": container with ID starting with d3c3325609f39969e133f7b53bf2ab23521e96c739f1f3a578ef14799fe77229 not found: ID does not exist" Feb 17 15:49:56.385496 master-0 kubenswrapper[26425]: I0217 15:49:56.385210 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ad573e8-9c29-4564-b2ea-2c75467ff750-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ad573e8-9c29-4564-b2ea-2c75467ff750" (UID: "8ad573e8-9c29-4564-b2ea-2c75467ff750"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:49:56.405486 master-0 kubenswrapper[26425]: I0217 15:49:56.393955 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ad573e8-9c29-4564-b2ea-2c75467ff750-config-data" (OuterVolumeSpecName: "config-data") pod "8ad573e8-9c29-4564-b2ea-2c75467ff750" (UID: "8ad573e8-9c29-4564-b2ea-2c75467ff750"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:49:56.413526 master-0 kubenswrapper[26425]: I0217 15:49:56.412987 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1192e109-48bb-4d67-a347-33ca457d8368" path="/var/lib/kubelet/pods/1192e109-48bb-4d67-a347-33ca457d8368/volumes" Feb 17 15:49:56.413844 master-0 kubenswrapper[26425]: I0217 15:49:56.413810 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc05d31-d186-4020-bb1c-612ec6e9266d" path="/var/lib/kubelet/pods/edc05d31-d186-4020-bb1c-612ec6e9266d/volumes" Feb 17 15:49:56.434802 master-0 kubenswrapper[26425]: I0217 15:49:56.434525 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad573e8-9c29-4564-b2ea-2c75467ff750-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:49:56.434802 master-0 kubenswrapper[26425]: I0217 15:49:56.434787 26425 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ad573e8-9c29-4564-b2ea-2c75467ff750-logs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:49:56.434802 master-0 kubenswrapper[26425]: I0217 15:49:56.434802 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad573e8-9c29-4564-b2ea-2c75467ff750-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:49:56.436527 master-0 kubenswrapper[26425]: I0217 15:49:56.434811 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hss2\" (UniqueName: \"kubernetes.io/projected/8ad573e8-9c29-4564-b2ea-2c75467ff750-kube-api-access-6hss2\") on node \"master-0\" DevicePath \"\"" Feb 17 15:49:56.809925 master-0 kubenswrapper[26425]: I0217 15:49:56.809843 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:49:57.031051 master-0 kubenswrapper[26425]: I0217 15:49:57.016929 26425 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 15:49:57.033622 master-0 kubenswrapper[26425]: W0217 15:49:57.031496 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b3b7ad4_549f_4608_8119_6be98f4eace1.slice/crio-c0d73a9e5459b2e185df40bea44b6997ffbb0020458cf2fb4f51c91f7433d00d WatchSource:0}: Error finding container c0d73a9e5459b2e185df40bea44b6997ffbb0020458cf2fb4f51c91f7433d00d: Status 404 returned error can't find the container with id c0d73a9e5459b2e185df40bea44b6997ffbb0020458cf2fb4f51c91f7433d00d Feb 17 15:49:57.035232 master-0 kubenswrapper[26425]: I0217 15:49:57.035190 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 15:49:57.053906 master-0 kubenswrapper[26425]: I0217 15:49:57.053829 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 15:49:57.296249 master-0 kubenswrapper[26425]: I0217 15:49:57.296160 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 15:49:57.297276 master-0 kubenswrapper[26425]: E0217 15:49:57.297077 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad573e8-9c29-4564-b2ea-2c75467ff750" containerName="nova-api-api" Feb 17 15:49:57.297276 master-0 kubenswrapper[26425]: I0217 15:49:57.297107 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad573e8-9c29-4564-b2ea-2c75467ff750" containerName="nova-api-api" Feb 17 15:49:57.297276 master-0 kubenswrapper[26425]: E0217 15:49:57.297126 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad573e8-9c29-4564-b2ea-2c75467ff750" containerName="nova-api-log" Feb 17 15:49:57.297276 master-0 kubenswrapper[26425]: I0217 15:49:57.297135 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad573e8-9c29-4564-b2ea-2c75467ff750" containerName="nova-api-log" Feb 17 15:49:57.297714 master-0 kubenswrapper[26425]: I0217 
15:49:57.297553 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ad573e8-9c29-4564-b2ea-2c75467ff750" containerName="nova-api-api" Feb 17 15:49:57.297714 master-0 kubenswrapper[26425]: I0217 15:49:57.297619 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ad573e8-9c29-4564-b2ea-2c75467ff750" containerName="nova-api-log" Feb 17 15:49:57.300216 master-0 kubenswrapper[26425]: I0217 15:49:57.300163 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 15:49:57.303201 master-0 kubenswrapper[26425]: I0217 15:49:57.303127 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 15:49:57.310876 master-0 kubenswrapper[26425]: I0217 15:49:57.310745 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b3b7ad4-549f-4608-8119-6be98f4eace1","Type":"ContainerStarted","Data":"c0d73a9e5459b2e185df40bea44b6997ffbb0020458cf2fb4f51c91f7433d00d"} Feb 17 15:49:57.312739 master-0 kubenswrapper[26425]: I0217 15:49:57.312693 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8a2748df-f1f1-44e8-a85d-856492a2af41","Type":"ContainerStarted","Data":"6708a81fd8f53db495c63956913c9beceb67e5efba92828c297212ebacaff11b"} Feb 17 15:49:57.312739 master-0 kubenswrapper[26425]: I0217 15:49:57.312721 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8a2748df-f1f1-44e8-a85d-856492a2af41","Type":"ContainerStarted","Data":"06e156219343fc77ff0b98b540fe89c039ad7983c0105e60205913bcfddf6c23"} Feb 17 15:49:57.314732 master-0 kubenswrapper[26425]: I0217 15:49:57.314685 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7a865af-9e86-49f7-86d0-a96be69dec85","Type":"ContainerStarted","Data":"539f8e2c4402fad126fe5c8258b514992293e7e12884913cc4d9d2d7ab103667"} 
Feb 17 15:49:57.314884 master-0 kubenswrapper[26425]: I0217 15:49:57.314851 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 17 15:49:57.350802 master-0 kubenswrapper[26425]: I0217 15:49:57.349609 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 15:49:57.477271 master-0 kubenswrapper[26425]: I0217 15:49:57.477172 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5f535cd-248a-48c9-a388-5b574dd3db17-config-data\") pod \"nova-api-0\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " pod="openstack/nova-api-0" Feb 17 15:49:57.477673 master-0 kubenswrapper[26425]: I0217 15:49:57.477614 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5f535cd-248a-48c9-a388-5b574dd3db17-logs\") pod \"nova-api-0\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " pod="openstack/nova-api-0" Feb 17 15:49:57.478034 master-0 kubenswrapper[26425]: I0217 15:49:57.477981 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4jz4\" (UniqueName: \"kubernetes.io/projected/a5f535cd-248a-48c9-a388-5b574dd3db17-kube-api-access-b4jz4\") pod \"nova-api-0\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " pod="openstack/nova-api-0" Feb 17 15:49:57.478540 master-0 kubenswrapper[26425]: I0217 15:49:57.478145 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5f535cd-248a-48c9-a388-5b574dd3db17-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " pod="openstack/nova-api-0" Feb 17 15:49:57.481223 master-0 kubenswrapper[26425]: I0217 15:49:57.481172 26425 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.481159678 podStartE2EDuration="3.481159678s" podCreationTimestamp="2026-02-17 15:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:49:57.470621933 +0000 UTC m=+2059.362345771" watchObservedRunningTime="2026-02-17 15:49:57.481159678 +0000 UTC m=+2059.372883496" Feb 17 15:49:57.580561 master-0 kubenswrapper[26425]: I0217 15:49:57.580355 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5f535cd-248a-48c9-a388-5b574dd3db17-config-data\") pod \"nova-api-0\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " pod="openstack/nova-api-0" Feb 17 15:49:57.580787 master-0 kubenswrapper[26425]: I0217 15:49:57.580718 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5f535cd-248a-48c9-a388-5b574dd3db17-logs\") pod \"nova-api-0\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " pod="openstack/nova-api-0" Feb 17 15:49:57.581500 master-0 kubenswrapper[26425]: I0217 15:49:57.581166 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4jz4\" (UniqueName: \"kubernetes.io/projected/a5f535cd-248a-48c9-a388-5b574dd3db17-kube-api-access-b4jz4\") pod \"nova-api-0\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " pod="openstack/nova-api-0" Feb 17 15:49:57.581500 master-0 kubenswrapper[26425]: I0217 15:49:57.581338 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5f535cd-248a-48c9-a388-5b574dd3db17-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " pod="openstack/nova-api-0" Feb 17 15:49:57.581500 master-0 kubenswrapper[26425]: I0217 15:49:57.581339 26425 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5f535cd-248a-48c9-a388-5b574dd3db17-logs\") pod \"nova-api-0\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " pod="openstack/nova-api-0" Feb 17 15:49:57.586152 master-0 kubenswrapper[26425]: I0217 15:49:57.586101 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5f535cd-248a-48c9-a388-5b574dd3db17-config-data\") pod \"nova-api-0\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " pod="openstack/nova-api-0" Feb 17 15:49:57.586262 master-0 kubenswrapper[26425]: I0217 15:49:57.586166 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5f535cd-248a-48c9-a388-5b574dd3db17-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " pod="openstack/nova-api-0" Feb 17 15:49:57.604893 master-0 kubenswrapper[26425]: I0217 15:49:57.604831 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4jz4\" (UniqueName: \"kubernetes.io/projected/a5f535cd-248a-48c9-a388-5b574dd3db17-kube-api-access-b4jz4\") pod \"nova-api-0\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " pod="openstack/nova-api-0" Feb 17 15:49:57.621116 master-0 kubenswrapper[26425]: I0217 15:49:57.621059 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 15:49:58.102439 master-0 kubenswrapper[26425]: I0217 15:49:58.102357 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 15:49:58.115239 master-0 kubenswrapper[26425]: W0217 15:49:58.115164 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5f535cd_248a_48c9_a388_5b574dd3db17.slice/crio-5d492f4cc3efab59b39fcfb8a5ed6879eabcfec019f4e6e10315cb8fac3299fb WatchSource:0}: Error finding container 5d492f4cc3efab59b39fcfb8a5ed6879eabcfec019f4e6e10315cb8fac3299fb: Status 404 returned error can't find the container with id 5d492f4cc3efab59b39fcfb8a5ed6879eabcfec019f4e6e10315cb8fac3299fb Feb 17 15:49:58.354071 master-0 kubenswrapper[26425]: I0217 15:49:58.353928 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b3b7ad4-549f-4608-8119-6be98f4eace1","Type":"ContainerStarted","Data":"74d687299704df962ad83ce783f1d173253e649bcaa15dfde90606ca8862c69b"} Feb 17 15:49:58.358573 master-0 kubenswrapper[26425]: I0217 15:49:58.357044 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a5f535cd-248a-48c9-a388-5b574dd3db17","Type":"ContainerStarted","Data":"5d492f4cc3efab59b39fcfb8a5ed6879eabcfec019f4e6e10315cb8fac3299fb"} Feb 17 15:49:58.361253 master-0 kubenswrapper[26425]: I0217 15:49:58.361212 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8a2748df-f1f1-44e8-a85d-856492a2af41","Type":"ContainerStarted","Data":"983cec4d428b3e7d082e5f9e2f5932b93ef0a15cc3fab7755627400edee06385"} Feb 17 15:49:58.415549 master-0 kubenswrapper[26425]: I0217 15:49:58.415437 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ad573e8-9c29-4564-b2ea-2c75467ff750" path="/var/lib/kubelet/pods/8ad573e8-9c29-4564-b2ea-2c75467ff750/volumes" Feb 17 15:49:58.445606 
master-0 kubenswrapper[26425]: I0217 15:49:58.445523 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.44550109 podStartE2EDuration="3.44550109s" podCreationTimestamp="2026-02-17 15:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:49:58.430844727 +0000 UTC m=+2060.322568555" watchObservedRunningTime="2026-02-17 15:49:58.44550109 +0000 UTC m=+2060.337224948" Feb 17 15:49:58.478304 master-0 kubenswrapper[26425]: I0217 15:49:58.478212 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.47818769 podStartE2EDuration="3.47818769s" podCreationTimestamp="2026-02-17 15:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:49:58.471783925 +0000 UTC m=+2060.363507773" watchObservedRunningTime="2026-02-17 15:49:58.47818769 +0000 UTC m=+2060.369911518" Feb 17 15:49:59.384308 master-0 kubenswrapper[26425]: I0217 15:49:59.384230 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a5f535cd-248a-48c9-a388-5b574dd3db17","Type":"ContainerStarted","Data":"71f5cba64b5dc5b8eb01bd0ff99623cd15aff105b47536eec803761ac70eec6a"} Feb 17 15:49:59.384308 master-0 kubenswrapper[26425]: I0217 15:49:59.384299 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a5f535cd-248a-48c9-a388-5b574dd3db17","Type":"ContainerStarted","Data":"243d67ced2cc7760a59cde12ce0ae4011e9c50070d9396a2e6ddabdbea483420"} Feb 17 15:49:59.432183 master-0 kubenswrapper[26425]: I0217 15:49:59.424449 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.424418536 podStartE2EDuration="2.424418536s" 
podCreationTimestamp="2026-02-17 15:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:49:59.405420287 +0000 UTC m=+2061.297144115" watchObservedRunningTime="2026-02-17 15:49:59.424418536 +0000 UTC m=+2061.316142394" Feb 17 15:50:00.750202 master-0 kubenswrapper[26425]: I0217 15:50:00.750143 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 15:50:00.750202 master-0 kubenswrapper[26425]: I0217 15:50:00.750213 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 15:50:00.770629 master-0 kubenswrapper[26425]: I0217 15:50:00.770570 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 15:50:05.300805 master-0 kubenswrapper[26425]: I0217 15:50:05.300740 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 17 15:50:05.750824 master-0 kubenswrapper[26425]: I0217 15:50:05.750756 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 15:50:05.750824 master-0 kubenswrapper[26425]: I0217 15:50:05.750832 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 15:50:05.771086 master-0 kubenswrapper[26425]: I0217 15:50:05.770851 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 15:50:05.813047 master-0 kubenswrapper[26425]: I0217 15:50:05.812993 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 15:50:06.544684 master-0 kubenswrapper[26425]: I0217 15:50:06.544640 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 15:50:06.765756 
master-0 kubenswrapper[26425]: I0217 15:50:06.765676 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8a2748df-f1f1-44e8-a85d-856492a2af41" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.14:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:50:06.765962 master-0 kubenswrapper[26425]: I0217 15:50:06.765678 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8a2748df-f1f1-44e8-a85d-856492a2af41" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.14:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 15:50:07.622239 master-0 kubenswrapper[26425]: I0217 15:50:07.622144 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 15:50:07.622239 master-0 kubenswrapper[26425]: I0217 15:50:07.622245 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 15:50:08.704851 master-0 kubenswrapper[26425]: I0217 15:50:08.704731 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a5f535cd-248a-48c9-a388-5b574dd3db17" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.16:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 15:50:08.708446 master-0 kubenswrapper[26425]: I0217 15:50:08.705104 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a5f535cd-248a-48c9-a388-5b574dd3db17" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.16:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 15:50:13.454189 master-0 kubenswrapper[26425]: I0217 15:50:13.454093 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.536599 master-0 kubenswrapper[26425]: I0217 15:50:13.536513 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f1003dc-30ca-4cd8-9489-c37262a5f45e-config-data\") pod \"9f1003dc-30ca-4cd8-9489-c37262a5f45e\" (UID: \"9f1003dc-30ca-4cd8-9489-c37262a5f45e\") " Feb 17 15:50:13.536821 master-0 kubenswrapper[26425]: I0217 15:50:13.536651 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwpvf\" (UniqueName: \"kubernetes.io/projected/9f1003dc-30ca-4cd8-9489-c37262a5f45e-kube-api-access-vwpvf\") pod \"9f1003dc-30ca-4cd8-9489-c37262a5f45e\" (UID: \"9f1003dc-30ca-4cd8-9489-c37262a5f45e\") " Feb 17 15:50:13.536821 master-0 kubenswrapper[26425]: I0217 15:50:13.536771 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1003dc-30ca-4cd8-9489-c37262a5f45e-combined-ca-bundle\") pod \"9f1003dc-30ca-4cd8-9489-c37262a5f45e\" (UID: \"9f1003dc-30ca-4cd8-9489-c37262a5f45e\") " Feb 17 15:50:13.540240 master-0 kubenswrapper[26425]: I0217 15:50:13.540182 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f1003dc-30ca-4cd8-9489-c37262a5f45e-kube-api-access-vwpvf" (OuterVolumeSpecName: "kube-api-access-vwpvf") pod "9f1003dc-30ca-4cd8-9489-c37262a5f45e" (UID: "9f1003dc-30ca-4cd8-9489-c37262a5f45e"). InnerVolumeSpecName "kube-api-access-vwpvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:50:13.567033 master-0 kubenswrapper[26425]: I0217 15:50:13.566936 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1003dc-30ca-4cd8-9489-c37262a5f45e-config-data" (OuterVolumeSpecName: "config-data") pod "9f1003dc-30ca-4cd8-9489-c37262a5f45e" (UID: "9f1003dc-30ca-4cd8-9489-c37262a5f45e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:50:13.569061 master-0 kubenswrapper[26425]: I0217 15:50:13.569029 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1003dc-30ca-4cd8-9489-c37262a5f45e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f1003dc-30ca-4cd8-9489-c37262a5f45e" (UID: "9f1003dc-30ca-4cd8-9489-c37262a5f45e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:50:13.596958 master-0 kubenswrapper[26425]: I0217 15:50:13.596888 26425 generic.go:334] "Generic (PLEG): container finished" podID="9f1003dc-30ca-4cd8-9489-c37262a5f45e" containerID="28f4ef24e165b5b4e6c5a50daf6d19006c0c4e7e4e77151382f16e6ebff52e8c" exitCode=137 Feb 17 15:50:13.596958 master-0 kubenswrapper[26425]: I0217 15:50:13.596955 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9f1003dc-30ca-4cd8-9489-c37262a5f45e","Type":"ContainerDied","Data":"28f4ef24e165b5b4e6c5a50daf6d19006c0c4e7e4e77151382f16e6ebff52e8c"} Feb 17 15:50:13.597269 master-0 kubenswrapper[26425]: I0217 15:50:13.596984 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.597269 master-0 kubenswrapper[26425]: I0217 15:50:13.597003 26425 scope.go:117] "RemoveContainer" containerID="28f4ef24e165b5b4e6c5a50daf6d19006c0c4e7e4e77151382f16e6ebff52e8c" Feb 17 15:50:13.597269 master-0 kubenswrapper[26425]: I0217 15:50:13.596990 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9f1003dc-30ca-4cd8-9489-c37262a5f45e","Type":"ContainerDied","Data":"95929ff144f08112eef61dcf4eb00cf2b61ba4630ea152364bb75c434594b156"} Feb 17 15:50:13.640502 master-0 kubenswrapper[26425]: I0217 15:50:13.640423 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f1003dc-30ca-4cd8-9489-c37262a5f45e-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:13.640502 master-0 kubenswrapper[26425]: I0217 15:50:13.640494 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwpvf\" (UniqueName: \"kubernetes.io/projected/9f1003dc-30ca-4cd8-9489-c37262a5f45e-kube-api-access-vwpvf\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:13.640789 master-0 kubenswrapper[26425]: I0217 15:50:13.640513 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1003dc-30ca-4cd8-9489-c37262a5f45e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:13.664104 master-0 kubenswrapper[26425]: I0217 15:50:13.663948 26425 scope.go:117] "RemoveContainer" containerID="28f4ef24e165b5b4e6c5a50daf6d19006c0c4e7e4e77151382f16e6ebff52e8c" Feb 17 15:50:13.664502 master-0 kubenswrapper[26425]: E0217 15:50:13.664466 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28f4ef24e165b5b4e6c5a50daf6d19006c0c4e7e4e77151382f16e6ebff52e8c\": container with ID starting with 
28f4ef24e165b5b4e6c5a50daf6d19006c0c4e7e4e77151382f16e6ebff52e8c not found: ID does not exist" containerID="28f4ef24e165b5b4e6c5a50daf6d19006c0c4e7e4e77151382f16e6ebff52e8c" Feb 17 15:50:13.664575 master-0 kubenswrapper[26425]: I0217 15:50:13.664514 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28f4ef24e165b5b4e6c5a50daf6d19006c0c4e7e4e77151382f16e6ebff52e8c"} err="failed to get container status \"28f4ef24e165b5b4e6c5a50daf6d19006c0c4e7e4e77151382f16e6ebff52e8c\": rpc error: code = NotFound desc = could not find container \"28f4ef24e165b5b4e6c5a50daf6d19006c0c4e7e4e77151382f16e6ebff52e8c\": container with ID starting with 28f4ef24e165b5b4e6c5a50daf6d19006c0c4e7e4e77151382f16e6ebff52e8c not found: ID does not exist" Feb 17 15:50:13.684211 master-0 kubenswrapper[26425]: I0217 15:50:13.684132 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 15:50:13.697167 master-0 kubenswrapper[26425]: I0217 15:50:13.697107 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 15:50:13.723797 master-0 kubenswrapper[26425]: I0217 15:50:13.723744 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 15:50:13.724431 master-0 kubenswrapper[26425]: E0217 15:50:13.724396 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f1003dc-30ca-4cd8-9489-c37262a5f45e" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 15:50:13.724431 master-0 kubenswrapper[26425]: I0217 15:50:13.724425 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f1003dc-30ca-4cd8-9489-c37262a5f45e" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 15:50:13.724806 master-0 kubenswrapper[26425]: I0217 15:50:13.724771 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f1003dc-30ca-4cd8-9489-c37262a5f45e" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 
15:50:13.725617 master-0 kubenswrapper[26425]: I0217 15:50:13.725584 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.730241 master-0 kubenswrapper[26425]: I0217 15:50:13.730201 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 17 15:50:13.730424 master-0 kubenswrapper[26425]: I0217 15:50:13.730379 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 17 15:50:13.730591 master-0 kubenswrapper[26425]: I0217 15:50:13.730382 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 15:50:13.759555 master-0 kubenswrapper[26425]: I0217 15:50:13.749868 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 15:50:13.853949 master-0 kubenswrapper[26425]: I0217 15:50:13.853808 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/97883db2-7f1f-42b9-8e1c-61ea39a40173-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.853949 master-0 kubenswrapper[26425]: I0217 15:50:13.853916 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dclv4\" (UniqueName: \"kubernetes.io/projected/97883db2-7f1f-42b9-8e1c-61ea39a40173-kube-api-access-dclv4\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.854634 master-0 kubenswrapper[26425]: I0217 15:50:13.853980 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/97883db2-7f1f-42b9-8e1c-61ea39a40173-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.854634 master-0 kubenswrapper[26425]: I0217 15:50:13.854281 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97883db2-7f1f-42b9-8e1c-61ea39a40173-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.854634 master-0 kubenswrapper[26425]: I0217 15:50:13.854415 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97883db2-7f1f-42b9-8e1c-61ea39a40173-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.956960 master-0 kubenswrapper[26425]: I0217 15:50:13.956867 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97883db2-7f1f-42b9-8e1c-61ea39a40173-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.957233 master-0 kubenswrapper[26425]: I0217 15:50:13.956996 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97883db2-7f1f-42b9-8e1c-61ea39a40173-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.957289 master-0 kubenswrapper[26425]: I0217 15:50:13.957242 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/97883db2-7f1f-42b9-8e1c-61ea39a40173-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.958080 master-0 kubenswrapper[26425]: I0217 15:50:13.957968 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dclv4\" (UniqueName: \"kubernetes.io/projected/97883db2-7f1f-42b9-8e1c-61ea39a40173-kube-api-access-dclv4\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.958291 master-0 kubenswrapper[26425]: I0217 15:50:13.958244 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/97883db2-7f1f-42b9-8e1c-61ea39a40173-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.961319 master-0 kubenswrapper[26425]: I0217 15:50:13.961276 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97883db2-7f1f-42b9-8e1c-61ea39a40173-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.961868 master-0 kubenswrapper[26425]: I0217 15:50:13.961826 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/97883db2-7f1f-42b9-8e1c-61ea39a40173-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.962348 master-0 kubenswrapper[26425]: I0217 15:50:13.962299 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/97883db2-7f1f-42b9-8e1c-61ea39a40173-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.962436 master-0 kubenswrapper[26425]: I0217 15:50:13.962305 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97883db2-7f1f-42b9-8e1c-61ea39a40173-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:13.974080 master-0 kubenswrapper[26425]: I0217 15:50:13.974037 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dclv4\" (UniqueName: \"kubernetes.io/projected/97883db2-7f1f-42b9-8e1c-61ea39a40173-kube-api-access-dclv4\") pod \"nova-cell1-novncproxy-0\" (UID: \"97883db2-7f1f-42b9-8e1c-61ea39a40173\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:14.073888 master-0 kubenswrapper[26425]: I0217 15:50:14.073630 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:14.410504 master-0 kubenswrapper[26425]: I0217 15:50:14.410361 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f1003dc-30ca-4cd8-9489-c37262a5f45e" path="/var/lib/kubelet/pods/9f1003dc-30ca-4cd8-9489-c37262a5f45e/volumes" Feb 17 15:50:14.587084 master-0 kubenswrapper[26425]: I0217 15:50:14.587026 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 15:50:14.589693 master-0 kubenswrapper[26425]: W0217 15:50:14.589599 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97883db2_7f1f_42b9_8e1c_61ea39a40173.slice/crio-3d8c5e5adaf2fec554f3da95fa2b39ce4092f6f0cce365a56f2de666e7aa7681 WatchSource:0}: Error finding container 3d8c5e5adaf2fec554f3da95fa2b39ce4092f6f0cce365a56f2de666e7aa7681: Status 404 returned error can't find the container with id 3d8c5e5adaf2fec554f3da95fa2b39ce4092f6f0cce365a56f2de666e7aa7681 Feb 17 15:50:14.614221 master-0 kubenswrapper[26425]: I0217 15:50:14.614146 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"97883db2-7f1f-42b9-8e1c-61ea39a40173","Type":"ContainerStarted","Data":"3d8c5e5adaf2fec554f3da95fa2b39ce4092f6f0cce365a56f2de666e7aa7681"} Feb 17 15:50:15.630260 master-0 kubenswrapper[26425]: I0217 15:50:15.630183 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"97883db2-7f1f-42b9-8e1c-61ea39a40173","Type":"ContainerStarted","Data":"2f912bfbbfd77388cf3fb618954e09125fd21533bb0420310cbdd3835058a1b9"} Feb 17 15:50:15.664105 master-0 kubenswrapper[26425]: I0217 15:50:15.663205 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.663180081 podStartE2EDuration="2.663180081s" podCreationTimestamp="2026-02-17 
15:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:50:15.657145375 +0000 UTC m=+2077.548869223" watchObservedRunningTime="2026-02-17 15:50:15.663180081 +0000 UTC m=+2077.554903909" Feb 17 15:50:15.755899 master-0 kubenswrapper[26425]: I0217 15:50:15.755819 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 15:50:15.760390 master-0 kubenswrapper[26425]: I0217 15:50:15.760343 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 15:50:15.773273 master-0 kubenswrapper[26425]: I0217 15:50:15.773207 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 15:50:16.654006 master-0 kubenswrapper[26425]: I0217 15:50:16.653930 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 15:50:17.626708 master-0 kubenswrapper[26425]: I0217 15:50:17.626645 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 15:50:17.629774 master-0 kubenswrapper[26425]: I0217 15:50:17.627115 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 15:50:17.629774 master-0 kubenswrapper[26425]: I0217 15:50:17.627318 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 15:50:17.633945 master-0 kubenswrapper[26425]: I0217 15:50:17.633892 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 15:50:17.665507 master-0 kubenswrapper[26425]: I0217 15:50:17.665262 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 15:50:17.670078 master-0 kubenswrapper[26425]: I0217 15:50:17.670024 26425 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 15:50:17.946514 master-0 kubenswrapper[26425]: I0217 15:50:17.946343 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8f95c8447-f78pp"] Feb 17 15:50:17.955365 master-0 kubenswrapper[26425]: I0217 15:50:17.955275 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:17.959018 master-0 kubenswrapper[26425]: I0217 15:50:17.958527 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8f95c8447-f78pp"] Feb 17 15:50:18.014347 master-0 kubenswrapper[26425]: I0217 15:50:18.014286 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd2j5\" (UniqueName: \"kubernetes.io/projected/451cd971-4656-459a-be75-185c7ecb97e1-kube-api-access-wd2j5\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.014579 master-0 kubenswrapper[26425]: I0217 15:50:18.014357 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-ovsdbserver-nb\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.014579 master-0 kubenswrapper[26425]: I0217 15:50:18.014396 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-dns-svc\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.014579 master-0 kubenswrapper[26425]: I0217 15:50:18.014417 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-ovsdbserver-sb\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.015388 master-0 kubenswrapper[26425]: I0217 15:50:18.014612 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-dns-swift-storage-0\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.015388 master-0 kubenswrapper[26425]: I0217 15:50:18.014744 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-config\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.116115 master-0 kubenswrapper[26425]: I0217 15:50:18.116047 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-dns-svc\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.116115 master-0 kubenswrapper[26425]: I0217 15:50:18.116119 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-ovsdbserver-sb\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.116411 master-0 kubenswrapper[26425]: I0217 
15:50:18.116187 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-dns-swift-storage-0\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.116411 master-0 kubenswrapper[26425]: I0217 15:50:18.116245 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-config\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.116538 master-0 kubenswrapper[26425]: I0217 15:50:18.116421 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd2j5\" (UniqueName: \"kubernetes.io/projected/451cd971-4656-459a-be75-185c7ecb97e1-kube-api-access-wd2j5\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.116538 master-0 kubenswrapper[26425]: I0217 15:50:18.116490 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-ovsdbserver-nb\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.117148 master-0 kubenswrapper[26425]: I0217 15:50:18.117099 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-ovsdbserver-sb\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.117816 master-0 kubenswrapper[26425]: 
I0217 15:50:18.117739 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-dns-svc\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.118294 master-0 kubenswrapper[26425]: I0217 15:50:18.118152 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-ovsdbserver-nb\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.119170 master-0 kubenswrapper[26425]: I0217 15:50:18.119129 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-dns-swift-storage-0\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.119574 master-0 kubenswrapper[26425]: I0217 15:50:18.119540 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/451cd971-4656-459a-be75-185c7ecb97e1-config\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.133386 master-0 kubenswrapper[26425]: I0217 15:50:18.133329 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd2j5\" (UniqueName: \"kubernetes.io/projected/451cd971-4656-459a-be75-185c7ecb97e1-kube-api-access-wd2j5\") pod \"dnsmasq-dns-8f95c8447-f78pp\" (UID: \"451cd971-4656-459a-be75-185c7ecb97e1\") " pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.288260 master-0 kubenswrapper[26425]: I0217 15:50:18.288118 26425 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:18.846476 master-0 kubenswrapper[26425]: W0217 15:50:18.842783 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod451cd971_4656_459a_be75_185c7ecb97e1.slice/crio-54a65999cfe7f2088607a8b4ceccb8862f0cc9673452a4135a93834598089766 WatchSource:0}: Error finding container 54a65999cfe7f2088607a8b4ceccb8862f0cc9673452a4135a93834598089766: Status 404 returned error can't find the container with id 54a65999cfe7f2088607a8b4ceccb8862f0cc9673452a4135a93834598089766 Feb 17 15:50:18.855476 master-0 kubenswrapper[26425]: I0217 15:50:18.852092 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8f95c8447-f78pp"] Feb 17 15:50:19.074884 master-0 kubenswrapper[26425]: I0217 15:50:19.074819 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:19.703267 master-0 kubenswrapper[26425]: I0217 15:50:19.703172 26425 generic.go:334] "Generic (PLEG): container finished" podID="451cd971-4656-459a-be75-185c7ecb97e1" containerID="12bfc5f530f4469d6e3b20e2535abea9285e0325ef204d5c2d6e576c03c58044" exitCode=0 Feb 17 15:50:19.703267 master-0 kubenswrapper[26425]: I0217 15:50:19.703234 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f95c8447-f78pp" event={"ID":"451cd971-4656-459a-be75-185c7ecb97e1","Type":"ContainerDied","Data":"12bfc5f530f4469d6e3b20e2535abea9285e0325ef204d5c2d6e576c03c58044"} Feb 17 15:50:19.703267 master-0 kubenswrapper[26425]: I0217 15:50:19.703268 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f95c8447-f78pp" event={"ID":"451cd971-4656-459a-be75-185c7ecb97e1","Type":"ContainerStarted","Data":"54a65999cfe7f2088607a8b4ceccb8862f0cc9673452a4135a93834598089766"} Feb 17 15:50:20.715639 master-0 kubenswrapper[26425]: 
I0217 15:50:20.715562 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f95c8447-f78pp" event={"ID":"451cd971-4656-459a-be75-185c7ecb97e1","Type":"ContainerStarted","Data":"8429dffd0533164e34a22396d946e1a342c71ce20db3cf5a6cc9a233cd6a3dee"} Feb 17 15:50:20.716499 master-0 kubenswrapper[26425]: I0217 15:50:20.715820 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:20.863348 master-0 kubenswrapper[26425]: I0217 15:50:20.863184 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8f95c8447-f78pp" podStartSLOduration=3.863158393 podStartE2EDuration="3.863158393s" podCreationTimestamp="2026-02-17 15:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:50:20.858939221 +0000 UTC m=+2082.750663079" watchObservedRunningTime="2026-02-17 15:50:20.863158393 +0000 UTC m=+2082.754882221" Feb 17 15:50:21.658099 master-0 kubenswrapper[26425]: I0217 15:50:21.655506 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 15:50:21.658099 master-0 kubenswrapper[26425]: I0217 15:50:21.655820 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a5f535cd-248a-48c9-a388-5b574dd3db17" containerName="nova-api-log" containerID="cri-o://243d67ced2cc7760a59cde12ce0ae4011e9c50070d9396a2e6ddabdbea483420" gracePeriod=30 Feb 17 15:50:21.658099 master-0 kubenswrapper[26425]: I0217 15:50:21.655989 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a5f535cd-248a-48c9-a388-5b574dd3db17" containerName="nova-api-api" containerID="cri-o://71f5cba64b5dc5b8eb01bd0ff99623cd15aff105b47536eec803761ac70eec6a" gracePeriod=30 Feb 17 15:50:22.750050 master-0 kubenswrapper[26425]: I0217 15:50:22.749892 
26425 generic.go:334] "Generic (PLEG): container finished" podID="a5f535cd-248a-48c9-a388-5b574dd3db17" containerID="243d67ced2cc7760a59cde12ce0ae4011e9c50070d9396a2e6ddabdbea483420" exitCode=143 Feb 17 15:50:22.750050 master-0 kubenswrapper[26425]: I0217 15:50:22.749959 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a5f535cd-248a-48c9-a388-5b574dd3db17","Type":"ContainerDied","Data":"243d67ced2cc7760a59cde12ce0ae4011e9c50070d9396a2e6ddabdbea483420"} Feb 17 15:50:24.075082 master-0 kubenswrapper[26425]: I0217 15:50:24.075009 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:24.106689 master-0 kubenswrapper[26425]: I0217 15:50:24.106601 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:24.802395 master-0 kubenswrapper[26425]: I0217 15:50:24.802270 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 17 15:50:25.076706 master-0 kubenswrapper[26425]: I0217 15:50:25.076080 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5x59m"] Feb 17 15:50:25.078315 master-0 kubenswrapper[26425]: I0217 15:50:25.078262 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5x59m" Feb 17 15:50:25.080595 master-0 kubenswrapper[26425]: I0217 15:50:25.080398 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 17 15:50:25.080755 master-0 kubenswrapper[26425]: I0217 15:50:25.080691 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 17 15:50:25.086999 master-0 kubenswrapper[26425]: I0217 15:50:25.086942 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-7vrrr"] Feb 17 15:50:25.090047 master-0 kubenswrapper[26425]: I0217 15:50:25.089296 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-7vrrr" Feb 17 15:50:25.100571 master-0 kubenswrapper[26425]: I0217 15:50:25.099934 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5x59m"] Feb 17 15:50:25.116634 master-0 kubenswrapper[26425]: I0217 15:50:25.112807 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-7vrrr"] Feb 17 15:50:25.122768 master-0 kubenswrapper[26425]: I0217 15:50:25.121182 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq74p\" (UniqueName: \"kubernetes.io/projected/c6af31fb-9e97-4939-9320-2ed232a3a039-kube-api-access-nq74p\") pod \"nova-cell1-cell-mapping-5x59m\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") " pod="openstack/nova-cell1-cell-mapping-5x59m" Feb 17 15:50:25.122768 master-0 kubenswrapper[26425]: I0217 15:50:25.121344 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5x59m\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") " 
pod="openstack/nova-cell1-cell-mapping-5x59m" Feb 17 15:50:25.122768 master-0 kubenswrapper[26425]: I0217 15:50:25.121401 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb95k\" (UniqueName: \"kubernetes.io/projected/6afe145e-02b2-47d2-9b6c-b828c271aa68-kube-api-access-hb95k\") pod \"nova-cell1-host-discover-7vrrr\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") " pod="openstack/nova-cell1-host-discover-7vrrr" Feb 17 15:50:25.122768 master-0 kubenswrapper[26425]: I0217 15:50:25.121614 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-config-data\") pod \"nova-cell1-cell-mapping-5x59m\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") " pod="openstack/nova-cell1-cell-mapping-5x59m" Feb 17 15:50:25.122768 master-0 kubenswrapper[26425]: I0217 15:50:25.121809 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-config-data\") pod \"nova-cell1-host-discover-7vrrr\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") " pod="openstack/nova-cell1-host-discover-7vrrr" Feb 17 15:50:25.122768 master-0 kubenswrapper[26425]: I0217 15:50:25.121828 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-combined-ca-bundle\") pod \"nova-cell1-host-discover-7vrrr\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") " pod="openstack/nova-cell1-host-discover-7vrrr" Feb 17 15:50:25.122768 master-0 kubenswrapper[26425]: I0217 15:50:25.121878 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-scripts\") pod \"nova-cell1-host-discover-7vrrr\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") " pod="openstack/nova-cell1-host-discover-7vrrr" Feb 17 15:50:25.122768 master-0 kubenswrapper[26425]: I0217 15:50:25.122072 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-scripts\") pod \"nova-cell1-cell-mapping-5x59m\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") " pod="openstack/nova-cell1-cell-mapping-5x59m" Feb 17 15:50:25.226177 master-0 kubenswrapper[26425]: I0217 15:50:25.223416 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb95k\" (UniqueName: \"kubernetes.io/projected/6afe145e-02b2-47d2-9b6c-b828c271aa68-kube-api-access-hb95k\") pod \"nova-cell1-host-discover-7vrrr\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") " pod="openstack/nova-cell1-host-discover-7vrrr" Feb 17 15:50:25.226177 master-0 kubenswrapper[26425]: I0217 15:50:25.223535 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-config-data\") pod \"nova-cell1-cell-mapping-5x59m\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") " pod="openstack/nova-cell1-cell-mapping-5x59m" Feb 17 15:50:25.226177 master-0 kubenswrapper[26425]: I0217 15:50:25.223563 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:50:25.226177 master-0 kubenswrapper[26425]: I0217 15:50:25.223614 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-config-data\") pod \"nova-cell1-host-discover-7vrrr\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") " pod="openstack/nova-cell1-host-discover-7vrrr" Feb 17 15:50:25.226177 master-0 kubenswrapper[26425]: I0217 15:50:25.223635 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-combined-ca-bundle\") pod \"nova-cell1-host-discover-7vrrr\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") " pod="openstack/nova-cell1-host-discover-7vrrr" Feb 17 15:50:25.226177 master-0 kubenswrapper[26425]: I0217 15:50:25.223663 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-scripts\") pod \"nova-cell1-host-discover-7vrrr\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") " pod="openstack/nova-cell1-host-discover-7vrrr" Feb 17 15:50:25.226177 master-0 kubenswrapper[26425]: E0217 15:50:25.223887 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:50:25.226177 master-0 kubenswrapper[26425]: E0217 15:50:25.223905 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:50:25.226177 master-0 kubenswrapper[26425]: E0217 15:50:25.224068 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:52:27.224051274 +0000 UTC m=+2209.115775092 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:50:25.226177 master-0 kubenswrapper[26425]: I0217 15:50:25.225117 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-scripts\") pod \"nova-cell1-cell-mapping-5x59m\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") " pod="openstack/nova-cell1-cell-mapping-5x59m" Feb 17 15:50:25.226177 master-0 kubenswrapper[26425]: I0217 15:50:25.225174 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq74p\" (UniqueName: \"kubernetes.io/projected/c6af31fb-9e97-4939-9320-2ed232a3a039-kube-api-access-nq74p\") pod \"nova-cell1-cell-mapping-5x59m\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") " pod="openstack/nova-cell1-cell-mapping-5x59m" Feb 17 15:50:25.226177 master-0 kubenswrapper[26425]: I0217 15:50:25.225223 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5x59m\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") " pod="openstack/nova-cell1-cell-mapping-5x59m" Feb 17 15:50:25.228723 master-0 kubenswrapper[26425]: I0217 15:50:25.228678 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-config-data\") pod \"nova-cell1-cell-mapping-5x59m\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") " pod="openstack/nova-cell1-cell-mapping-5x59m" Feb 17 15:50:25.237549 master-0 kubenswrapper[26425]: I0217 15:50:25.237311 26425 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-config-data\") pod \"nova-cell1-host-discover-7vrrr\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") " pod="openstack/nova-cell1-host-discover-7vrrr" Feb 17 15:50:25.237549 master-0 kubenswrapper[26425]: I0217 15:50:25.237350 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-scripts\") pod \"nova-cell1-host-discover-7vrrr\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") " pod="openstack/nova-cell1-host-discover-7vrrr" Feb 17 15:50:25.237549 master-0 kubenswrapper[26425]: I0217 15:50:25.237317 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-scripts\") pod \"nova-cell1-cell-mapping-5x59m\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") " pod="openstack/nova-cell1-cell-mapping-5x59m" Feb 17 15:50:25.238019 master-0 kubenswrapper[26425]: I0217 15:50:25.237872 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-combined-ca-bundle\") pod \"nova-cell1-host-discover-7vrrr\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") " pod="openstack/nova-cell1-host-discover-7vrrr" Feb 17 15:50:25.238019 master-0 kubenswrapper[26425]: I0217 15:50:25.237916 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5x59m\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") " pod="openstack/nova-cell1-cell-mapping-5x59m" Feb 17 15:50:25.240794 master-0 kubenswrapper[26425]: I0217 15:50:25.240758 26425 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hb95k\" (UniqueName: \"kubernetes.io/projected/6afe145e-02b2-47d2-9b6c-b828c271aa68-kube-api-access-hb95k\") pod \"nova-cell1-host-discover-7vrrr\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") " pod="openstack/nova-cell1-host-discover-7vrrr" Feb 17 15:50:25.242155 master-0 kubenswrapper[26425]: I0217 15:50:25.242091 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq74p\" (UniqueName: \"kubernetes.io/projected/c6af31fb-9e97-4939-9320-2ed232a3a039-kube-api-access-nq74p\") pod \"nova-cell1-cell-mapping-5x59m\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") " pod="openstack/nova-cell1-cell-mapping-5x59m" Feb 17 15:50:25.397582 master-0 kubenswrapper[26425]: I0217 15:50:25.397543 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 15:50:25.460977 master-0 kubenswrapper[26425]: I0217 15:50:25.445060 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5x59m" Feb 17 15:50:25.460977 master-0 kubenswrapper[26425]: I0217 15:50:25.458777 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-7vrrr" Feb 17 15:50:25.532939 master-0 kubenswrapper[26425]: I0217 15:50:25.532839 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5f535cd-248a-48c9-a388-5b574dd3db17-combined-ca-bundle\") pod \"a5f535cd-248a-48c9-a388-5b574dd3db17\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " Feb 17 15:50:25.532939 master-0 kubenswrapper[26425]: I0217 15:50:25.532909 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4jz4\" (UniqueName: \"kubernetes.io/projected/a5f535cd-248a-48c9-a388-5b574dd3db17-kube-api-access-b4jz4\") pod \"a5f535cd-248a-48c9-a388-5b574dd3db17\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " Feb 17 15:50:25.533088 master-0 kubenswrapper[26425]: I0217 15:50:25.533059 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5f535cd-248a-48c9-a388-5b574dd3db17-logs\") pod \"a5f535cd-248a-48c9-a388-5b574dd3db17\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " Feb 17 15:50:25.533144 master-0 kubenswrapper[26425]: I0217 15:50:25.533099 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5f535cd-248a-48c9-a388-5b574dd3db17-config-data\") pod \"a5f535cd-248a-48c9-a388-5b574dd3db17\" (UID: \"a5f535cd-248a-48c9-a388-5b574dd3db17\") " Feb 17 15:50:25.535165 master-0 kubenswrapper[26425]: I0217 15:50:25.535109 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5f535cd-248a-48c9-a388-5b574dd3db17-logs" (OuterVolumeSpecName: "logs") pod "a5f535cd-248a-48c9-a388-5b574dd3db17" (UID: "a5f535cd-248a-48c9-a388-5b574dd3db17"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:50:25.540515 master-0 kubenswrapper[26425]: I0217 15:50:25.540379 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f535cd-248a-48c9-a388-5b574dd3db17-kube-api-access-b4jz4" (OuterVolumeSpecName: "kube-api-access-b4jz4") pod "a5f535cd-248a-48c9-a388-5b574dd3db17" (UID: "a5f535cd-248a-48c9-a388-5b574dd3db17"). InnerVolumeSpecName "kube-api-access-b4jz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:50:25.568520 master-0 kubenswrapper[26425]: I0217 15:50:25.567670 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5f535cd-248a-48c9-a388-5b574dd3db17-config-data" (OuterVolumeSpecName: "config-data") pod "a5f535cd-248a-48c9-a388-5b574dd3db17" (UID: "a5f535cd-248a-48c9-a388-5b574dd3db17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:50:25.591634 master-0 kubenswrapper[26425]: I0217 15:50:25.591566 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5f535cd-248a-48c9-a388-5b574dd3db17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5f535cd-248a-48c9-a388-5b574dd3db17" (UID: "a5f535cd-248a-48c9-a388-5b574dd3db17"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:50:25.636366 master-0 kubenswrapper[26425]: I0217 15:50:25.636276 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5f535cd-248a-48c9-a388-5b574dd3db17-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:25.636366 master-0 kubenswrapper[26425]: I0217 15:50:25.636357 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4jz4\" (UniqueName: \"kubernetes.io/projected/a5f535cd-248a-48c9-a388-5b574dd3db17-kube-api-access-b4jz4\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:25.636366 master-0 kubenswrapper[26425]: I0217 15:50:25.636371 26425 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5f535cd-248a-48c9-a388-5b574dd3db17-logs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:25.636637 master-0 kubenswrapper[26425]: I0217 15:50:25.636383 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5f535cd-248a-48c9-a388-5b574dd3db17-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:25.805238 master-0 kubenswrapper[26425]: I0217 15:50:25.804482 26425 generic.go:334] "Generic (PLEG): container finished" podID="a5f535cd-248a-48c9-a388-5b574dd3db17" containerID="71f5cba64b5dc5b8eb01bd0ff99623cd15aff105b47536eec803761ac70eec6a" exitCode=0 Feb 17 15:50:25.805238 master-0 kubenswrapper[26425]: I0217 15:50:25.804583 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 15:50:25.805238 master-0 kubenswrapper[26425]: I0217 15:50:25.804664 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a5f535cd-248a-48c9-a388-5b574dd3db17","Type":"ContainerDied","Data":"71f5cba64b5dc5b8eb01bd0ff99623cd15aff105b47536eec803761ac70eec6a"} Feb 17 15:50:25.805238 master-0 kubenswrapper[26425]: I0217 15:50:25.804730 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a5f535cd-248a-48c9-a388-5b574dd3db17","Type":"ContainerDied","Data":"5d492f4cc3efab59b39fcfb8a5ed6879eabcfec019f4e6e10315cb8fac3299fb"} Feb 17 15:50:25.805238 master-0 kubenswrapper[26425]: I0217 15:50:25.804758 26425 scope.go:117] "RemoveContainer" containerID="71f5cba64b5dc5b8eb01bd0ff99623cd15aff105b47536eec803761ac70eec6a" Feb 17 15:50:25.845689 master-0 kubenswrapper[26425]: I0217 15:50:25.844498 26425 scope.go:117] "RemoveContainer" containerID="243d67ced2cc7760a59cde12ce0ae4011e9c50070d9396a2e6ddabdbea483420" Feb 17 15:50:25.877167 master-0 kubenswrapper[26425]: I0217 15:50:25.876332 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 15:50:25.881086 master-0 kubenswrapper[26425]: I0217 15:50:25.880726 26425 scope.go:117] "RemoveContainer" containerID="71f5cba64b5dc5b8eb01bd0ff99623cd15aff105b47536eec803761ac70eec6a" Feb 17 15:50:25.886015 master-0 kubenswrapper[26425]: E0217 15:50:25.883702 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71f5cba64b5dc5b8eb01bd0ff99623cd15aff105b47536eec803761ac70eec6a\": container with ID starting with 71f5cba64b5dc5b8eb01bd0ff99623cd15aff105b47536eec803761ac70eec6a not found: ID does not exist" containerID="71f5cba64b5dc5b8eb01bd0ff99623cd15aff105b47536eec803761ac70eec6a" Feb 17 15:50:25.886015 master-0 kubenswrapper[26425]: I0217 15:50:25.883762 26425 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f5cba64b5dc5b8eb01bd0ff99623cd15aff105b47536eec803761ac70eec6a"} err="failed to get container status \"71f5cba64b5dc5b8eb01bd0ff99623cd15aff105b47536eec803761ac70eec6a\": rpc error: code = NotFound desc = could not find container \"71f5cba64b5dc5b8eb01bd0ff99623cd15aff105b47536eec803761ac70eec6a\": container with ID starting with 71f5cba64b5dc5b8eb01bd0ff99623cd15aff105b47536eec803761ac70eec6a not found: ID does not exist" Feb 17 15:50:25.886015 master-0 kubenswrapper[26425]: I0217 15:50:25.883788 26425 scope.go:117] "RemoveContainer" containerID="243d67ced2cc7760a59cde12ce0ae4011e9c50070d9396a2e6ddabdbea483420" Feb 17 15:50:25.889446 master-0 kubenswrapper[26425]: E0217 15:50:25.889201 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"243d67ced2cc7760a59cde12ce0ae4011e9c50070d9396a2e6ddabdbea483420\": container with ID starting with 243d67ced2cc7760a59cde12ce0ae4011e9c50070d9396a2e6ddabdbea483420 not found: ID does not exist" containerID="243d67ced2cc7760a59cde12ce0ae4011e9c50070d9396a2e6ddabdbea483420" Feb 17 15:50:25.889446 master-0 kubenswrapper[26425]: I0217 15:50:25.889260 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"243d67ced2cc7760a59cde12ce0ae4011e9c50070d9396a2e6ddabdbea483420"} err="failed to get container status \"243d67ced2cc7760a59cde12ce0ae4011e9c50070d9396a2e6ddabdbea483420\": rpc error: code = NotFound desc = could not find container \"243d67ced2cc7760a59cde12ce0ae4011e9c50070d9396a2e6ddabdbea483420\": container with ID starting with 243d67ced2cc7760a59cde12ce0ae4011e9c50070d9396a2e6ddabdbea483420 not found: ID does not exist" Feb 17 15:50:25.910303 master-0 kubenswrapper[26425]: I0217 15:50:25.908801 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 15:50:25.944617 master-0 kubenswrapper[26425]: I0217 
15:50:25.926963 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 15:50:25.944617 master-0 kubenswrapper[26425]: E0217 15:50:25.927571 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f535cd-248a-48c9-a388-5b574dd3db17" containerName="nova-api-api" Feb 17 15:50:25.944617 master-0 kubenswrapper[26425]: I0217 15:50:25.927587 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f535cd-248a-48c9-a388-5b574dd3db17" containerName="nova-api-api" Feb 17 15:50:25.944617 master-0 kubenswrapper[26425]: E0217 15:50:25.927652 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f535cd-248a-48c9-a388-5b574dd3db17" containerName="nova-api-log" Feb 17 15:50:25.944617 master-0 kubenswrapper[26425]: I0217 15:50:25.927658 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f535cd-248a-48c9-a388-5b574dd3db17" containerName="nova-api-log" Feb 17 15:50:25.944617 master-0 kubenswrapper[26425]: I0217 15:50:25.927929 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5f535cd-248a-48c9-a388-5b574dd3db17" containerName="nova-api-api" Feb 17 15:50:25.944617 master-0 kubenswrapper[26425]: I0217 15:50:25.927947 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5f535cd-248a-48c9-a388-5b574dd3db17" containerName="nova-api-log" Feb 17 15:50:25.944617 master-0 kubenswrapper[26425]: I0217 15:50:25.933366 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 15:50:25.944617 master-0 kubenswrapper[26425]: I0217 15:50:25.944171 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 15:50:25.947060 master-0 kubenswrapper[26425]: I0217 15:50:25.946336 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 15:50:25.947060 master-0 kubenswrapper[26425]: I0217 15:50:25.946923 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 15:50:25.947060 master-0 kubenswrapper[26425]: I0217 15:50:25.946988 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 15:50:26.015083 master-0 kubenswrapper[26425]: I0217 15:50:26.015037 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5x59m"] Feb 17 15:50:26.019165 master-0 kubenswrapper[26425]: W0217 15:50:26.019132 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6af31fb_9e97_4939_9320_2ed232a3a039.slice/crio-b10673cba34903b60f23bb9eda3f4d03ec87af72ec735352016e4dc122205a07 WatchSource:0}: Error finding container b10673cba34903b60f23bb9eda3f4d03ec87af72ec735352016e4dc122205a07: Status 404 returned error can't find the container with id b10673cba34903b60f23bb9eda3f4d03ec87af72ec735352016e4dc122205a07 Feb 17 15:50:26.048515 master-0 kubenswrapper[26425]: I0217 15:50:26.048445 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123d9b9d-4755-4021-afc0-3faa39c76737-logs\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.048515 master-0 kubenswrapper[26425]: I0217 15:50:26.048515 26425 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-internal-tls-certs\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.048683 master-0 kubenswrapper[26425]: I0217 15:50:26.048600 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhlzl\" (UniqueName: \"kubernetes.io/projected/123d9b9d-4755-4021-afc0-3faa39c76737-kube-api-access-qhlzl\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.048683 master-0 kubenswrapper[26425]: I0217 15:50:26.048636 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-public-tls-certs\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.048758 master-0 kubenswrapper[26425]: I0217 15:50:26.048738 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.048797 master-0 kubenswrapper[26425]: I0217 15:50:26.048788 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-config-data\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.132371 master-0 kubenswrapper[26425]: W0217 15:50:26.132278 26425 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6afe145e_02b2_47d2_9b6c_b828c271aa68.slice/crio-a868bf5f71ea4b71431805802981e003d81cccc764d307b0633ae4b77a1ad40a WatchSource:0}: Error finding container a868bf5f71ea4b71431805802981e003d81cccc764d307b0633ae4b77a1ad40a: Status 404 returned error can't find the container with id a868bf5f71ea4b71431805802981e003d81cccc764d307b0633ae4b77a1ad40a Feb 17 15:50:26.137610 master-0 kubenswrapper[26425]: I0217 15:50:26.137536 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-7vrrr"] Feb 17 15:50:26.151223 master-0 kubenswrapper[26425]: I0217 15:50:26.151134 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.151309 master-0 kubenswrapper[26425]: I0217 15:50:26.151281 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-config-data\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.151492 master-0 kubenswrapper[26425]: I0217 15:50:26.151427 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123d9b9d-4755-4021-afc0-3faa39c76737-logs\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.151492 master-0 kubenswrapper[26425]: I0217 15:50:26.151478 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.151590 master-0 kubenswrapper[26425]: I0217 15:50:26.151560 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhlzl\" (UniqueName: \"kubernetes.io/projected/123d9b9d-4755-4021-afc0-3faa39c76737-kube-api-access-qhlzl\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.151633 master-0 kubenswrapper[26425]: I0217 15:50:26.151593 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-public-tls-certs\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.152231 master-0 kubenswrapper[26425]: I0217 15:50:26.152179 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123d9b9d-4755-4021-afc0-3faa39c76737-logs\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.155759 master-0 kubenswrapper[26425]: I0217 15:50:26.155584 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-public-tls-certs\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.156532 master-0 kubenswrapper[26425]: I0217 15:50:26.156447 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.160726 master-0 kubenswrapper[26425]: I0217 15:50:26.160664 26425 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-internal-tls-certs\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.160900 master-0 kubenswrapper[26425]: I0217 15:50:26.160874 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-config-data\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.175793 master-0 kubenswrapper[26425]: I0217 15:50:26.175737 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhlzl\" (UniqueName: \"kubernetes.io/projected/123d9b9d-4755-4021-afc0-3faa39c76737-kube-api-access-qhlzl\") pod \"nova-api-0\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") " pod="openstack/nova-api-0" Feb 17 15:50:26.264921 master-0 kubenswrapper[26425]: I0217 15:50:26.264803 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 15:50:26.413053 master-0 kubenswrapper[26425]: I0217 15:50:26.412983 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5f535cd-248a-48c9-a388-5b574dd3db17" path="/var/lib/kubelet/pods/a5f535cd-248a-48c9-a388-5b574dd3db17/volumes" Feb 17 15:50:26.828999 master-0 kubenswrapper[26425]: I0217 15:50:26.828940 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5x59m" event={"ID":"c6af31fb-9e97-4939-9320-2ed232a3a039","Type":"ContainerStarted","Data":"1c78601402238fce171a7e1f66830051a044eb87c43bdb26c3a5847d62615724"} Feb 17 15:50:26.829097 master-0 kubenswrapper[26425]: I0217 15:50:26.829003 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5x59m" event={"ID":"c6af31fb-9e97-4939-9320-2ed232a3a039","Type":"ContainerStarted","Data":"b10673cba34903b60f23bb9eda3f4d03ec87af72ec735352016e4dc122205a07"} Feb 17 15:50:26.829641 master-0 kubenswrapper[26425]: I0217 15:50:26.829551 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 15:50:26.836944 master-0 kubenswrapper[26425]: I0217 15:50:26.836858 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-7vrrr" event={"ID":"6afe145e-02b2-47d2-9b6c-b828c271aa68","Type":"ContainerStarted","Data":"fb2581f4bbd1b0dec1d9b05d2d73cf6a6f4673b29c16df7a7ea16e4a276ae4f7"} Feb 17 15:50:26.837034 master-0 kubenswrapper[26425]: I0217 15:50:26.836949 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-7vrrr" event={"ID":"6afe145e-02b2-47d2-9b6c-b828c271aa68","Type":"ContainerStarted","Data":"a868bf5f71ea4b71431805802981e003d81cccc764d307b0633ae4b77a1ad40a"} Feb 17 15:50:26.860683 master-0 kubenswrapper[26425]: I0217 15:50:26.860604 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5x59m" 
podStartSLOduration=1.860584313 podStartE2EDuration="1.860584313s" podCreationTimestamp="2026-02-17 15:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:50:26.854098946 +0000 UTC m=+2088.745822774" watchObservedRunningTime="2026-02-17 15:50:26.860584313 +0000 UTC m=+2088.752308131" Feb 17 15:50:26.895864 master-0 kubenswrapper[26425]: I0217 15:50:26.895765 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-7vrrr" podStartSLOduration=1.895745841 podStartE2EDuration="1.895745841s" podCreationTimestamp="2026-02-17 15:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:50:26.88035643 +0000 UTC m=+2088.772080268" watchObservedRunningTime="2026-02-17 15:50:26.895745841 +0000 UTC m=+2088.787469669" Feb 17 15:50:27.851295 master-0 kubenswrapper[26425]: I0217 15:50:27.851187 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"123d9b9d-4755-4021-afc0-3faa39c76737","Type":"ContainerStarted","Data":"bb2b9338e04a990b2e96845b19cedd71d05699d6de202b162cb70a12033a1c2d"} Feb 17 15:50:27.851295 master-0 kubenswrapper[26425]: I0217 15:50:27.851273 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"123d9b9d-4755-4021-afc0-3faa39c76737","Type":"ContainerStarted","Data":"7cd1e8345f1c8f26b430523cb3c3d659103ebc42c8723a917cd87a7de7108cbe"} Feb 17 15:50:27.851295 master-0 kubenswrapper[26425]: I0217 15:50:27.851294 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"123d9b9d-4755-4021-afc0-3faa39c76737","Type":"ContainerStarted","Data":"bdb0951acfdd59014c233212127a2d8835e7c561221e2bf74fc8da564875c735"} Feb 17 15:50:27.881942 master-0 kubenswrapper[26425]: I0217 15:50:27.881846 26425 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.881826348 podStartE2EDuration="2.881826348s" podCreationTimestamp="2026-02-17 15:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:50:27.87688302 +0000 UTC m=+2089.768606848" watchObservedRunningTime="2026-02-17 15:50:27.881826348 +0000 UTC m=+2089.773550166" Feb 17 15:50:28.289771 master-0 kubenswrapper[26425]: I0217 15:50:28.289636 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8f95c8447-f78pp" Feb 17 15:50:28.387968 master-0 kubenswrapper[26425]: I0217 15:50:28.387883 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78d5d45447-bfqg5"] Feb 17 15:50:28.388180 master-0 kubenswrapper[26425]: I0217 15:50:28.388158 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" podUID="b02a0a47-ae20-4062-bd49-80724d6f70fd" containerName="dnsmasq-dns" containerID="cri-o://1cae0dcde166c1253a4174b944e897efb4cfda61044b831b5015242542209a17" gracePeriod=10 Feb 17 15:50:28.864379 master-0 kubenswrapper[26425]: I0217 15:50:28.864329 26425 generic.go:334] "Generic (PLEG): container finished" podID="b02a0a47-ae20-4062-bd49-80724d6f70fd" containerID="1cae0dcde166c1253a4174b944e897efb4cfda61044b831b5015242542209a17" exitCode=0 Feb 17 15:50:28.864861 master-0 kubenswrapper[26425]: I0217 15:50:28.864425 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" event={"ID":"b02a0a47-ae20-4062-bd49-80724d6f70fd","Type":"ContainerDied","Data":"1cae0dcde166c1253a4174b944e897efb4cfda61044b831b5015242542209a17"} Feb 17 15:50:29.169683 master-0 kubenswrapper[26425]: I0217 15:50:29.169541 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:50:29.361955 master-0 kubenswrapper[26425]: I0217 15:50:29.361879 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-config\") pod \"b02a0a47-ae20-4062-bd49-80724d6f70fd\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " Feb 17 15:50:29.362176 master-0 kubenswrapper[26425]: I0217 15:50:29.362003 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nv5k\" (UniqueName: \"kubernetes.io/projected/b02a0a47-ae20-4062-bd49-80724d6f70fd-kube-api-access-6nv5k\") pod \"b02a0a47-ae20-4062-bd49-80724d6f70fd\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " Feb 17 15:50:29.362218 master-0 kubenswrapper[26425]: I0217 15:50:29.362184 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-ovsdbserver-nb\") pod \"b02a0a47-ae20-4062-bd49-80724d6f70fd\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " Feb 17 15:50:29.362381 master-0 kubenswrapper[26425]: I0217 15:50:29.362350 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-ovsdbserver-sb\") pod \"b02a0a47-ae20-4062-bd49-80724d6f70fd\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " Feb 17 15:50:29.362449 master-0 kubenswrapper[26425]: I0217 15:50:29.362421 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-dns-svc\") pod \"b02a0a47-ae20-4062-bd49-80724d6f70fd\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " Feb 17 15:50:29.362653 master-0 kubenswrapper[26425]: I0217 15:50:29.362502 26425 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-dns-swift-storage-0\") pod \"b02a0a47-ae20-4062-bd49-80724d6f70fd\" (UID: \"b02a0a47-ae20-4062-bd49-80724d6f70fd\") " Feb 17 15:50:29.369095 master-0 kubenswrapper[26425]: I0217 15:50:29.368993 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b02a0a47-ae20-4062-bd49-80724d6f70fd-kube-api-access-6nv5k" (OuterVolumeSpecName: "kube-api-access-6nv5k") pod "b02a0a47-ae20-4062-bd49-80724d6f70fd" (UID: "b02a0a47-ae20-4062-bd49-80724d6f70fd"). InnerVolumeSpecName "kube-api-access-6nv5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:50:29.430807 master-0 kubenswrapper[26425]: I0217 15:50:29.430656 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-config" (OuterVolumeSpecName: "config") pod "b02a0a47-ae20-4062-bd49-80724d6f70fd" (UID: "b02a0a47-ae20-4062-bd49-80724d6f70fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:50:29.431488 master-0 kubenswrapper[26425]: I0217 15:50:29.431392 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b02a0a47-ae20-4062-bd49-80724d6f70fd" (UID: "b02a0a47-ae20-4062-bd49-80724d6f70fd"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:50:29.431891 master-0 kubenswrapper[26425]: I0217 15:50:29.431837 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b02a0a47-ae20-4062-bd49-80724d6f70fd" (UID: "b02a0a47-ae20-4062-bd49-80724d6f70fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:50:29.437360 master-0 kubenswrapper[26425]: I0217 15:50:29.437289 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b02a0a47-ae20-4062-bd49-80724d6f70fd" (UID: "b02a0a47-ae20-4062-bd49-80724d6f70fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:50:29.441500 master-0 kubenswrapper[26425]: I0217 15:50:29.441376 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b02a0a47-ae20-4062-bd49-80724d6f70fd" (UID: "b02a0a47-ae20-4062-bd49-80724d6f70fd"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:50:29.472094 master-0 kubenswrapper[26425]: I0217 15:50:29.472011 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:29.472094 master-0 kubenswrapper[26425]: I0217 15:50:29.472070 26425 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:29.472094 master-0 kubenswrapper[26425]: I0217 15:50:29.472083 26425 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:29.472094 master-0 kubenswrapper[26425]: I0217 15:50:29.472095 26425 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-config\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:29.472094 master-0 kubenswrapper[26425]: I0217 15:50:29.472106 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nv5k\" (UniqueName: \"kubernetes.io/projected/b02a0a47-ae20-4062-bd49-80724d6f70fd-kube-api-access-6nv5k\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:29.472094 master-0 kubenswrapper[26425]: I0217 15:50:29.472115 26425 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b02a0a47-ae20-4062-bd49-80724d6f70fd-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:29.879636 master-0 kubenswrapper[26425]: I0217 15:50:29.879567 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" 
event={"ID":"b02a0a47-ae20-4062-bd49-80724d6f70fd","Type":"ContainerDied","Data":"a838040a28e948990b469a3af8ac3a1a8bdecaef357e1b55f9e07ca1aa70b8db"} Feb 17 15:50:29.879636 master-0 kubenswrapper[26425]: I0217 15:50:29.879630 26425 scope.go:117] "RemoveContainer" containerID="1cae0dcde166c1253a4174b944e897efb4cfda61044b831b5015242542209a17" Feb 17 15:50:29.880244 master-0 kubenswrapper[26425]: I0217 15:50:29.879752 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d5d45447-bfqg5" Feb 17 15:50:29.884599 master-0 kubenswrapper[26425]: I0217 15:50:29.883865 26425 generic.go:334] "Generic (PLEG): container finished" podID="6afe145e-02b2-47d2-9b6c-b828c271aa68" containerID="fb2581f4bbd1b0dec1d9b05d2d73cf6a6f4673b29c16df7a7ea16e4a276ae4f7" exitCode=0 Feb 17 15:50:29.884599 master-0 kubenswrapper[26425]: I0217 15:50:29.883943 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-7vrrr" event={"ID":"6afe145e-02b2-47d2-9b6c-b828c271aa68","Type":"ContainerDied","Data":"fb2581f4bbd1b0dec1d9b05d2d73cf6a6f4673b29c16df7a7ea16e4a276ae4f7"} Feb 17 15:50:29.920834 master-0 kubenswrapper[26425]: I0217 15:50:29.920775 26425 scope.go:117] "RemoveContainer" containerID="723aef3295e4a703c4c0db3247739a020d285be26b92771a83b705d7fe87188e" Feb 17 15:50:29.964919 master-0 kubenswrapper[26425]: I0217 15:50:29.964847 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78d5d45447-bfqg5"] Feb 17 15:50:29.983593 master-0 kubenswrapper[26425]: I0217 15:50:29.983527 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78d5d45447-bfqg5"] Feb 17 15:50:30.411861 master-0 kubenswrapper[26425]: I0217 15:50:30.411792 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b02a0a47-ae20-4062-bd49-80724d6f70fd" path="/var/lib/kubelet/pods/b02a0a47-ae20-4062-bd49-80724d6f70fd/volumes" Feb 17 15:50:31.415211 master-0 
kubenswrapper[26425]: I0217 15:50:31.415148 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-7vrrr"
Feb 17 15:50:31.526512 master-0 kubenswrapper[26425]: I0217 15:50:31.526417 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb95k\" (UniqueName: \"kubernetes.io/projected/6afe145e-02b2-47d2-9b6c-b828c271aa68-kube-api-access-hb95k\") pod \"6afe145e-02b2-47d2-9b6c-b828c271aa68\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") "
Feb 17 15:50:31.526870 master-0 kubenswrapper[26425]: I0217 15:50:31.526593 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-scripts\") pod \"6afe145e-02b2-47d2-9b6c-b828c271aa68\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") "
Feb 17 15:50:31.526870 master-0 kubenswrapper[26425]: I0217 15:50:31.526684 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-combined-ca-bundle\") pod \"6afe145e-02b2-47d2-9b6c-b828c271aa68\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") "
Feb 17 15:50:31.526963 master-0 kubenswrapper[26425]: I0217 15:50:31.526945 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-config-data\") pod \"6afe145e-02b2-47d2-9b6c-b828c271aa68\" (UID: \"6afe145e-02b2-47d2-9b6c-b828c271aa68\") "
Feb 17 15:50:31.529710 master-0 kubenswrapper[26425]: I0217 15:50:31.529656 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6afe145e-02b2-47d2-9b6c-b828c271aa68-kube-api-access-hb95k" (OuterVolumeSpecName: "kube-api-access-hb95k") pod "6afe145e-02b2-47d2-9b6c-b828c271aa68" (UID: "6afe145e-02b2-47d2-9b6c-b828c271aa68"). InnerVolumeSpecName "kube-api-access-hb95k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:50:31.529954 master-0 kubenswrapper[26425]: I0217 15:50:31.529916 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-scripts" (OuterVolumeSpecName: "scripts") pod "6afe145e-02b2-47d2-9b6c-b828c271aa68" (UID: "6afe145e-02b2-47d2-9b6c-b828c271aa68"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:50:31.553832 master-0 kubenswrapper[26425]: I0217 15:50:31.553709 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6afe145e-02b2-47d2-9b6c-b828c271aa68" (UID: "6afe145e-02b2-47d2-9b6c-b828c271aa68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:50:31.569214 master-0 kubenswrapper[26425]: I0217 15:50:31.568978 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-config-data" (OuterVolumeSpecName: "config-data") pod "6afe145e-02b2-47d2-9b6c-b828c271aa68" (UID: "6afe145e-02b2-47d2-9b6c-b828c271aa68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:50:31.631023 master-0 kubenswrapper[26425]: I0217 15:50:31.630957 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-config-data\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:31.631424 master-0 kubenswrapper[26425]: I0217 15:50:31.631393 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb95k\" (UniqueName: \"kubernetes.io/projected/6afe145e-02b2-47d2-9b6c-b828c271aa68-kube-api-access-hb95k\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:31.631619 master-0 kubenswrapper[26425]: I0217 15:50:31.631594 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-scripts\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:31.631778 master-0 kubenswrapper[26425]: I0217 15:50:31.631755 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afe145e-02b2-47d2-9b6c-b828c271aa68-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:31.909425 master-0 kubenswrapper[26425]: I0217 15:50:31.909340 26425 generic.go:334] "Generic (PLEG): container finished" podID="c6af31fb-9e97-4939-9320-2ed232a3a039" containerID="1c78601402238fce171a7e1f66830051a044eb87c43bdb26c3a5847d62615724" exitCode=0
Feb 17 15:50:31.909738 master-0 kubenswrapper[26425]: I0217 15:50:31.909444 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5x59m" event={"ID":"c6af31fb-9e97-4939-9320-2ed232a3a039","Type":"ContainerDied","Data":"1c78601402238fce171a7e1f66830051a044eb87c43bdb26c3a5847d62615724"}
Feb 17 15:50:31.911104 master-0 kubenswrapper[26425]: I0217 15:50:31.911056 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-7vrrr" event={"ID":"6afe145e-02b2-47d2-9b6c-b828c271aa68","Type":"ContainerDied","Data":"a868bf5f71ea4b71431805802981e003d81cccc764d307b0633ae4b77a1ad40a"}
Feb 17 15:50:31.911104 master-0 kubenswrapper[26425]: I0217 15:50:31.911086 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a868bf5f71ea4b71431805802981e003d81cccc764d307b0633ae4b77a1ad40a"
Feb 17 15:50:31.911279 master-0 kubenswrapper[26425]: I0217 15:50:31.911135 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-7vrrr"
Feb 17 15:50:33.336171 master-0 kubenswrapper[26425]: I0217 15:50:33.335890 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5x59m"
Feb 17 15:50:33.472079 master-0 kubenswrapper[26425]: I0217 15:50:33.471916 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-config-data\") pod \"c6af31fb-9e97-4939-9320-2ed232a3a039\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") "
Feb 17 15:50:33.472079 master-0 kubenswrapper[26425]: I0217 15:50:33.472064 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq74p\" (UniqueName: \"kubernetes.io/projected/c6af31fb-9e97-4939-9320-2ed232a3a039-kube-api-access-nq74p\") pod \"c6af31fb-9e97-4939-9320-2ed232a3a039\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") "
Feb 17 15:50:33.472512 master-0 kubenswrapper[26425]: I0217 15:50:33.472203 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-combined-ca-bundle\") pod \"c6af31fb-9e97-4939-9320-2ed232a3a039\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") "
Feb 17 15:50:33.472512 master-0 kubenswrapper[26425]: I0217 15:50:33.472328 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-scripts\") pod \"c6af31fb-9e97-4939-9320-2ed232a3a039\" (UID: \"c6af31fb-9e97-4939-9320-2ed232a3a039\") "
Feb 17 15:50:33.475571 master-0 kubenswrapper[26425]: I0217 15:50:33.475495 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6af31fb-9e97-4939-9320-2ed232a3a039-kube-api-access-nq74p" (OuterVolumeSpecName: "kube-api-access-nq74p") pod "c6af31fb-9e97-4939-9320-2ed232a3a039" (UID: "c6af31fb-9e97-4939-9320-2ed232a3a039"). InnerVolumeSpecName "kube-api-access-nq74p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:50:33.476183 master-0 kubenswrapper[26425]: I0217 15:50:33.476124 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-scripts" (OuterVolumeSpecName: "scripts") pod "c6af31fb-9e97-4939-9320-2ed232a3a039" (UID: "c6af31fb-9e97-4939-9320-2ed232a3a039"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:50:33.507776 master-0 kubenswrapper[26425]: I0217 15:50:33.507676 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6af31fb-9e97-4939-9320-2ed232a3a039" (UID: "c6af31fb-9e97-4939-9320-2ed232a3a039"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:50:33.517448 master-0 kubenswrapper[26425]: I0217 15:50:33.517384 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-config-data" (OuterVolumeSpecName: "config-data") pod "c6af31fb-9e97-4939-9320-2ed232a3a039" (UID: "c6af31fb-9e97-4939-9320-2ed232a3a039"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:50:33.576003 master-0 kubenswrapper[26425]: I0217 15:50:33.575916 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq74p\" (UniqueName: \"kubernetes.io/projected/c6af31fb-9e97-4939-9320-2ed232a3a039-kube-api-access-nq74p\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:33.576003 master-0 kubenswrapper[26425]: I0217 15:50:33.575981 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:33.576003 master-0 kubenswrapper[26425]: I0217 15:50:33.576003 26425 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-scripts\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:33.576003 master-0 kubenswrapper[26425]: I0217 15:50:33.576016 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6af31fb-9e97-4939-9320-2ed232a3a039-config-data\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:33.937743 master-0 kubenswrapper[26425]: I0217 15:50:33.937683 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5x59m" event={"ID":"c6af31fb-9e97-4939-9320-2ed232a3a039","Type":"ContainerDied","Data":"b10673cba34903b60f23bb9eda3f4d03ec87af72ec735352016e4dc122205a07"}
Feb 17 15:50:33.937743 master-0 kubenswrapper[26425]: I0217 15:50:33.937738 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b10673cba34903b60f23bb9eda3f4d03ec87af72ec735352016e4dc122205a07"
Feb 17 15:50:33.938043 master-0 kubenswrapper[26425]: I0217 15:50:33.937877 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5x59m"
Feb 17 15:50:34.229775 master-0 kubenswrapper[26425]: I0217 15:50:34.229643 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 15:50:34.230004 master-0 kubenswrapper[26425]: I0217 15:50:34.229929 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="123d9b9d-4755-4021-afc0-3faa39c76737" containerName="nova-api-log" containerID="cri-o://7cd1e8345f1c8f26b430523cb3c3d659103ebc42c8723a917cd87a7de7108cbe" gracePeriod=30
Feb 17 15:50:34.230333 master-0 kubenswrapper[26425]: I0217 15:50:34.230018 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="123d9b9d-4755-4021-afc0-3faa39c76737" containerName="nova-api-api" containerID="cri-o://bb2b9338e04a990b2e96845b19cedd71d05699d6de202b162cb70a12033a1c2d" gracePeriod=30
Feb 17 15:50:34.247491 master-0 kubenswrapper[26425]: I0217 15:50:34.246728 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 15:50:34.247491 master-0 kubenswrapper[26425]: I0217 15:50:34.247237 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1b3b7ad4-549f-4608-8119-6be98f4eace1" containerName="nova-scheduler-scheduler" containerID="cri-o://74d687299704df962ad83ce783f1d173253e649bcaa15dfde90606ca8862c69b" gracePeriod=30
Feb 17 15:50:34.296605 master-0 kubenswrapper[26425]: I0217 15:50:34.296530 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 15:50:34.297385 master-0 kubenswrapper[26425]: I0217 15:50:34.297331 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8a2748df-f1f1-44e8-a85d-856492a2af41" containerName="nova-metadata-log" containerID="cri-o://6708a81fd8f53db495c63956913c9beceb67e5efba92828c297212ebacaff11b" gracePeriod=30
Feb 17 15:50:34.298391 master-0 kubenswrapper[26425]: I0217 15:50:34.298319 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8a2748df-f1f1-44e8-a85d-856492a2af41" containerName="nova-metadata-metadata" containerID="cri-o://983cec4d428b3e7d082e5f9e2f5932b93ef0a15cc3fab7755627400edee06385" gracePeriod=30
Feb 17 15:50:34.953485 master-0 kubenswrapper[26425]: I0217 15:50:34.953414 26425 generic.go:334] "Generic (PLEG): container finished" podID="8a2748df-f1f1-44e8-a85d-856492a2af41" containerID="6708a81fd8f53db495c63956913c9beceb67e5efba92828c297212ebacaff11b" exitCode=143
Feb 17 15:50:34.954038 master-0 kubenswrapper[26425]: I0217 15:50:34.953580 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8a2748df-f1f1-44e8-a85d-856492a2af41","Type":"ContainerDied","Data":"6708a81fd8f53db495c63956913c9beceb67e5efba92828c297212ebacaff11b"}
Feb 17 15:50:34.958997 master-0 kubenswrapper[26425]: I0217 15:50:34.958947 26425 generic.go:334] "Generic (PLEG): container finished" podID="123d9b9d-4755-4021-afc0-3faa39c76737" containerID="bb2b9338e04a990b2e96845b19cedd71d05699d6de202b162cb70a12033a1c2d" exitCode=0
Feb 17 15:50:34.958997 master-0 kubenswrapper[26425]: I0217 15:50:34.958990 26425 generic.go:334] "Generic (PLEG): container finished" podID="123d9b9d-4755-4021-afc0-3faa39c76737" containerID="7cd1e8345f1c8f26b430523cb3c3d659103ebc42c8723a917cd87a7de7108cbe" exitCode=143
Feb 17 15:50:34.959286 master-0 kubenswrapper[26425]: I0217 15:50:34.959013 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"123d9b9d-4755-4021-afc0-3faa39c76737","Type":"ContainerDied","Data":"bb2b9338e04a990b2e96845b19cedd71d05699d6de202b162cb70a12033a1c2d"}
Feb 17 15:50:34.959286 master-0 kubenswrapper[26425]: I0217 15:50:34.959043 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"123d9b9d-4755-4021-afc0-3faa39c76737","Type":"ContainerDied","Data":"7cd1e8345f1c8f26b430523cb3c3d659103ebc42c8723a917cd87a7de7108cbe"}
Feb 17 15:50:34.959286 master-0 kubenswrapper[26425]: I0217 15:50:34.959057 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"123d9b9d-4755-4021-afc0-3faa39c76737","Type":"ContainerDied","Data":"bdb0951acfdd59014c233212127a2d8835e7c561221e2bf74fc8da564875c735"}
Feb 17 15:50:34.959286 master-0 kubenswrapper[26425]: I0217 15:50:34.959069 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdb0951acfdd59014c233212127a2d8835e7c561221e2bf74fc8da564875c735"
Feb 17 15:50:34.976395 master-0 kubenswrapper[26425]: I0217 15:50:34.976353 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 15:50:35.057732 master-0 kubenswrapper[26425]: I0217 15:50:35.057665 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-combined-ca-bundle\") pod \"123d9b9d-4755-4021-afc0-3faa39c76737\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") "
Feb 17 15:50:35.058375 master-0 kubenswrapper[26425]: I0217 15:50:35.058344 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-public-tls-certs\") pod \"123d9b9d-4755-4021-afc0-3faa39c76737\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") "
Feb 17 15:50:35.058561 master-0 kubenswrapper[26425]: I0217 15:50:35.058539 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-config-data\") pod \"123d9b9d-4755-4021-afc0-3faa39c76737\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") "
Feb 17 15:50:35.058784 master-0 kubenswrapper[26425]: I0217 15:50:35.058762 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhlzl\" (UniqueName: \"kubernetes.io/projected/123d9b9d-4755-4021-afc0-3faa39c76737-kube-api-access-qhlzl\") pod \"123d9b9d-4755-4021-afc0-3faa39c76737\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") "
Feb 17 15:50:35.058906 master-0 kubenswrapper[26425]: I0217 15:50:35.058886 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-internal-tls-certs\") pod \"123d9b9d-4755-4021-afc0-3faa39c76737\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") "
Feb 17 15:50:35.059048 master-0 kubenswrapper[26425]: I0217 15:50:35.059028 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123d9b9d-4755-4021-afc0-3faa39c76737-logs\") pod \"123d9b9d-4755-4021-afc0-3faa39c76737\" (UID: \"123d9b9d-4755-4021-afc0-3faa39c76737\") "
Feb 17 15:50:35.059619 master-0 kubenswrapper[26425]: I0217 15:50:35.059574 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/123d9b9d-4755-4021-afc0-3faa39c76737-logs" (OuterVolumeSpecName: "logs") pod "123d9b9d-4755-4021-afc0-3faa39c76737" (UID: "123d9b9d-4755-4021-afc0-3faa39c76737"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:50:35.060610 master-0 kubenswrapper[26425]: I0217 15:50:35.060578 26425 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123d9b9d-4755-4021-afc0-3faa39c76737-logs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:35.070504 master-0 kubenswrapper[26425]: I0217 15:50:35.070388 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123d9b9d-4755-4021-afc0-3faa39c76737-kube-api-access-qhlzl" (OuterVolumeSpecName: "kube-api-access-qhlzl") pod "123d9b9d-4755-4021-afc0-3faa39c76737" (UID: "123d9b9d-4755-4021-afc0-3faa39c76737"). InnerVolumeSpecName "kube-api-access-qhlzl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:50:35.098383 master-0 kubenswrapper[26425]: I0217 15:50:35.096094 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-config-data" (OuterVolumeSpecName: "config-data") pod "123d9b9d-4755-4021-afc0-3faa39c76737" (UID: "123d9b9d-4755-4021-afc0-3faa39c76737"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:50:35.103797 master-0 kubenswrapper[26425]: I0217 15:50:35.102491 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "123d9b9d-4755-4021-afc0-3faa39c76737" (UID: "123d9b9d-4755-4021-afc0-3faa39c76737"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:50:35.126682 master-0 kubenswrapper[26425]: I0217 15:50:35.125522 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "123d9b9d-4755-4021-afc0-3faa39c76737" (UID: "123d9b9d-4755-4021-afc0-3faa39c76737"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:50:35.132166 master-0 kubenswrapper[26425]: I0217 15:50:35.132114 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "123d9b9d-4755-4021-afc0-3faa39c76737" (UID: "123d9b9d-4755-4021-afc0-3faa39c76737"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:50:35.163108 master-0 kubenswrapper[26425]: I0217 15:50:35.163047 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:35.163108 master-0 kubenswrapper[26425]: I0217 15:50:35.163096 26425 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:35.163108 master-0 kubenswrapper[26425]: I0217 15:50:35.163112 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-config-data\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:35.163418 master-0 kubenswrapper[26425]: I0217 15:50:35.163127 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhlzl\" (UniqueName: \"kubernetes.io/projected/123d9b9d-4755-4021-afc0-3faa39c76737-kube-api-access-qhlzl\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:35.163418 master-0 kubenswrapper[26425]: I0217 15:50:35.163141 26425 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/123d9b9d-4755-4021-afc0-3faa39c76737-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:35.739437 master-0 kubenswrapper[26425]: I0217 15:50:35.739369 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 15:50:35.789245 master-0 kubenswrapper[26425]: I0217 15:50:35.782443 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2zgk\" (UniqueName: \"kubernetes.io/projected/1b3b7ad4-549f-4608-8119-6be98f4eace1-kube-api-access-s2zgk\") pod \"1b3b7ad4-549f-4608-8119-6be98f4eace1\" (UID: \"1b3b7ad4-549f-4608-8119-6be98f4eace1\") "
Feb 17 15:50:35.789245 master-0 kubenswrapper[26425]: I0217 15:50:35.785436 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3b7ad4-549f-4608-8119-6be98f4eace1-config-data\") pod \"1b3b7ad4-549f-4608-8119-6be98f4eace1\" (UID: \"1b3b7ad4-549f-4608-8119-6be98f4eace1\") "
Feb 17 15:50:35.789245 master-0 kubenswrapper[26425]: I0217 15:50:35.785495 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3b7ad4-549f-4608-8119-6be98f4eace1-combined-ca-bundle\") pod \"1b3b7ad4-549f-4608-8119-6be98f4eace1\" (UID: \"1b3b7ad4-549f-4608-8119-6be98f4eace1\") "
Feb 17 15:50:35.789245 master-0 kubenswrapper[26425]: I0217 15:50:35.787166 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b3b7ad4-549f-4608-8119-6be98f4eace1-kube-api-access-s2zgk" (OuterVolumeSpecName: "kube-api-access-s2zgk") pod "1b3b7ad4-549f-4608-8119-6be98f4eace1" (UID: "1b3b7ad4-549f-4608-8119-6be98f4eace1"). InnerVolumeSpecName "kube-api-access-s2zgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:50:35.789245 master-0 kubenswrapper[26425]: I0217 15:50:35.787842 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2zgk\" (UniqueName: \"kubernetes.io/projected/1b3b7ad4-549f-4608-8119-6be98f4eace1-kube-api-access-s2zgk\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:35.827161 master-0 kubenswrapper[26425]: I0217 15:50:35.827100 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3b7ad4-549f-4608-8119-6be98f4eace1-config-data" (OuterVolumeSpecName: "config-data") pod "1b3b7ad4-549f-4608-8119-6be98f4eace1" (UID: "1b3b7ad4-549f-4608-8119-6be98f4eace1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:50:35.843221 master-0 kubenswrapper[26425]: I0217 15:50:35.843158 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3b7ad4-549f-4608-8119-6be98f4eace1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b3b7ad4-549f-4608-8119-6be98f4eace1" (UID: "1b3b7ad4-549f-4608-8119-6be98f4eace1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:50:35.890222 master-0 kubenswrapper[26425]: I0217 15:50:35.890173 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3b7ad4-549f-4608-8119-6be98f4eace1-config-data\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:35.890222 master-0 kubenswrapper[26425]: I0217 15:50:35.890210 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3b7ad4-549f-4608-8119-6be98f4eace1-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 17 15:50:35.974234 master-0 kubenswrapper[26425]: I0217 15:50:35.974184 26425 generic.go:334] "Generic (PLEG): container finished" podID="1b3b7ad4-549f-4608-8119-6be98f4eace1" containerID="74d687299704df962ad83ce783f1d173253e649bcaa15dfde90606ca8862c69b" exitCode=0
Feb 17 15:50:35.974716 master-0 kubenswrapper[26425]: I0217 15:50:35.974286 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 15:50:35.974716 master-0 kubenswrapper[26425]: I0217 15:50:35.974284 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b3b7ad4-549f-4608-8119-6be98f4eace1","Type":"ContainerDied","Data":"74d687299704df962ad83ce783f1d173253e649bcaa15dfde90606ca8862c69b"}
Feb 17 15:50:35.974716 master-0 kubenswrapper[26425]: I0217 15:50:35.974337 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b3b7ad4-549f-4608-8119-6be98f4eace1","Type":"ContainerDied","Data":"c0d73a9e5459b2e185df40bea44b6997ffbb0020458cf2fb4f51c91f7433d00d"}
Feb 17 15:50:35.974716 master-0 kubenswrapper[26425]: I0217 15:50:35.974355 26425 scope.go:117] "RemoveContainer" containerID="74d687299704df962ad83ce783f1d173253e649bcaa15dfde90606ca8862c69b"
Feb 17 15:50:35.974943 master-0 kubenswrapper[26425]: I0217 15:50:35.974927 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 15:50:36.006890 master-0 kubenswrapper[26425]: I0217 15:50:36.006842 26425 scope.go:117] "RemoveContainer" containerID="74d687299704df962ad83ce783f1d173253e649bcaa15dfde90606ca8862c69b"
Feb 17 15:50:36.007395 master-0 kubenswrapper[26425]: E0217 15:50:36.007364 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74d687299704df962ad83ce783f1d173253e649bcaa15dfde90606ca8862c69b\": container with ID starting with 74d687299704df962ad83ce783f1d173253e649bcaa15dfde90606ca8862c69b not found: ID does not exist" containerID="74d687299704df962ad83ce783f1d173253e649bcaa15dfde90606ca8862c69b"
Feb 17 15:50:36.007436 master-0 kubenswrapper[26425]: I0217 15:50:36.007400 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d687299704df962ad83ce783f1d173253e649bcaa15dfde90606ca8862c69b"} err="failed to get container status \"74d687299704df962ad83ce783f1d173253e649bcaa15dfde90606ca8862c69b\": rpc error: code = NotFound desc = could not find container \"74d687299704df962ad83ce783f1d173253e649bcaa15dfde90606ca8862c69b\": container with ID starting with 74d687299704df962ad83ce783f1d173253e649bcaa15dfde90606ca8862c69b not found: ID does not exist"
Feb 17 15:50:36.033669 master-0 kubenswrapper[26425]: I0217 15:50:36.033594 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 15:50:36.046131 master-0 kubenswrapper[26425]: I0217 15:50:36.045515 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 15:50:36.062984 master-0 kubenswrapper[26425]: I0217 15:50:36.062920 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 15:50:36.102170 master-0 kubenswrapper[26425]: I0217 15:50:36.102104 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 15:50:36.102929 master-0 kubenswrapper[26425]: E0217 15:50:36.102910 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123d9b9d-4755-4021-afc0-3faa39c76737" containerName="nova-api-api"
Feb 17 15:50:36.103022 master-0 kubenswrapper[26425]: I0217 15:50:36.103011 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="123d9b9d-4755-4021-afc0-3faa39c76737" containerName="nova-api-api"
Feb 17 15:50:36.103098 master-0 kubenswrapper[26425]: E0217 15:50:36.103088 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b02a0a47-ae20-4062-bd49-80724d6f70fd" containerName="dnsmasq-dns"
Feb 17 15:50:36.103166 master-0 kubenswrapper[26425]: I0217 15:50:36.103156 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="b02a0a47-ae20-4062-bd49-80724d6f70fd" containerName="dnsmasq-dns"
Feb 17 15:50:36.103235 master-0 kubenswrapper[26425]: E0217 15:50:36.103225 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6afe145e-02b2-47d2-9b6c-b828c271aa68" containerName="nova-manage"
Feb 17 15:50:36.103292 master-0 kubenswrapper[26425]: I0217 15:50:36.103283 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="6afe145e-02b2-47d2-9b6c-b828c271aa68" containerName="nova-manage"
Feb 17 15:50:36.103366 master-0 kubenswrapper[26425]: E0217 15:50:36.103356 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123d9b9d-4755-4021-afc0-3faa39c76737" containerName="nova-api-log"
Feb 17 15:50:36.103424 master-0 kubenswrapper[26425]: I0217 15:50:36.103415 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="123d9b9d-4755-4021-afc0-3faa39c76737" containerName="nova-api-log"
Feb 17 15:50:36.103505 master-0 kubenswrapper[26425]: E0217 15:50:36.103495 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b02a0a47-ae20-4062-bd49-80724d6f70fd" containerName="init"
Feb 17 15:50:36.103568 master-0 kubenswrapper[26425]: I0217 15:50:36.103559 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="b02a0a47-ae20-4062-bd49-80724d6f70fd" containerName="init"
Feb 17 15:50:36.103636 master-0 kubenswrapper[26425]: E0217 15:50:36.103626 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3b7ad4-549f-4608-8119-6be98f4eace1" containerName="nova-scheduler-scheduler"
Feb 17 15:50:36.103689 master-0 kubenswrapper[26425]: I0217 15:50:36.103680 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3b7ad4-549f-4608-8119-6be98f4eace1" containerName="nova-scheduler-scheduler"
Feb 17 15:50:36.103775 master-0 kubenswrapper[26425]: E0217 15:50:36.103765 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6af31fb-9e97-4939-9320-2ed232a3a039" containerName="nova-manage"
Feb 17 15:50:36.103863 master-0 kubenswrapper[26425]: I0217 15:50:36.103853 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6af31fb-9e97-4939-9320-2ed232a3a039" containerName="nova-manage"
Feb 17 15:50:36.104151 master-0 kubenswrapper[26425]: I0217 15:50:36.104138 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="123d9b9d-4755-4021-afc0-3faa39c76737" containerName="nova-api-api"
Feb 17 15:50:36.104235 master-0 kubenswrapper[26425]: I0217 15:50:36.104225 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="b02a0a47-ae20-4062-bd49-80724d6f70fd" containerName="dnsmasq-dns"
Feb 17 15:50:36.104337 master-0 kubenswrapper[26425]: I0217 15:50:36.104327 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="6afe145e-02b2-47d2-9b6c-b828c271aa68" containerName="nova-manage"
Feb 17 15:50:36.104423 master-0 kubenswrapper[26425]: I0217 15:50:36.104413 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="123d9b9d-4755-4021-afc0-3faa39c76737" containerName="nova-api-log"
Feb 17 15:50:36.104509 master-0 kubenswrapper[26425]: I0217 15:50:36.104499 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6af31fb-9e97-4939-9320-2ed232a3a039" containerName="nova-manage"
Feb 17 15:50:36.105139 master-0 kubenswrapper[26425]: I0217 15:50:36.105110 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3b7ad4-549f-4608-8119-6be98f4eace1" containerName="nova-scheduler-scheduler"
Feb 17 15:50:36.106081 master-0 kubenswrapper[26425]: I0217 15:50:36.106059 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 15:50:36.107941 master-0 kubenswrapper[26425]: I0217 15:50:36.107915 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 17 15:50:36.130625 master-0 kubenswrapper[26425]: I0217 15:50:36.130300 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 17 15:50:36.142299 master-0 kubenswrapper[26425]: I0217 15:50:36.142168 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 15:50:36.153806 master-0 kubenswrapper[26425]: I0217 15:50:36.153715 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 17 15:50:36.156132 master-0 kubenswrapper[26425]: I0217 15:50:36.156073 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 15:50:36.157713 master-0 kubenswrapper[26425]: I0217 15:50:36.157660 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 17 15:50:36.158430 master-0 kubenswrapper[26425]: I0217 15:50:36.158377 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 17 15:50:36.159408 master-0 kubenswrapper[26425]: I0217 15:50:36.159197 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 17 15:50:36.169663 master-0 kubenswrapper[26425]: I0217 15:50:36.169556 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 17 15:50:36.198522 master-0 kubenswrapper[26425]: I0217 15:50:36.198478 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kbzf\" (UniqueName: \"kubernetes.io/projected/c2f1644a-484c-4cde-8e40-7849fa1c056d-kube-api-access-9kbzf\") pod \"nova-scheduler-0\" (UID: \"c2f1644a-484c-4cde-8e40-7849fa1c056d\") " pod="openstack/nova-scheduler-0"
Feb 17 15:50:36.198522 master-0 kubenswrapper[26425]: I0217 15:50:36.198528 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-logs\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0"
Feb 17 15:50:36.198937 master-0 kubenswrapper[26425]: I0217 15:50:36.198570 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0"
Feb 17 15:50:36.198937 master-0 kubenswrapper[26425]: I0217 15:50:36.198738 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2f1644a-484c-4cde-8e40-7849fa1c056d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c2f1644a-484c-4cde-8e40-7849fa1c056d\") " pod="openstack/nova-scheduler-0"
Feb 17 15:50:36.198937 master-0 kubenswrapper[26425]: I0217 15:50:36.198773 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-config-data\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0"
Feb 17 15:50:36.198937 master-0 kubenswrapper[26425]: I0217 15:50:36.198798 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0"
Feb 17 15:50:36.199102 master-0 kubenswrapper[26425]: I0217 15:50:36.198934 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxpmp\" (UniqueName: \"kubernetes.io/projected/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-kube-api-access-kxpmp\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0"
Feb 17 15:50:36.199273 master-0 kubenswrapper[26425]: I0217 15:50:36.199239 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2f1644a-484c-4cde-8e40-7849fa1c056d-config-data\") pod \"nova-scheduler-0\" (UID: \"c2f1644a-484c-4cde-8e40-7849fa1c056d\") " pod="openstack/nova-scheduler-0"
Feb 17 15:50:36.199317 master-0 kubenswrapper[26425]: I0217 15:50:36.199280 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0"
Feb 17 15:50:36.301856 master-0 kubenswrapper[26425]: I0217 15:50:36.301787 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-config-data\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0"
Feb 17 15:50:36.301856 master-0 kubenswrapper[26425]: I0217 15:50:36.301859 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2f1644a-484c-4cde-8e40-7849fa1c056d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c2f1644a-484c-4cde-8e40-7849fa1c056d\") " pod="openstack/nova-scheduler-0"
Feb 17 15:50:36.302188 master-0 kubenswrapper[26425]: I0217 15:50:36.301887 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0"
Feb 17 15:50:36.302188 master-0 kubenswrapper[26425]: I0217 15:50:36.302062 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxpmp\" (UniqueName: \"kubernetes.io/projected/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-kube-api-access-kxpmp\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0"
Feb 17 15:50:36.302188 master-0 kubenswrapper[26425]: I0217 15:50:36.302172 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/c2f1644a-484c-4cde-8e40-7849fa1c056d-config-data\") pod \"nova-scheduler-0\" (UID: \"c2f1644a-484c-4cde-8e40-7849fa1c056d\") " pod="openstack/nova-scheduler-0" Feb 17 15:50:36.302334 master-0 kubenswrapper[26425]: I0217 15:50:36.302204 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0" Feb 17 15:50:36.302334 master-0 kubenswrapper[26425]: I0217 15:50:36.302285 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kbzf\" (UniqueName: \"kubernetes.io/projected/c2f1644a-484c-4cde-8e40-7849fa1c056d-kube-api-access-9kbzf\") pod \"nova-scheduler-0\" (UID: \"c2f1644a-484c-4cde-8e40-7849fa1c056d\") " pod="openstack/nova-scheduler-0" Feb 17 15:50:36.302334 master-0 kubenswrapper[26425]: I0217 15:50:36.302319 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-logs\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0" Feb 17 15:50:36.302494 master-0 kubenswrapper[26425]: I0217 15:50:36.302362 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0" Feb 17 15:50:36.304469 master-0 kubenswrapper[26425]: I0217 15:50:36.304409 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-logs\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " 
pod="openstack/nova-api-0" Feb 17 15:50:36.305076 master-0 kubenswrapper[26425]: I0217 15:50:36.305026 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2f1644a-484c-4cde-8e40-7849fa1c056d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c2f1644a-484c-4cde-8e40-7849fa1c056d\") " pod="openstack/nova-scheduler-0" Feb 17 15:50:36.308041 master-0 kubenswrapper[26425]: I0217 15:50:36.307971 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-config-data\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0" Feb 17 15:50:36.308041 master-0 kubenswrapper[26425]: I0217 15:50:36.308001 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0" Feb 17 15:50:36.308174 master-0 kubenswrapper[26425]: I0217 15:50:36.308123 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0" Feb 17 15:50:36.312010 master-0 kubenswrapper[26425]: I0217 15:50:36.311982 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0" Feb 17 15:50:36.313607 master-0 kubenswrapper[26425]: I0217 15:50:36.313580 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c2f1644a-484c-4cde-8e40-7849fa1c056d-config-data\") pod \"nova-scheduler-0\" (UID: \"c2f1644a-484c-4cde-8e40-7849fa1c056d\") " pod="openstack/nova-scheduler-0" Feb 17 15:50:36.317720 master-0 kubenswrapper[26425]: I0217 15:50:36.317679 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxpmp\" (UniqueName: \"kubernetes.io/projected/6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9-kube-api-access-kxpmp\") pod \"nova-api-0\" (UID: \"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9\") " pod="openstack/nova-api-0" Feb 17 15:50:36.317840 master-0 kubenswrapper[26425]: I0217 15:50:36.317692 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kbzf\" (UniqueName: \"kubernetes.io/projected/c2f1644a-484c-4cde-8e40-7849fa1c056d-kube-api-access-9kbzf\") pod \"nova-scheduler-0\" (UID: \"c2f1644a-484c-4cde-8e40-7849fa1c056d\") " pod="openstack/nova-scheduler-0" Feb 17 15:50:36.414507 master-0 kubenswrapper[26425]: I0217 15:50:36.414357 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="123d9b9d-4755-4021-afc0-3faa39c76737" path="/var/lib/kubelet/pods/123d9b9d-4755-4021-afc0-3faa39c76737/volumes" Feb 17 15:50:36.415553 master-0 kubenswrapper[26425]: I0217 15:50:36.415520 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b3b7ad4-549f-4608-8119-6be98f4eace1" path="/var/lib/kubelet/pods/1b3b7ad4-549f-4608-8119-6be98f4eace1/volumes" Feb 17 15:50:36.427790 master-0 kubenswrapper[26425]: I0217 15:50:36.427705 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 15:50:36.476371 master-0 kubenswrapper[26425]: I0217 15:50:36.476244 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 15:50:36.963244 master-0 kubenswrapper[26425]: W0217 15:50:36.963168 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2f1644a_484c_4cde_8e40_7849fa1c056d.slice/crio-7eaf9028804c50300c16ff019f3fbed50679dbdde232d7c4637f10255aa2fb23 WatchSource:0}: Error finding container 7eaf9028804c50300c16ff019f3fbed50679dbdde232d7c4637f10255aa2fb23: Status 404 returned error can't find the container with id 7eaf9028804c50300c16ff019f3fbed50679dbdde232d7c4637f10255aa2fb23 Feb 17 15:50:36.970468 master-0 kubenswrapper[26425]: I0217 15:50:36.970379 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 15:50:36.991894 master-0 kubenswrapper[26425]: I0217 15:50:36.991815 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c2f1644a-484c-4cde-8e40-7849fa1c056d","Type":"ContainerStarted","Data":"7eaf9028804c50300c16ff019f3fbed50679dbdde232d7c4637f10255aa2fb23"} Feb 17 15:50:37.113369 master-0 kubenswrapper[26425]: W0217 15:50:37.113303 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c6a8538_4fa1_4ac1_b94a_631e0bf6e0e9.slice/crio-bd4dcf2bd9d944ece77ca9a0e576e45181004b8adeec4720268028e4882f9117 WatchSource:0}: Error finding container bd4dcf2bd9d944ece77ca9a0e576e45181004b8adeec4720268028e4882f9117: Status 404 returned error can't find the container with id bd4dcf2bd9d944ece77ca9a0e576e45181004b8adeec4720268028e4882f9117 Feb 17 15:50:37.121051 master-0 kubenswrapper[26425]: I0217 15:50:37.120976 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 15:50:37.432506 master-0 kubenswrapper[26425]: I0217 15:50:37.432398 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" 
podUID="8a2748df-f1f1-44e8-a85d-856492a2af41" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.14:8775/\": read tcp 10.128.0.2:51702->10.128.1.14:8775: read: connection reset by peer" Feb 17 15:50:37.432729 master-0 kubenswrapper[26425]: I0217 15:50:37.432551 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8a2748df-f1f1-44e8-a85d-856492a2af41" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.14:8775/\": read tcp 10.128.0.2:51706->10.128.1.14:8775: read: connection reset by peer" Feb 17 15:50:37.562982 master-0 kubenswrapper[26425]: E0217 15:50:37.562903 26425 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a2748df_f1f1_44e8_a85d_856492a2af41.slice/crio-983cec4d428b3e7d082e5f9e2f5932b93ef0a15cc3fab7755627400edee06385.scope\": RecentStats: unable to find data in memory cache]" Feb 17 15:50:37.984733 master-0 kubenswrapper[26425]: I0217 15:50:37.984688 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 15:50:38.008150 master-0 kubenswrapper[26425]: I0217 15:50:38.008084 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c2f1644a-484c-4cde-8e40-7849fa1c056d","Type":"ContainerStarted","Data":"2e2e602bbe1fd0fb86762fd415f8b5d002a065fb6838021281ba8e9e0a3469a4"} Feb 17 15:50:38.015890 master-0 kubenswrapper[26425]: I0217 15:50:38.013134 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9","Type":"ContainerStarted","Data":"527bea3b39b2004ef70e0cedda6fce80259c32fae255266041062c5bb1453fc2"} Feb 17 15:50:38.015890 master-0 kubenswrapper[26425]: I0217 15:50:38.013183 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9","Type":"ContainerStarted","Data":"714a1292d24f558c852830c618b4854d1627469f30011bccf993986d9f5d8968"} Feb 17 15:50:38.015890 master-0 kubenswrapper[26425]: I0217 15:50:38.013213 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9","Type":"ContainerStarted","Data":"bd4dcf2bd9d944ece77ca9a0e576e45181004b8adeec4720268028e4882f9117"} Feb 17 15:50:38.017068 master-0 kubenswrapper[26425]: I0217 15:50:38.016630 26425 generic.go:334] "Generic (PLEG): container finished" podID="8a2748df-f1f1-44e8-a85d-856492a2af41" containerID="983cec4d428b3e7d082e5f9e2f5932b93ef0a15cc3fab7755627400edee06385" exitCode=0 Feb 17 15:50:38.017068 master-0 kubenswrapper[26425]: I0217 15:50:38.016681 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8a2748df-f1f1-44e8-a85d-856492a2af41","Type":"ContainerDied","Data":"983cec4d428b3e7d082e5f9e2f5932b93ef0a15cc3fab7755627400edee06385"} Feb 17 15:50:38.017068 master-0 kubenswrapper[26425]: I0217 15:50:38.016717 26425 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8a2748df-f1f1-44e8-a85d-856492a2af41","Type":"ContainerDied","Data":"06e156219343fc77ff0b98b540fe89c039ad7983c0105e60205913bcfddf6c23"} Feb 17 15:50:38.017068 master-0 kubenswrapper[26425]: I0217 15:50:38.016738 26425 scope.go:117] "RemoveContainer" containerID="983cec4d428b3e7d082e5f9e2f5932b93ef0a15cc3fab7755627400edee06385" Feb 17 15:50:38.017068 master-0 kubenswrapper[26425]: I0217 15:50:38.016897 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 15:50:38.068549 master-0 kubenswrapper[26425]: I0217 15:50:38.064368 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.064323726 podStartE2EDuration="2.064323726s" podCreationTimestamp="2026-02-17 15:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:50:38.046184218 +0000 UTC m=+2099.937908056" watchObservedRunningTime="2026-02-17 15:50:38.064323726 +0000 UTC m=+2099.956047554" Feb 17 15:50:38.079572 master-0 kubenswrapper[26425]: I0217 15:50:38.077569 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx9js\" (UniqueName: \"kubernetes.io/projected/8a2748df-f1f1-44e8-a85d-856492a2af41-kube-api-access-rx9js\") pod \"8a2748df-f1f1-44e8-a85d-856492a2af41\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " Feb 17 15:50:38.079572 master-0 kubenswrapper[26425]: I0217 15:50:38.077676 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2748df-f1f1-44e8-a85d-856492a2af41-logs\") pod \"8a2748df-f1f1-44e8-a85d-856492a2af41\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " Feb 17 15:50:38.079572 master-0 kubenswrapper[26425]: I0217 15:50:38.077770 26425 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-combined-ca-bundle\") pod \"8a2748df-f1f1-44e8-a85d-856492a2af41\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " Feb 17 15:50:38.079572 master-0 kubenswrapper[26425]: I0217 15:50:38.077819 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-nova-metadata-tls-certs\") pod \"8a2748df-f1f1-44e8-a85d-856492a2af41\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " Feb 17 15:50:38.079572 master-0 kubenswrapper[26425]: I0217 15:50:38.078031 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-config-data\") pod \"8a2748df-f1f1-44e8-a85d-856492a2af41\" (UID: \"8a2748df-f1f1-44e8-a85d-856492a2af41\") " Feb 17 15:50:38.084881 master-0 kubenswrapper[26425]: I0217 15:50:38.083852 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a2748df-f1f1-44e8-a85d-856492a2af41-logs" (OuterVolumeSpecName: "logs") pod "8a2748df-f1f1-44e8-a85d-856492a2af41" (UID: "8a2748df-f1f1-44e8-a85d-856492a2af41"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:50:38.090143 master-0 kubenswrapper[26425]: I0217 15:50:38.089828 26425 scope.go:117] "RemoveContainer" containerID="6708a81fd8f53db495c63956913c9beceb67e5efba92828c297212ebacaff11b" Feb 17 15:50:38.101586 master-0 kubenswrapper[26425]: I0217 15:50:38.100747 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a2748df-f1f1-44e8-a85d-856492a2af41-kube-api-access-rx9js" (OuterVolumeSpecName: "kube-api-access-rx9js") pod "8a2748df-f1f1-44e8-a85d-856492a2af41" (UID: "8a2748df-f1f1-44e8-a85d-856492a2af41"). InnerVolumeSpecName "kube-api-access-rx9js". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:50:38.105502 master-0 kubenswrapper[26425]: I0217 15:50:38.104013 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.103990453 podStartE2EDuration="2.103990453s" podCreationTimestamp="2026-02-17 15:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:50:38.072453712 +0000 UTC m=+2099.964177530" watchObservedRunningTime="2026-02-17 15:50:38.103990453 +0000 UTC m=+2099.995714271" Feb 17 15:50:38.115640 master-0 kubenswrapper[26425]: I0217 15:50:38.115573 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-config-data" (OuterVolumeSpecName: "config-data") pod "8a2748df-f1f1-44e8-a85d-856492a2af41" (UID: "8a2748df-f1f1-44e8-a85d-856492a2af41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:50:38.146788 master-0 kubenswrapper[26425]: I0217 15:50:38.146589 26425 scope.go:117] "RemoveContainer" containerID="983cec4d428b3e7d082e5f9e2f5932b93ef0a15cc3fab7755627400edee06385" Feb 17 15:50:38.147849 master-0 kubenswrapper[26425]: E0217 15:50:38.147102 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983cec4d428b3e7d082e5f9e2f5932b93ef0a15cc3fab7755627400edee06385\": container with ID starting with 983cec4d428b3e7d082e5f9e2f5932b93ef0a15cc3fab7755627400edee06385 not found: ID does not exist" containerID="983cec4d428b3e7d082e5f9e2f5932b93ef0a15cc3fab7755627400edee06385" Feb 17 15:50:38.147849 master-0 kubenswrapper[26425]: I0217 15:50:38.147168 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983cec4d428b3e7d082e5f9e2f5932b93ef0a15cc3fab7755627400edee06385"} err="failed to get container status \"983cec4d428b3e7d082e5f9e2f5932b93ef0a15cc3fab7755627400edee06385\": rpc error: code = NotFound desc = could not find container \"983cec4d428b3e7d082e5f9e2f5932b93ef0a15cc3fab7755627400edee06385\": container with ID starting with 983cec4d428b3e7d082e5f9e2f5932b93ef0a15cc3fab7755627400edee06385 not found: ID does not exist" Feb 17 15:50:38.147849 master-0 kubenswrapper[26425]: I0217 15:50:38.147203 26425 scope.go:117] "RemoveContainer" containerID="6708a81fd8f53db495c63956913c9beceb67e5efba92828c297212ebacaff11b" Feb 17 15:50:38.147849 master-0 kubenswrapper[26425]: E0217 15:50:38.147662 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6708a81fd8f53db495c63956913c9beceb67e5efba92828c297212ebacaff11b\": container with ID starting with 6708a81fd8f53db495c63956913c9beceb67e5efba92828c297212ebacaff11b not found: ID does not exist" 
containerID="6708a81fd8f53db495c63956913c9beceb67e5efba92828c297212ebacaff11b" Feb 17 15:50:38.147849 master-0 kubenswrapper[26425]: I0217 15:50:38.147724 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6708a81fd8f53db495c63956913c9beceb67e5efba92828c297212ebacaff11b"} err="failed to get container status \"6708a81fd8f53db495c63956913c9beceb67e5efba92828c297212ebacaff11b\": rpc error: code = NotFound desc = could not find container \"6708a81fd8f53db495c63956913c9beceb67e5efba92828c297212ebacaff11b\": container with ID starting with 6708a81fd8f53db495c63956913c9beceb67e5efba92828c297212ebacaff11b not found: ID does not exist" Feb 17 15:50:38.152859 master-0 kubenswrapper[26425]: I0217 15:50:38.152648 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a2748df-f1f1-44e8-a85d-856492a2af41" (UID: "8a2748df-f1f1-44e8-a85d-856492a2af41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:50:38.171262 master-0 kubenswrapper[26425]: I0217 15:50:38.171205 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8a2748df-f1f1-44e8-a85d-856492a2af41" (UID: "8a2748df-f1f1-44e8-a85d-856492a2af41"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:50:38.182952 master-0 kubenswrapper[26425]: I0217 15:50:38.182892 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx9js\" (UniqueName: \"kubernetes.io/projected/8a2748df-f1f1-44e8-a85d-856492a2af41-kube-api-access-rx9js\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:38.182952 master-0 kubenswrapper[26425]: I0217 15:50:38.182942 26425 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2748df-f1f1-44e8-a85d-856492a2af41-logs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:38.183184 master-0 kubenswrapper[26425]: I0217 15:50:38.182963 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:38.183184 master-0 kubenswrapper[26425]: I0217 15:50:38.182977 26425 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:38.183184 master-0 kubenswrapper[26425]: I0217 15:50:38.182990 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2748df-f1f1-44e8-a85d-856492a2af41-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 15:50:38.375739 master-0 kubenswrapper[26425]: I0217 15:50:38.375672 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:50:38.393825 master-0 kubenswrapper[26425]: I0217 15:50:38.393675 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:50:38.411985 master-0 kubenswrapper[26425]: I0217 15:50:38.411876 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8a2748df-f1f1-44e8-a85d-856492a2af41" path="/var/lib/kubelet/pods/8a2748df-f1f1-44e8-a85d-856492a2af41/volumes" Feb 17 15:50:39.683618 master-0 kubenswrapper[26425]: I0217 15:50:39.683530 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 15:50:39.684701 master-0 kubenswrapper[26425]: E0217 15:50:39.684058 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2748df-f1f1-44e8-a85d-856492a2af41" containerName="nova-metadata-metadata" Feb 17 15:50:39.684701 master-0 kubenswrapper[26425]: I0217 15:50:39.684078 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2748df-f1f1-44e8-a85d-856492a2af41" containerName="nova-metadata-metadata" Feb 17 15:50:39.684701 master-0 kubenswrapper[26425]: E0217 15:50:39.684125 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2748df-f1f1-44e8-a85d-856492a2af41" containerName="nova-metadata-log" Feb 17 15:50:39.684701 master-0 kubenswrapper[26425]: I0217 15:50:39.684132 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2748df-f1f1-44e8-a85d-856492a2af41" containerName="nova-metadata-log" Feb 17 15:50:39.684701 master-0 kubenswrapper[26425]: I0217 15:50:39.684392 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2748df-f1f1-44e8-a85d-856492a2af41" containerName="nova-metadata-metadata" Feb 17 15:50:39.684701 master-0 kubenswrapper[26425]: I0217 15:50:39.684433 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2748df-f1f1-44e8-a85d-856492a2af41" containerName="nova-metadata-log" Feb 17 15:50:39.685658 master-0 kubenswrapper[26425]: I0217 15:50:39.685629 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 15:50:39.688418 master-0 kubenswrapper[26425]: I0217 15:50:39.688386 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 15:50:39.688615 master-0 kubenswrapper[26425]: I0217 15:50:39.688567 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 15:50:39.825334 master-0 kubenswrapper[26425]: I0217 15:50:39.825255 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkp2c\" (UniqueName: \"kubernetes.io/projected/52cca086-7d74-42fd-93c7-10e0080722fa-kube-api-access-dkp2c\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " pod="openstack/nova-metadata-0" Feb 17 15:50:39.825599 master-0 kubenswrapper[26425]: I0217 15:50:39.825423 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52cca086-7d74-42fd-93c7-10e0080722fa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " pod="openstack/nova-metadata-0" Feb 17 15:50:39.825599 master-0 kubenswrapper[26425]: I0217 15:50:39.825556 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/52cca086-7d74-42fd-93c7-10e0080722fa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " pod="openstack/nova-metadata-0" Feb 17 15:50:39.825756 master-0 kubenswrapper[26425]: I0217 15:50:39.825712 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52cca086-7d74-42fd-93c7-10e0080722fa-logs\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " 
pod="openstack/nova-metadata-0"
Feb 17 15:50:39.825866 master-0 kubenswrapper[26425]: I0217 15:50:39.825837 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52cca086-7d74-42fd-93c7-10e0080722fa-config-data\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " pod="openstack/nova-metadata-0"
Feb 17 15:50:39.928254 master-0 kubenswrapper[26425]: I0217 15:50:39.928163 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52cca086-7d74-42fd-93c7-10e0080722fa-config-data\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " pod="openstack/nova-metadata-0"
Feb 17 15:50:39.928254 master-0 kubenswrapper[26425]: I0217 15:50:39.928246 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkp2c\" (UniqueName: \"kubernetes.io/projected/52cca086-7d74-42fd-93c7-10e0080722fa-kube-api-access-dkp2c\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " pod="openstack/nova-metadata-0"
Feb 17 15:50:39.928972 master-0 kubenswrapper[26425]: I0217 15:50:39.928349 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52cca086-7d74-42fd-93c7-10e0080722fa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " pod="openstack/nova-metadata-0"
Feb 17 15:50:39.928972 master-0 kubenswrapper[26425]: I0217 15:50:39.928474 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/52cca086-7d74-42fd-93c7-10e0080722fa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " pod="openstack/nova-metadata-0"
Feb 17 15:50:39.928972 master-0 kubenswrapper[26425]: I0217 15:50:39.928544 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52cca086-7d74-42fd-93c7-10e0080722fa-logs\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " pod="openstack/nova-metadata-0"
Feb 17 15:50:39.929233 master-0 kubenswrapper[26425]: I0217 15:50:39.929185 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52cca086-7d74-42fd-93c7-10e0080722fa-logs\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " pod="openstack/nova-metadata-0"
Feb 17 15:50:39.931933 master-0 kubenswrapper[26425]: I0217 15:50:39.931897 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52cca086-7d74-42fd-93c7-10e0080722fa-config-data\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " pod="openstack/nova-metadata-0"
Feb 17 15:50:39.932172 master-0 kubenswrapper[26425]: I0217 15:50:39.932120 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52cca086-7d74-42fd-93c7-10e0080722fa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " pod="openstack/nova-metadata-0"
Feb 17 15:50:39.932753 master-0 kubenswrapper[26425]: I0217 15:50:39.932685 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/52cca086-7d74-42fd-93c7-10e0080722fa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " pod="openstack/nova-metadata-0"
Feb 17 15:50:41.432434 master-0 kubenswrapper[26425]: I0217 15:50:41.432339 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 17 15:50:42.440363 master-0 kubenswrapper[26425]: I0217 15:50:42.439117 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 15:50:45.382538 master-0 kubenswrapper[26425]: I0217 15:50:45.382376 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkp2c\" (UniqueName: \"kubernetes.io/projected/52cca086-7d74-42fd-93c7-10e0080722fa-kube-api-access-dkp2c\") pod \"nova-metadata-0\" (UID: \"52cca086-7d74-42fd-93c7-10e0080722fa\") " pod="openstack/nova-metadata-0"
Feb 17 15:50:45.404658 master-0 kubenswrapper[26425]: I0217 15:50:45.404575 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 15:50:45.986670 master-0 kubenswrapper[26425]: I0217 15:50:45.986606 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 15:50:46.013371 master-0 kubenswrapper[26425]: W0217 15:50:46.011188 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52cca086_7d74_42fd_93c7_10e0080722fa.slice/crio-4bdcf033636a4d614fba36a3a920117642c80586a8cedf870a0ea98e87eb7d78 WatchSource:0}: Error finding container 4bdcf033636a4d614fba36a3a920117642c80586a8cedf870a0ea98e87eb7d78: Status 404 returned error can't find the container with id 4bdcf033636a4d614fba36a3a920117642c80586a8cedf870a0ea98e87eb7d78
Feb 17 15:50:46.137546 master-0 kubenswrapper[26425]: I0217 15:50:46.137484 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"52cca086-7d74-42fd-93c7-10e0080722fa","Type":"ContainerStarted","Data":"4bdcf033636a4d614fba36a3a920117642c80586a8cedf870a0ea98e87eb7d78"}
Feb 17 15:50:46.429141 master-0 kubenswrapper[26425]: I0217 15:50:46.428374 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 17 15:50:46.466670 master-0 kubenswrapper[26425]: I0217 15:50:46.466526 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 17 15:50:46.477934 master-0 kubenswrapper[26425]: I0217 15:50:46.477866 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 17 15:50:46.477934 master-0 kubenswrapper[26425]: I0217 15:50:46.477942 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 17 15:50:47.152618 master-0 kubenswrapper[26425]: I0217 15:50:47.152544 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"52cca086-7d74-42fd-93c7-10e0080722fa","Type":"ContainerStarted","Data":"b7fece5676a8b714c2c64ee3f38c3383624a1adfa0f9bc32a3e7ddd5ca0c0658"}
Feb 17 15:50:47.152936 master-0 kubenswrapper[26425]: I0217 15:50:47.152627 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"52cca086-7d74-42fd-93c7-10e0080722fa","Type":"ContainerStarted","Data":"0b23db19cdfef7c61de1a386fd1a4d92b55fec9f559e70a14385424119eb7fc0"}
Feb 17 15:50:47.196243 master-0 kubenswrapper[26425]: I0217 15:50:47.196164 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=9.196142447 podStartE2EDuration="9.196142447s" podCreationTimestamp="2026-02-17 15:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:50:47.184908566 +0000 UTC m=+2109.076632424" watchObservedRunningTime="2026-02-17 15:50:47.196142447 +0000 UTC m=+2109.087866265"
Feb 17 15:50:47.211200 master-0 kubenswrapper[26425]: I0217 15:50:47.210833 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 17 15:50:47.497838 master-0 kubenswrapper[26425]: I0217 15:50:47.497671 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.23:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:50:47.498396 master-0 kubenswrapper[26425]: I0217 15:50:47.497694 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6c6a8538-4fa1-4ac1-b94a-631e0bf6e0e9" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.23:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:50:50.410759 master-0 kubenswrapper[26425]: I0217 15:50:50.410687 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 17 15:50:50.410759 master-0 kubenswrapper[26425]: I0217 15:50:50.410755 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 17 15:50:55.405445 master-0 kubenswrapper[26425]: I0217 15:50:55.405340 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 17 15:50:55.405445 master-0 kubenswrapper[26425]: I0217 15:50:55.405442 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 17 15:50:56.422687 master-0 kubenswrapper[26425]: I0217 15:50:56.422608 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="52cca086-7d74-42fd-93c7-10e0080722fa" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.24:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:50:56.423633 master-0 kubenswrapper[26425]: I0217 15:50:56.422630 26425 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="52cca086-7d74-42fd-93c7-10e0080722fa" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.24:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 15:50:56.487818 master-0 kubenswrapper[26425]: I0217 15:50:56.487769 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 17 15:50:56.488787 master-0 kubenswrapper[26425]: I0217 15:50:56.488713 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 17 15:50:56.488894 master-0 kubenswrapper[26425]: I0217 15:50:56.488834 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 17 15:50:56.499066 master-0 kubenswrapper[26425]: I0217 15:50:56.498983 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 17 15:50:57.351617 master-0 kubenswrapper[26425]: I0217 15:50:57.351385 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 17 15:50:57.363566 master-0 kubenswrapper[26425]: I0217 15:50:57.363512 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 17 15:51:05.411156 master-0 kubenswrapper[26425]: I0217 15:51:05.411083 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 17 15:51:05.413589 master-0 kubenswrapper[26425]: I0217 15:51:05.413535 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 17 15:51:05.417265 master-0 kubenswrapper[26425]: I0217 15:51:05.417232 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 17 15:51:05.471867 master-0 kubenswrapper[26425]: I0217 15:51:05.471725 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 17 15:51:35.005291 master-0 kubenswrapper[26425]: I0217 15:51:35.005154 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"]
Feb 17 15:51:35.006059 master-0 kubenswrapper[26425]: I0217 15:51:35.005939 26425 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg" podUID="e9f9821d-1712-454c-abbd-e2d26852d4d7" containerName="sushy-emulator" containerID="cri-o://90bd3f65f3963012b9dadeaeb488228a262052d0d5d794ac0dc445ed94569731" gracePeriod=30
Feb 17 15:51:35.824682 master-0 kubenswrapper[26425]: I0217 15:51:35.824632 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:51:35.897790 master-0 kubenswrapper[26425]: I0217 15:51:35.897713 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7r4p\" (UniqueName: \"kubernetes.io/projected/e9f9821d-1712-454c-abbd-e2d26852d4d7-kube-api-access-s7r4p\") pod \"e9f9821d-1712-454c-abbd-e2d26852d4d7\" (UID: \"e9f9821d-1712-454c-abbd-e2d26852d4d7\") "
Feb 17 15:51:35.898050 master-0 kubenswrapper[26425]: I0217 15:51:35.898021 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/e9f9821d-1712-454c-abbd-e2d26852d4d7-os-client-config\") pod \"e9f9821d-1712-454c-abbd-e2d26852d4d7\" (UID: \"e9f9821d-1712-454c-abbd-e2d26852d4d7\") "
Feb 17 15:51:35.898199 master-0 kubenswrapper[26425]: I0217 15:51:35.898159 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/e9f9821d-1712-454c-abbd-e2d26852d4d7-sushy-emulator-config\") pod \"e9f9821d-1712-454c-abbd-e2d26852d4d7\" (UID: \"e9f9821d-1712-454c-abbd-e2d26852d4d7\") "
Feb 17 15:51:35.898806 master-0 kubenswrapper[26425]: I0217 15:51:35.898717 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9f9821d-1712-454c-abbd-e2d26852d4d7-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "e9f9821d-1712-454c-abbd-e2d26852d4d7" (UID: "e9f9821d-1712-454c-abbd-e2d26852d4d7"). InnerVolumeSpecName "sushy-emulator-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:51:35.899053 master-0 kubenswrapper[26425]: I0217 15:51:35.899009 26425 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/e9f9821d-1712-454c-abbd-e2d26852d4d7-sushy-emulator-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:51:35.902032 master-0 kubenswrapper[26425]: I0217 15:51:35.901968 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f9821d-1712-454c-abbd-e2d26852d4d7-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "e9f9821d-1712-454c-abbd-e2d26852d4d7" (UID: "e9f9821d-1712-454c-abbd-e2d26852d4d7"). InnerVolumeSpecName "os-client-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:51:35.902412 master-0 kubenswrapper[26425]: I0217 15:51:35.902369 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9f9821d-1712-454c-abbd-e2d26852d4d7-kube-api-access-s7r4p" (OuterVolumeSpecName: "kube-api-access-s7r4p") pod "e9f9821d-1712-454c-abbd-e2d26852d4d7" (UID: "e9f9821d-1712-454c-abbd-e2d26852d4d7"). InnerVolumeSpecName "kube-api-access-s7r4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:51:35.957747 master-0 kubenswrapper[26425]: I0217 15:51:35.956858 26425 generic.go:334] "Generic (PLEG): container finished" podID="e9f9821d-1712-454c-abbd-e2d26852d4d7" containerID="90bd3f65f3963012b9dadeaeb488228a262052d0d5d794ac0dc445ed94569731" exitCode=0
Feb 17 15:51:35.957747 master-0 kubenswrapper[26425]: I0217 15:51:35.956938 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg" event={"ID":"e9f9821d-1712-454c-abbd-e2d26852d4d7","Type":"ContainerDied","Data":"90bd3f65f3963012b9dadeaeb488228a262052d0d5d794ac0dc445ed94569731"}
Feb 17 15:51:35.957747 master-0 kubenswrapper[26425]: I0217 15:51:35.956941 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"
Feb 17 15:51:35.957747 master-0 kubenswrapper[26425]: I0217 15:51:35.956993 26425 scope.go:117] "RemoveContainer" containerID="90bd3f65f3963012b9dadeaeb488228a262052d0d5d794ac0dc445ed94569731"
Feb 17 15:51:35.957747 master-0 kubenswrapper[26425]: I0217 15:51:35.956977 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-jd8tg" event={"ID":"e9f9821d-1712-454c-abbd-e2d26852d4d7","Type":"ContainerDied","Data":"d0af90dad1326f45880eb7ed324726a7a82bffd3a55af0a137b1fd77ed8eb03e"}
Feb 17 15:51:36.010503 master-0 kubenswrapper[26425]: I0217 15:51:36.008944 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7r4p\" (UniqueName: \"kubernetes.io/projected/e9f9821d-1712-454c-abbd-e2d26852d4d7-kube-api-access-s7r4p\") on node \"master-0\" DevicePath \"\""
Feb 17 15:51:36.010503 master-0 kubenswrapper[26425]: I0217 15:51:36.009074 26425 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/e9f9821d-1712-454c-abbd-e2d26852d4d7-os-client-config\") on node \"master-0\" DevicePath \"\""
Feb 17 15:51:36.042754 master-0 kubenswrapper[26425]: I0217 15:51:36.042673 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-5kt65"]
Feb 17 15:51:36.044059 master-0 kubenswrapper[26425]: E0217 15:51:36.044020 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f9821d-1712-454c-abbd-e2d26852d4d7" containerName="sushy-emulator"
Feb 17 15:51:36.044059 master-0 kubenswrapper[26425]: I0217 15:51:36.044049 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f9821d-1712-454c-abbd-e2d26852d4d7" containerName="sushy-emulator"
Feb 17 15:51:36.045289 master-0 kubenswrapper[26425]: I0217 15:51:36.044482 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f9821d-1712-454c-abbd-e2d26852d4d7" containerName="sushy-emulator"
Feb 17 15:51:36.046129 master-0 kubenswrapper[26425]: I0217 15:51:36.046087 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:51:36.057587 master-0 kubenswrapper[26425]: I0217 15:51:36.052187 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config"
Feb 17 15:51:36.057587 master-0 kubenswrapper[26425]: I0217 15:51:36.057367 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-5kt65"]
Feb 17 15:51:36.087880 master-0 kubenswrapper[26425]: I0217 15:51:36.087439 26425 scope.go:117] "RemoveContainer" containerID="90bd3f65f3963012b9dadeaeb488228a262052d0d5d794ac0dc445ed94569731"
Feb 17 15:51:36.092111 master-0 kubenswrapper[26425]: E0217 15:51:36.091662 26425 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90bd3f65f3963012b9dadeaeb488228a262052d0d5d794ac0dc445ed94569731\": container with ID starting with 90bd3f65f3963012b9dadeaeb488228a262052d0d5d794ac0dc445ed94569731 not found: ID does not exist" containerID="90bd3f65f3963012b9dadeaeb488228a262052d0d5d794ac0dc445ed94569731"
Feb 17 15:51:36.092111 master-0 kubenswrapper[26425]: I0217 15:51:36.091726 26425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90bd3f65f3963012b9dadeaeb488228a262052d0d5d794ac0dc445ed94569731"} err="failed to get container status \"90bd3f65f3963012b9dadeaeb488228a262052d0d5d794ac0dc445ed94569731\": rpc error: code = NotFound desc = could not find container \"90bd3f65f3963012b9dadeaeb488228a262052d0d5d794ac0dc445ed94569731\": container with ID starting with 90bd3f65f3963012b9dadeaeb488228a262052d0d5d794ac0dc445ed94569731 not found: ID does not exist"
Feb 17 15:51:36.099282 master-0 kubenswrapper[26425]: I0217 15:51:36.099232 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"]
Feb 17 15:51:36.111267 master-0 kubenswrapper[26425]: I0217 15:51:36.111209 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-jd8tg"]
Feb 17 15:51:36.111477 master-0 kubenswrapper[26425]: I0217 15:51:36.111390 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/05c3fd7f-2e53-4565-a14f-94d11a47f445-os-client-config\") pod \"sushy-emulator-64488c485f-5kt65\" (UID: \"05c3fd7f-2e53-4565-a14f-94d11a47f445\") " pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:51:36.111520 master-0 kubenswrapper[26425]: I0217 15:51:36.111472 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/05c3fd7f-2e53-4565-a14f-94d11a47f445-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-5kt65\" (UID: \"05c3fd7f-2e53-4565-a14f-94d11a47f445\") " pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:51:36.111599 master-0 kubenswrapper[26425]: I0217 15:51:36.111570 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wlxf\" (UniqueName: \"kubernetes.io/projected/05c3fd7f-2e53-4565-a14f-94d11a47f445-kube-api-access-8wlxf\") pod \"sushy-emulator-64488c485f-5kt65\" (UID: \"05c3fd7f-2e53-4565-a14f-94d11a47f445\") " pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:51:36.214646 master-0 kubenswrapper[26425]: I0217 15:51:36.214477 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/05c3fd7f-2e53-4565-a14f-94d11a47f445-os-client-config\") pod \"sushy-emulator-64488c485f-5kt65\" (UID: \"05c3fd7f-2e53-4565-a14f-94d11a47f445\") " pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:51:36.214646 master-0 kubenswrapper[26425]: I0217 15:51:36.214574 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/05c3fd7f-2e53-4565-a14f-94d11a47f445-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-5kt65\" (UID: \"05c3fd7f-2e53-4565-a14f-94d11a47f445\") " pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:51:36.214917 master-0 kubenswrapper[26425]: I0217 15:51:36.214662 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wlxf\" (UniqueName: \"kubernetes.io/projected/05c3fd7f-2e53-4565-a14f-94d11a47f445-kube-api-access-8wlxf\") pod \"sushy-emulator-64488c485f-5kt65\" (UID: \"05c3fd7f-2e53-4565-a14f-94d11a47f445\") " pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:51:36.217256 master-0 kubenswrapper[26425]: I0217 15:51:36.217159 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/05c3fd7f-2e53-4565-a14f-94d11a47f445-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-5kt65\" (UID: \"05c3fd7f-2e53-4565-a14f-94d11a47f445\") " pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:51:36.219883 master-0 kubenswrapper[26425]: I0217 15:51:36.219838 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/05c3fd7f-2e53-4565-a14f-94d11a47f445-os-client-config\") pod \"sushy-emulator-64488c485f-5kt65\" (UID: \"05c3fd7f-2e53-4565-a14f-94d11a47f445\") " pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:51:36.237051 master-0 kubenswrapper[26425]: I0217 15:51:36.236969 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wlxf\" (UniqueName: \"kubernetes.io/projected/05c3fd7f-2e53-4565-a14f-94d11a47f445-kube-api-access-8wlxf\") pod \"sushy-emulator-64488c485f-5kt65\" (UID: \"05c3fd7f-2e53-4565-a14f-94d11a47f445\") " pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:51:36.384534 master-0 kubenswrapper[26425]: I0217 15:51:36.384440 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:51:36.413600 master-0 kubenswrapper[26425]: I0217 15:51:36.413523 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9f9821d-1712-454c-abbd-e2d26852d4d7" path="/var/lib/kubelet/pods/e9f9821d-1712-454c-abbd-e2d26852d4d7/volumes"
Feb 17 15:51:37.034472 master-0 kubenswrapper[26425]: I0217 15:51:37.034392 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-5kt65"]
Feb 17 15:51:37.992781 master-0 kubenswrapper[26425]: I0217 15:51:37.992658 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-64488c485f-5kt65" event={"ID":"05c3fd7f-2e53-4565-a14f-94d11a47f445","Type":"ContainerStarted","Data":"b2abf18ecd9cebf036267dc9239170bd6dba5e4dd7304441e96a8a8b88034847"}
Feb 17 15:51:37.992781 master-0 kubenswrapper[26425]: I0217 15:51:37.992750 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-64488c485f-5kt65" event={"ID":"05c3fd7f-2e53-4565-a14f-94d11a47f445","Type":"ContainerStarted","Data":"dc1ff22d1775e9114106d111b70eb99483ba44a8e0ce7242012ba5813a039eb8"}
Feb 17 15:51:38.030916 master-0 kubenswrapper[26425]: I0217 15:51:38.030796 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-64488c485f-5kt65" podStartSLOduration=3.030777138 podStartE2EDuration="3.030777138s" podCreationTimestamp="2026-02-17 15:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:51:38.012401335 +0000 UTC m=+2159.904125223" watchObservedRunningTime="2026-02-17 15:51:38.030777138 +0000 UTC m=+2159.922500956"
Feb 17 15:51:46.385174 master-0 kubenswrapper[26425]: I0217 15:51:46.385073 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:51:46.385962 master-0 kubenswrapper[26425]: I0217 15:51:46.385662 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:51:46.417312 master-0 kubenswrapper[26425]: I0217 15:51:46.417158 26425 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:51:47.136723 master-0 kubenswrapper[26425]: I0217 15:51:47.136654 26425 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-64488c485f-5kt65"
Feb 17 15:52:27.308580 master-0 kubenswrapper[26425]: I0217 15:52:27.308347 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:52:27.309644 master-0 kubenswrapper[26425]: E0217 15:52:27.308675 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:52:27.309644 master-0 kubenswrapper[26425]: E0217 15:52:27.308732 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:52:27.309644 master-0 kubenswrapper[26425]: E0217 15:52:27.308827 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:54:29.308798138 +0000 UTC m=+2331.200521986 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:52:51.860838 master-0 kubenswrapper[26425]: I0217 15:52:51.860784 26425 scope.go:117] "RemoveContainer" containerID="8f6277f9c9e0d8841862f39aa72e8def1d54619c9594f69e93e2ca109209a7be"
Feb 17 15:52:51.894064 master-0 kubenswrapper[26425]: I0217 15:52:51.893929 26425 scope.go:117] "RemoveContainer" containerID="d8d0159b815fde84844174c09c64d94ce8a2c3698f6f648047642cb1875c9cf6"
Feb 17 15:52:51.962677 master-0 kubenswrapper[26425]: I0217 15:52:51.962608 26425 scope.go:117] "RemoveContainer" containerID="248b53655b8165ff426335a7dbd83c2d2e53c153789a0b0a2017cae3709af50a"
Feb 17 15:52:52.021779 master-0 kubenswrapper[26425]: I0217 15:52:52.021727 26425 scope.go:117] "RemoveContainer" containerID="3c1068a5d4af9b8119b312ffb45920f47a7119c267cb19cbcc60b8594e5e290e"
Feb 17 15:52:52.076332 master-0 kubenswrapper[26425]: I0217 15:52:52.076269 26425 scope.go:117] "RemoveContainer" containerID="1864e8a47379b369d8a66077175769f37b5a488774750f04959a1eeab4ee3e75"
Feb 17 15:52:52.129182 master-0 kubenswrapper[26425]: I0217 15:52:52.129132 26425 scope.go:117] "RemoveContainer" containerID="0782b30e859b5fc0407cb775aa4db0fa1dc3026be61690452f75aad0ea7e56c4"
Feb 17 15:53:52.265394 master-0 kubenswrapper[26425]: I0217 15:53:52.265315 26425 scope.go:117] "RemoveContainer" containerID="334d8a3f6f5614d74f75a560dbb15127a82e2ef6636e88347a93350957668438"
Feb 17 15:53:52.301452 master-0 kubenswrapper[26425]: I0217 15:53:52.301386 26425 scope.go:117] "RemoveContainer" containerID="17a3cfde13e3ca4b8b118165c8321317a9b2a82de7a1181514529f9adf0bd483"
Feb 17 15:53:52.360359 master-0 kubenswrapper[26425]: I0217 15:53:52.360253 26425 scope.go:117] "RemoveContainer" containerID="08570b0bb72d0354c2b7168c6cfb3cce5c6760ce82ca47228797273c2f9275a6"
Feb 17 15:54:29.332093 master-0 kubenswrapper[26425]: I0217 15:54:29.331989 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:54:29.333115 master-0 kubenswrapper[26425]: E0217 15:54:29.332476 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:54:29.333115 master-0 kubenswrapper[26425]: E0217 15:54:29.332516 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:54:29.333115 master-0 kubenswrapper[26425]: E0217 15:54:29.332595 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.332569219 +0000 UTC m=+2453.224293047 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:56:24.253876 master-0 kubenswrapper[26425]: I0217 15:56:24.253739 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-qfrvt"]
Feb 17 15:56:24.333128 master-0 kubenswrapper[26425]: I0217 15:56:24.332954 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-737e-account-create-update-z4wjt"]
Feb 17 15:56:24.346730 master-0 kubenswrapper[26425]: I0217 15:56:24.346664 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-094f-account-create-update-9dg59"]
Feb 17 15:56:24.366554 master-0 kubenswrapper[26425]: I0217 15:56:24.366444 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-trh26"]
Feb 17 15:56:24.387707 master-0 kubenswrapper[26425]: I0217 15:56:24.387606 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-kjk8x"]
Feb 17 15:56:24.409524 master-0 kubenswrapper[26425]: I0217 15:56:24.409093 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-qfrvt"]
Feb 17 15:56:24.410560 master-0 kubenswrapper[26425]: I0217 15:56:24.410518 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-trh26"]
Feb 17 15:56:24.427639 master-0 kubenswrapper[26425]: I0217 15:56:24.427571 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-094f-account-create-update-9dg59"]
Feb 17 15:56:24.438747 master-0 kubenswrapper[26425]: I0217 15:56:24.438657 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-737e-account-create-update-z4wjt"]
Feb 17 15:56:24.448496 master-0 kubenswrapper[26425]: I0217 15:56:24.448429 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-kjk8x"]
Feb 17 15:56:25.154250 master-0 kubenswrapper[26425]: I0217 15:56:25.154165 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4c91-account-create-update-b2plp"]
Feb 17 15:56:25.173848 master-0 kubenswrapper[26425]: I0217 15:56:25.173781 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4c91-account-create-update-b2plp"]
Feb 17 15:56:26.412353 master-0 kubenswrapper[26425]: I0217 15:56:26.412281 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="386356e6-e395-4f3d-a52e-2228263bdc65" path="/var/lib/kubelet/pods/386356e6-e395-4f3d-a52e-2228263bdc65/volumes"
Feb 17 15:56:26.413124 master-0 kubenswrapper[26425]: I0217 15:56:26.413088 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a905daa-8d29-41e8-a6ce-64b0f1b1b249" path="/var/lib/kubelet/pods/5a905daa-8d29-41e8-a6ce-64b0f1b1b249/volumes"
Feb 17 15:56:26.413858 master-0 kubenswrapper[26425]: I0217 15:56:26.413823 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c96a413-ef0a-47d1-86cd-e3f1caec1368" path="/var/lib/kubelet/pods/5c96a413-ef0a-47d1-86cd-e3f1caec1368/volumes"
Feb 17 15:56:26.414789 master-0 kubenswrapper[26425]: I0217 15:56:26.414755 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8c6fa13-5c49-4e83-9492-208c6cd1fb61" path="/var/lib/kubelet/pods/a8c6fa13-5c49-4e83-9492-208c6cd1fb61/volumes"
Feb 17 15:56:26.416201 master-0 kubenswrapper[26425]: I0217 15:56:26.416165 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdaf2a40-bdbe-47b5-9b1f-42582d1301a2" path="/var/lib/kubelet/pods/cdaf2a40-bdbe-47b5-9b1f-42582d1301a2/volumes"
Feb 17 15:56:26.416982 master-0 kubenswrapper[26425]: I0217 15:56:26.416947 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d08335f3-bb90-4f16-baa9-55622ccb587e" path="/var/lib/kubelet/pods/d08335f3-bb90-4f16-baa9-55622ccb587e/volumes"
Feb 17 15:56:31.432387 master-0 kubenswrapper[26425]: I0217 15:56:31.432082 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 17 15:56:31.432387 master-0 kubenswrapper[26425]: E0217 15:56:31.432266 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:56:31.432387 master-0 kubenswrapper[26425]: E0217 15:56:31.432329 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 17 15:56:31.432387 master-0 kubenswrapper[26425]: E0217 15:56:31.432396 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 15:58:33.432374567 +0000 UTC m=+2575.324098395 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:56:43.451767 master-0 kubenswrapper[26425]: I0217 15:56:43.451698 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tdkt8"] Feb 17 15:56:43.532715 master-0 kubenswrapper[26425]: I0217 15:56:43.532600 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tdkt8"] Feb 17 15:56:44.416873 master-0 kubenswrapper[26425]: I0217 15:56:44.416817 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d1286c3-3a70-4281-bbae-80511edc3742" path="/var/lib/kubelet/pods/3d1286c3-3a70-4281-bbae-80511edc3742/volumes" Feb 17 15:56:51.068764 master-0 kubenswrapper[26425]: I0217 15:56:51.068640 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-88f2d"] Feb 17 15:56:51.082791 master-0 kubenswrapper[26425]: I0217 15:56:51.082651 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-88f2d"] Feb 17 15:56:52.412640 master-0 kubenswrapper[26425]: I0217 15:56:52.412543 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8d86a11-7897-4196-93bb-916b7472a6e0" path="/var/lib/kubelet/pods/b8d86a11-7897-4196-93bb-916b7472a6e0/volumes" Feb 17 15:56:52.541569 master-0 kubenswrapper[26425]: I0217 15:56:52.541084 26425 scope.go:117] "RemoveContainer" containerID="77ea48c09a5380a7b33b4112f09f82efc83f1381286b3d5bdc0551461d8c76a4" Feb 17 15:56:52.600822 master-0 kubenswrapper[26425]: I0217 15:56:52.600696 26425 scope.go:117] "RemoveContainer" containerID="cc3105b0256c4fba6875d29ef73613e744867a8acbff04d06d1deddfc3802b54" Feb 17 15:56:52.649730 master-0 kubenswrapper[26425]: I0217 15:56:52.649668 26425 
scope.go:117] "RemoveContainer" containerID="8ed57e22fab684de8deffd37a6e9489178c8f904ff19835d9ade5f01899276e9" Feb 17 15:56:52.706188 master-0 kubenswrapper[26425]: I0217 15:56:52.706130 26425 scope.go:117] "RemoveContainer" containerID="6d64391aa8ed49e80aecefcd3ccfa52ae7fa012ae77bda15e9cd74ebbc44fe74" Feb 17 15:56:52.768847 master-0 kubenswrapper[26425]: I0217 15:56:52.768801 26425 scope.go:117] "RemoveContainer" containerID="6cab4fdac51574937282da3ae98732e403df6a62c674c94f128a8f3581681ee1" Feb 17 15:56:52.801130 master-0 kubenswrapper[26425]: I0217 15:56:52.801041 26425 scope.go:117] "RemoveContainer" containerID="bb2b9338e04a990b2e96845b19cedd71d05699d6de202b162cb70a12033a1c2d" Feb 17 15:56:52.860128 master-0 kubenswrapper[26425]: I0217 15:56:52.859992 26425 scope.go:117] "RemoveContainer" containerID="7cd1e8345f1c8f26b430523cb3c3d659103ebc42c8723a917cd87a7de7108cbe" Feb 17 15:56:52.894942 master-0 kubenswrapper[26425]: I0217 15:56:52.894895 26425 scope.go:117] "RemoveContainer" containerID="794618c6172b3a35078743fa3aa977e50d16860b106a0b47f63fa9f15f882539" Feb 17 15:56:52.943837 master-0 kubenswrapper[26425]: I0217 15:56:52.943786 26425 scope.go:117] "RemoveContainer" containerID="7cb2ed632a92a11678708ebbbb548ed18cc865a4d4414aee575f015b4ac3728e" Feb 17 15:56:52.972696 master-0 kubenswrapper[26425]: I0217 15:56:52.972629 26425 scope.go:117] "RemoveContainer" containerID="1360b35c0b6b274bd2b7765c4a556c074a5e426f8981694a24d881b422d32819" Feb 17 15:56:53.076686 master-0 kubenswrapper[26425]: I0217 15:56:53.076636 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-5fmzp"] Feb 17 15:56:53.090063 master-0 kubenswrapper[26425]: I0217 15:56:53.090025 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-5fmzp"] Feb 17 15:56:54.421746 master-0 kubenswrapper[26425]: I0217 15:56:54.421681 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd12849e-ca95-4cf1-9374-46d1c8d4874b" 
path="/var/lib/kubelet/pods/cd12849e-ca95-4cf1-9374-46d1c8d4874b/volumes" Feb 17 15:56:57.045734 master-0 kubenswrapper[26425]: I0217 15:56:57.045646 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-g9g6p"] Feb 17 15:56:57.057388 master-0 kubenswrapper[26425]: I0217 15:56:57.057295 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-406d-account-create-update-qv9dz"] Feb 17 15:56:57.068223 master-0 kubenswrapper[26425]: I0217 15:56:57.068054 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-g9g6p"] Feb 17 15:56:57.078392 master-0 kubenswrapper[26425]: I0217 15:56:57.078312 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-406d-account-create-update-qv9dz"] Feb 17 15:56:57.089033 master-0 kubenswrapper[26425]: I0217 15:56:57.088958 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-be98-account-create-update-ccwpm"] Feb 17 15:56:57.100698 master-0 kubenswrapper[26425]: I0217 15:56:57.100632 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-be98-account-create-update-ccwpm"] Feb 17 15:56:58.416366 master-0 kubenswrapper[26425]: I0217 15:56:58.416302 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="733a610a-8f50-42dd-b159-6fd6a8959971" path="/var/lib/kubelet/pods/733a610a-8f50-42dd-b159-6fd6a8959971/volumes" Feb 17 15:56:58.417517 master-0 kubenswrapper[26425]: I0217 15:56:58.417482 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa34f651-bd45-4add-b97d-8e5194d3edf0" path="/var/lib/kubelet/pods/aa34f651-bd45-4add-b97d-8e5194d3edf0/volumes" Feb 17 15:56:58.418265 master-0 kubenswrapper[26425]: I0217 15:56:58.418220 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1197573-c94d-4d3c-9cd1-01b65ef0ec42" path="/var/lib/kubelet/pods/e1197573-c94d-4d3c-9cd1-01b65ef0ec42/volumes" Feb 17 15:57:04.056955 
master-0 kubenswrapper[26425]: I0217 15:57:04.056858 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-dqtpw"] Feb 17 15:57:04.071402 master-0 kubenswrapper[26425]: I0217 15:57:04.071325 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-dqtpw"] Feb 17 15:57:04.408874 master-0 kubenswrapper[26425]: I0217 15:57:04.408805 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5d5e735-50f8-40f8-b410-7bf5d95fadc4" path="/var/lib/kubelet/pods/a5d5e735-50f8-40f8-b410-7bf5d95fadc4/volumes" Feb 17 15:57:10.151883 master-0 kubenswrapper[26425]: I0217 15:57:10.151804 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-hgvqn"] Feb 17 15:57:10.175764 master-0 kubenswrapper[26425]: I0217 15:57:10.175690 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-hgvqn"] Feb 17 15:57:10.411664 master-0 kubenswrapper[26425]: I0217 15:57:10.410744 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad487fea-08d0-4fe4-98bc-39c6634cae41" path="/var/lib/kubelet/pods/ad487fea-08d0-4fe4-98bc-39c6634cae41/volumes" Feb 17 15:57:11.038875 master-0 kubenswrapper[26425]: I0217 15:57:11.038775 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-874a-account-create-update-lhwlv"] Feb 17 15:57:11.054955 master-0 kubenswrapper[26425]: I0217 15:57:11.054856 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-874a-account-create-update-lhwlv"] Feb 17 15:57:12.435147 master-0 kubenswrapper[26425]: I0217 15:57:12.435077 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a20a01dc-3034-43a8-ad78-2c3b1497c20a" path="/var/lib/kubelet/pods/a20a01dc-3034-43a8-ad78-2c3b1497c20a/volumes" Feb 17 15:57:25.053853 master-0 kubenswrapper[26425]: I0217 15:57:25.053765 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-sync-tgjmt"] Feb 17 15:57:25.073610 master-0 kubenswrapper[26425]: I0217 15:57:25.073524 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-tgjmt"] Feb 17 15:57:26.414501 master-0 kubenswrapper[26425]: I0217 15:57:26.414403 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c" path="/var/lib/kubelet/pods/6fb53c6c-5e5c-4cf6-9b07-c8036083fd2c/volumes" Feb 17 15:57:35.058453 master-0 kubenswrapper[26425]: I0217 15:57:35.058328 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lc5mm"] Feb 17 15:57:35.087672 master-0 kubenswrapper[26425]: I0217 15:57:35.087601 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lc5mm"] Feb 17 15:57:36.418094 master-0 kubenswrapper[26425]: I0217 15:57:36.417997 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8" path="/var/lib/kubelet/pods/d41ba4c9-1c82-4a6d-8593-1c6abfdd98e8/volumes" Feb 17 15:57:37.189649 master-0 kubenswrapper[26425]: I0217 15:57:37.189558 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-04ef3-db-sync-smx72"] Feb 17 15:57:37.206123 master-0 kubenswrapper[26425]: I0217 15:57:37.206040 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-04ef3-db-sync-smx72"] Feb 17 15:57:38.415721 master-0 kubenswrapper[26425]: I0217 15:57:38.415602 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92cdc0bf-17bd-4554-811c-89cf8bc1a52c" path="/var/lib/kubelet/pods/92cdc0bf-17bd-4554-811c-89cf8bc1a52c/volumes" Feb 17 15:57:45.043964 master-0 kubenswrapper[26425]: I0217 15:57:45.043765 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-kr2xk"] Feb 17 15:57:45.057146 master-0 kubenswrapper[26425]: I0217 15:57:45.057054 26425 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/neutron-db-sync-kr2xk"] Feb 17 15:57:46.427129 master-0 kubenswrapper[26425]: I0217 15:57:46.427042 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f2e8e8e-7b87-4127-b977-62f0c1f29717" path="/var/lib/kubelet/pods/0f2e8e8e-7b87-4127-b977-62f0c1f29717/volumes" Feb 17 15:57:53.209535 master-0 kubenswrapper[26425]: I0217 15:57:53.209352 26425 scope.go:117] "RemoveContainer" containerID="fd494964f17c0ce9f11b48b19939bd72bf4d96393dd2f5fec9ef7a6dec8aa69f" Feb 17 15:57:53.266568 master-0 kubenswrapper[26425]: I0217 15:57:53.266510 26425 scope.go:117] "RemoveContainer" containerID="d4a85a17d489bdc50a243e8f0ad1ea3a47c418d848059fa77e76575878a0991f" Feb 17 15:57:53.306340 master-0 kubenswrapper[26425]: I0217 15:57:53.306263 26425 scope.go:117] "RemoveContainer" containerID="01d846df3825403741424ac8d5b758b4d902dac12f4306f3c9e14b5b2d1cb982" Feb 17 15:57:53.358109 master-0 kubenswrapper[26425]: I0217 15:57:53.357769 26425 scope.go:117] "RemoveContainer" containerID="31f2a6139bed35247d0a6a1a5a552b455cd60d3a87b66f4b614590716ba8f863" Feb 17 15:57:53.413098 master-0 kubenswrapper[26425]: I0217 15:57:53.413038 26425 scope.go:117] "RemoveContainer" containerID="ae1e2b0f2885ad083bd79296b2d3432535b874e28905ecce3a8eef38ecc1ddfa" Feb 17 15:57:53.469352 master-0 kubenswrapper[26425]: I0217 15:57:53.469287 26425 scope.go:117] "RemoveContainer" containerID="bf032fea4616276011ffea11f209a24cc83f37fe4050b2355b7b86308ef6a20a" Feb 17 15:57:53.531782 master-0 kubenswrapper[26425]: I0217 15:57:53.531716 26425 scope.go:117] "RemoveContainer" containerID="620c3945d3633a038fb50fa5312a4e140308e131791efb0608f980fba0b6aaf8" Feb 17 15:57:53.572267 master-0 kubenswrapper[26425]: I0217 15:57:53.572173 26425 scope.go:117] "RemoveContainer" containerID="c18bde93643a192a077a7501dcab7eb4d7e938b2b97de7ab6e3d53fe8f9d7add" Feb 17 15:57:53.602888 master-0 kubenswrapper[26425]: I0217 15:57:53.602823 26425 scope.go:117] "RemoveContainer" 
containerID="f43308a817f8761f5f0118d50e70bd080cb1118c64446507e6a98ff0d7fe6314" Feb 17 15:57:53.639120 master-0 kubenswrapper[26425]: I0217 15:57:53.639053 26425 scope.go:117] "RemoveContainer" containerID="1171e5fc788282f957d4efe533b462c16525a08e6b73dd4360a7fc8e3081d216" Feb 17 15:57:53.664014 master-0 kubenswrapper[26425]: I0217 15:57:53.663917 26425 scope.go:117] "RemoveContainer" containerID="a15a54697c7f6c47d74e54fb83f72ae7373426b68f32d427761340ab4a7267a5" Feb 17 15:57:54.069484 master-0 kubenswrapper[26425]: I0217 15:57:54.068698 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-sync-8zl8z"] Feb 17 15:57:54.097276 master-0 kubenswrapper[26425]: I0217 15:57:54.097196 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-sync-8zl8z"] Feb 17 15:57:54.427729 master-0 kubenswrapper[26425]: I0217 15:57:54.425417 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f5e945-543a-4858-b5f8-7e33a1a22459" path="/var/lib/kubelet/pods/87f5e945-543a-4858-b5f8-7e33a1a22459/volumes" Feb 17 15:58:02.047384 master-0 kubenswrapper[26425]: I0217 15:58:02.047294 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-create-vmh7f"] Feb 17 15:58:02.057525 master-0 kubenswrapper[26425]: I0217 15:58:02.057444 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-create-vmh7f"] Feb 17 15:58:02.412629 master-0 kubenswrapper[26425]: I0217 15:58:02.412561 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e" path="/var/lib/kubelet/pods/80cbeb6f-bbb6-4c06-b0f9-fab5b89e9c4e/volumes" Feb 17 15:58:03.165236 master-0 kubenswrapper[26425]: I0217 15:58:03.165135 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-016b-account-create-update-v8zdc"] Feb 17 15:58:03.197231 master-0 kubenswrapper[26425]: I0217 15:58:03.197160 26425 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-016b-account-create-update-v8zdc"] Feb 17 15:58:04.429652 master-0 kubenswrapper[26425]: I0217 15:58:04.429599 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca551755-0560-44aa-b5f9-3e9bfc9984af" path="/var/lib/kubelet/pods/ca551755-0560-44aa-b5f9-3e9bfc9984af/volumes" Feb 17 15:58:33.443962 master-0 kubenswrapper[26425]: I0217 15:58:33.443864 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 15:58:33.444689 master-0 kubenswrapper[26425]: E0217 15:58:33.444077 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:58:33.444689 master-0 kubenswrapper[26425]: E0217 15:58:33.444118 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:58:33.444689 master-0 kubenswrapper[26425]: E0217 15:58:33.444183 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 16:00:35.444166285 +0000 UTC m=+2697.335890103 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 15:58:36.268502 master-0 kubenswrapper[26425]: I0217 15:58:36.268379 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-4lmzn"] Feb 17 15:58:36.294809 master-0 kubenswrapper[26425]: I0217 15:58:36.294706 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-4lmzn"] Feb 17 15:58:36.417640 master-0 kubenswrapper[26425]: I0217 15:58:36.417555 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fde6099-c168-43b1-acbf-cbbdc3ca2435" path="/var/lib/kubelet/pods/7fde6099-c168-43b1-acbf-cbbdc3ca2435/volumes" Feb 17 15:58:37.134226 master-0 kubenswrapper[26425]: I0217 15:58:37.134165 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-87e5-account-create-update-45dj5"] Feb 17 15:58:37.169730 master-0 kubenswrapper[26425]: I0217 15:58:37.169651 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-87e5-account-create-update-45dj5"] Feb 17 15:58:38.419241 master-0 kubenswrapper[26425]: I0217 15:58:38.419181 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e" path="/var/lib/kubelet/pods/ab38c29e-22bf-46dc-ac9c-2efc64fa0c1e/volumes" Feb 17 15:58:39.129734 master-0 kubenswrapper[26425]: I0217 15:58:39.129642 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5cd4-account-create-update-hwzx4"] Feb 17 15:58:39.143213 master-0 kubenswrapper[26425]: I0217 15:58:39.143112 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f7f8-account-create-update-2x5s2"] Feb 17 15:58:39.156802 master-0 kubenswrapper[26425]: 
I0217 15:58:39.156722 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5cd4-account-create-update-hwzx4"] Feb 17 15:58:39.168748 master-0 kubenswrapper[26425]: I0217 15:58:39.168670 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-69tfm"] Feb 17 15:58:39.177413 master-0 kubenswrapper[26425]: I0217 15:58:39.177340 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f7f8-account-create-update-2x5s2"] Feb 17 15:58:39.186192 master-0 kubenswrapper[26425]: I0217 15:58:39.186103 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-69tfm"] Feb 17 15:58:40.047264 master-0 kubenswrapper[26425]: I0217 15:58:40.047177 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-pbs2f"] Feb 17 15:58:40.060623 master-0 kubenswrapper[26425]: I0217 15:58:40.060555 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-pbs2f"] Feb 17 15:58:40.410964 master-0 kubenswrapper[26425]: I0217 15:58:40.410909 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0531c200-ea9b-4ed4-8e7a-ef60e88b8447" path="/var/lib/kubelet/pods/0531c200-ea9b-4ed4-8e7a-ef60e88b8447/volumes" Feb 17 15:58:40.411566 master-0 kubenswrapper[26425]: I0217 15:58:40.411537 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8" path="/var/lib/kubelet/pods/5a31ffb7-3788-4095-aa10-a7e5ca6ec7b8/volumes" Feb 17 15:58:40.412111 master-0 kubenswrapper[26425]: I0217 15:58:40.412084 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60f15465-91a0-44b7-813b-7b3d36d81bd5" path="/var/lib/kubelet/pods/60f15465-91a0-44b7-813b-7b3d36d81bd5/volumes" Feb 17 15:58:40.412937 master-0 kubenswrapper[26425]: I0217 15:58:40.412639 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d44bf429-8fa4-486c-ab29-eea74da59e3d" path="/var/lib/kubelet/pods/d44bf429-8fa4-486c-ab29-eea74da59e3d/volumes" Feb 17 15:58:41.050638 master-0 kubenswrapper[26425]: I0217 15:58:41.045403 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-sync-x86bq"] Feb 17 15:58:41.059573 master-0 kubenswrapper[26425]: I0217 15:58:41.059516 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-sync-x86bq"] Feb 17 15:58:42.415431 master-0 kubenswrapper[26425]: I0217 15:58:42.415354 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0704cefc-181d-40ab-ba9c-a204b5f85727" path="/var/lib/kubelet/pods/0704cefc-181d-40ab-ba9c-a204b5f85727/volumes" Feb 17 15:58:54.011688 master-0 kubenswrapper[26425]: I0217 15:58:54.011607 26425 scope.go:117] "RemoveContainer" containerID="0def1e902fd277938055117ef5e556aeeca6a61c9aab95a4a574f56cd2a7c5ac" Feb 17 15:58:54.084164 master-0 kubenswrapper[26425]: I0217 15:58:54.084063 26425 scope.go:117] "RemoveContainer" containerID="fbd7bd9ececd5cc6a0ff7ae37eb6d6c44ae7099e4315291721f390452760d9e0" Feb 17 15:58:54.147535 master-0 kubenswrapper[26425]: I0217 15:58:54.147445 26425 scope.go:117] "RemoveContainer" containerID="6a0bc9d9d1cc5cd5109ffc72849a04122b21698823b3fb0996bf7486e598b42e" Feb 17 15:58:54.196136 master-0 kubenswrapper[26425]: I0217 15:58:54.196072 26425 scope.go:117] "RemoveContainer" containerID="58831b6af58318199993d4aab760283e53af16aeeed02ff18022ebff88e51d30" Feb 17 15:58:54.252034 master-0 kubenswrapper[26425]: I0217 15:58:54.251979 26425 scope.go:117] "RemoveContainer" containerID="d423eed300799e9cc4d1570f67629003077b5c16d1b830d2104c74b29a6ad990" Feb 17 15:58:54.311033 master-0 kubenswrapper[26425]: I0217 15:58:54.310982 26425 scope.go:117] "RemoveContainer" containerID="175440b5d051ec5ecc23fe38127fe38ac3f6e39814c6637d0a0b21a8990aa777" Feb 17 15:58:54.338729 master-0 kubenswrapper[26425]: I0217 15:58:54.338669 26425 scope.go:117] 
"RemoveContainer" containerID="e11f1900c268d0c56f6662e9d2994680bba2a762c92975b43117920ae0e0c212" Feb 17 15:58:54.367185 master-0 kubenswrapper[26425]: I0217 15:58:54.367136 26425 scope.go:117] "RemoveContainer" containerID="84df170bf27564f44d0dbc24c00f5ced3ae912d748647862b5d60374038b8fd0" Feb 17 15:58:54.392172 master-0 kubenswrapper[26425]: I0217 15:58:54.392117 26425 scope.go:117] "RemoveContainer" containerID="e86aa7cd8eb2364bb61f01a987eadcfc1f4f4b956be4744d7e71b5921eb41fca" Feb 17 15:58:54.421157 master-0 kubenswrapper[26425]: I0217 15:58:54.421108 26425 scope.go:117] "RemoveContainer" containerID="4f5a04768269329cc7afdfaa29dde4d88746161efd2f4d4256568136ef8459f2" Feb 17 15:58:54.445607 master-0 kubenswrapper[26425]: I0217 15:58:54.445562 26425 scope.go:117] "RemoveContainer" containerID="7e5e0391d8c32e08824683c890492f0a9a9dd8718cd8c97a3e0f2c389c1cf0d4" Feb 17 15:59:25.118099 master-0 kubenswrapper[26425]: I0217 15:59:25.117003 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8gbxf"] Feb 17 15:59:25.138845 master-0 kubenswrapper[26425]: I0217 15:59:25.138748 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8gbxf"] Feb 17 15:59:26.417407 master-0 kubenswrapper[26425]: I0217 15:59:26.417324 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ef05a2-b338-4a41-9b86-147f7dd1e242" path="/var/lib/kubelet/pods/62ef05a2-b338-4a41-9b86-147f7dd1e242/volumes" Feb 17 15:59:52.083658 master-0 kubenswrapper[26425]: I0217 15:59:52.083573 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-9btmx"] Feb 17 15:59:52.102531 master-0 kubenswrapper[26425]: I0217 15:59:52.101994 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-9btmx"] Feb 17 15:59:52.414763 master-0 kubenswrapper[26425]: I0217 15:59:52.414687 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a5964ec6-84ef-4164-8701-252638ec2109" path="/var/lib/kubelet/pods/a5964ec6-84ef-4164-8701-252638ec2109/volumes" Feb 17 15:59:54.070679 master-0 kubenswrapper[26425]: I0217 15:59:54.070567 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4vxwz"] Feb 17 15:59:54.080011 master-0 kubenswrapper[26425]: I0217 15:59:54.079922 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4vxwz"] Feb 17 15:59:54.423790 master-0 kubenswrapper[26425]: I0217 15:59:54.423634 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e34b203-c823-4193-99ca-d9d8f89c1c41" path="/var/lib/kubelet/pods/0e34b203-c823-4193-99ca-d9d8f89c1c41/volumes" Feb 17 15:59:54.798119 master-0 kubenswrapper[26425]: I0217 15:59:54.797905 26425 scope.go:117] "RemoveContainer" containerID="59c85c1e1341b23cfffb0a683db7dd911aa6d9778114ba9881eae9d12be587a1" Feb 17 15:59:54.860510 master-0 kubenswrapper[26425]: I0217 15:59:54.860416 26425 scope.go:117] "RemoveContainer" containerID="0137005ba5d60d8db1168f17cfcc284062553c4866d6b7f97c77299378990b3e" Feb 17 15:59:54.940412 master-0 kubenswrapper[26425]: I0217 15:59:54.940192 26425 scope.go:117] "RemoveContainer" containerID="2178fd764ff761480f789a27e868dfa3b752f5be6338e55bf282759a55624309" Feb 17 16:00:00.220582 master-0 kubenswrapper[26425]: I0217 16:00:00.220496 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s"] Feb 17 16:00:00.222691 master-0 kubenswrapper[26425]: I0217 16:00:00.222646 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" Feb 17 16:00:00.231987 master-0 kubenswrapper[26425]: I0217 16:00:00.231920 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:00:00.232243 master-0 kubenswrapper[26425]: I0217 16:00:00.231926 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-fqc4f" Feb 17 16:00:00.237666 master-0 kubenswrapper[26425]: I0217 16:00:00.233630 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s"] Feb 17 16:00:00.335346 master-0 kubenswrapper[26425]: I0217 16:00:00.335283 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-config-volume\") pod \"collect-profiles-29522400-hgd4s\" (UID: \"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" Feb 17 16:00:00.335656 master-0 kubenswrapper[26425]: I0217 16:00:00.335594 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdctq\" (UniqueName: \"kubernetes.io/projected/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-kube-api-access-wdctq\") pod \"collect-profiles-29522400-hgd4s\" (UID: \"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" Feb 17 16:00:00.335737 master-0 kubenswrapper[26425]: I0217 16:00:00.335693 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-secret-volume\") pod \"collect-profiles-29522400-hgd4s\" (UID: 
\"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" Feb 17 16:00:00.437623 master-0 kubenswrapper[26425]: I0217 16:00:00.437535 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-config-volume\") pod \"collect-profiles-29522400-hgd4s\" (UID: \"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" Feb 17 16:00:00.437873 master-0 kubenswrapper[26425]: I0217 16:00:00.437822 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdctq\" (UniqueName: \"kubernetes.io/projected/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-kube-api-access-wdctq\") pod \"collect-profiles-29522400-hgd4s\" (UID: \"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" Feb 17 16:00:00.437926 master-0 kubenswrapper[26425]: I0217 16:00:00.437889 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-secret-volume\") pod \"collect-profiles-29522400-hgd4s\" (UID: \"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" Feb 17 16:00:00.438758 master-0 kubenswrapper[26425]: I0217 16:00:00.438677 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-config-volume\") pod \"collect-profiles-29522400-hgd4s\" (UID: \"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" Feb 17 16:00:00.451607 master-0 kubenswrapper[26425]: I0217 16:00:00.443556 26425 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-secret-volume\") pod \"collect-profiles-29522400-hgd4s\" (UID: \"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" Feb 17 16:00:00.465108 master-0 kubenswrapper[26425]: I0217 16:00:00.465050 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdctq\" (UniqueName: \"kubernetes.io/projected/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-kube-api-access-wdctq\") pod \"collect-profiles-29522400-hgd4s\" (UID: \"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" Feb 17 16:00:00.554576 master-0 kubenswrapper[26425]: I0217 16:00:00.554357 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" Feb 17 16:00:01.046619 master-0 kubenswrapper[26425]: W0217 16:00:01.046546 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1aca6d5_7b68_41da_a4bf_d0edec3765c2.slice/crio-57fa5a0dd89136c32d45f1a841836cc20c993fa8bd6f80fa9a6d91987580c6f2 WatchSource:0}: Error finding container 57fa5a0dd89136c32d45f1a841836cc20c993fa8bd6f80fa9a6d91987580c6f2: Status 404 returned error can't find the container with id 57fa5a0dd89136c32d45f1a841836cc20c993fa8bd6f80fa9a6d91987580c6f2 Feb 17 16:00:01.047384 master-0 kubenswrapper[26425]: I0217 16:00:01.047323 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s"] Feb 17 16:00:01.231120 master-0 kubenswrapper[26425]: I0217 16:00:01.231033 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" 
event={"ID":"b1aca6d5-7b68-41da-a4bf-d0edec3765c2","Type":"ContainerStarted","Data":"57fa5a0dd89136c32d45f1a841836cc20c993fa8bd6f80fa9a6d91987580c6f2"} Feb 17 16:00:02.248436 master-0 kubenswrapper[26425]: I0217 16:00:02.248375 26425 generic.go:334] "Generic (PLEG): container finished" podID="b1aca6d5-7b68-41da-a4bf-d0edec3765c2" containerID="1a928cddc2ade3e00830f4525a46a55d38d3ae0a232b8a12768afcbfa717ae4f" exitCode=0 Feb 17 16:00:02.248436 master-0 kubenswrapper[26425]: I0217 16:00:02.248426 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" event={"ID":"b1aca6d5-7b68-41da-a4bf-d0edec3765c2","Type":"ContainerDied","Data":"1a928cddc2ade3e00830f4525a46a55d38d3ae0a232b8a12768afcbfa717ae4f"} Feb 17 16:00:03.767275 master-0 kubenswrapper[26425]: I0217 16:00:03.767192 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" Feb 17 16:00:03.854167 master-0 kubenswrapper[26425]: I0217 16:00:03.854079 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdctq\" (UniqueName: \"kubernetes.io/projected/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-kube-api-access-wdctq\") pod \"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\" (UID: \"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\") " Feb 17 16:00:03.854512 master-0 kubenswrapper[26425]: I0217 16:00:03.854266 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-secret-volume\") pod \"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\" (UID: \"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\") " Feb 17 16:00:03.854512 master-0 kubenswrapper[26425]: I0217 16:00:03.854322 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-config-volume\") pod \"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\" (UID: \"b1aca6d5-7b68-41da-a4bf-d0edec3765c2\") " Feb 17 16:00:03.855695 master-0 kubenswrapper[26425]: I0217 16:00:03.855647 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-config-volume" (OuterVolumeSpecName: "config-volume") pod "b1aca6d5-7b68-41da-a4bf-d0edec3765c2" (UID: "b1aca6d5-7b68-41da-a4bf-d0edec3765c2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:03.857939 master-0 kubenswrapper[26425]: I0217 16:00:03.857888 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-kube-api-access-wdctq" (OuterVolumeSpecName: "kube-api-access-wdctq") pod "b1aca6d5-7b68-41da-a4bf-d0edec3765c2" (UID: "b1aca6d5-7b68-41da-a4bf-d0edec3765c2"). InnerVolumeSpecName "kube-api-access-wdctq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:03.860562 master-0 kubenswrapper[26425]: I0217 16:00:03.860423 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b1aca6d5-7b68-41da-a4bf-d0edec3765c2" (UID: "b1aca6d5-7b68-41da-a4bf-d0edec3765c2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:03.959481 master-0 kubenswrapper[26425]: I0217 16:00:03.959369 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdctq\" (UniqueName: \"kubernetes.io/projected/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-kube-api-access-wdctq\") on node \"master-0\" DevicePath \"\"" Feb 17 16:00:03.959481 master-0 kubenswrapper[26425]: I0217 16:00:03.959444 26425 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 17 16:00:03.959481 master-0 kubenswrapper[26425]: I0217 16:00:03.959466 26425 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1aca6d5-7b68-41da-a4bf-d0edec3765c2-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 17 16:00:04.278668 master-0 kubenswrapper[26425]: I0217 16:00:04.278509 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" event={"ID":"b1aca6d5-7b68-41da-a4bf-d0edec3765c2","Type":"ContainerDied","Data":"57fa5a0dd89136c32d45f1a841836cc20c993fa8bd6f80fa9a6d91987580c6f2"} Feb 17 16:00:04.278668 master-0 kubenswrapper[26425]: I0217 16:00:04.278569 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57fa5a0dd89136c32d45f1a841836cc20c993fa8bd6f80fa9a6d91987580c6f2" Feb 17 16:00:04.278668 master-0 kubenswrapper[26425]: I0217 16:00:04.278588 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s" Feb 17 16:00:04.940678 master-0 kubenswrapper[26425]: I0217 16:00:04.940600 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq"] Feb 17 16:00:04.952832 master-0 kubenswrapper[26425]: I0217 16:00:04.952746 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq"] Feb 17 16:00:06.422151 master-0 kubenswrapper[26425]: I0217 16:00:06.422039 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee5899ff-327d-4944-b3ae-84d82973d0a5" path="/var/lib/kubelet/pods/ee5899ff-327d-4944-b3ae-84d82973d0a5/volumes" Feb 17 16:00:16.440748 master-0 kubenswrapper[26425]: I0217 16:00:16.440507 26425 patch_prober.go:28] interesting pod/catalog-operator-588944557d-kjh2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 16:00:16.440748 master-0 kubenswrapper[26425]: I0217 16:00:16.440630 26425 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v" podUID="08e27254-e906-484a-b346-036f898be3ae" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 16:00:16.711493 master-0 kubenswrapper[26425]: I0217 16:00:16.711268 26425 trace.go:236] Trace[1332728766]: "Calculate volume metrics of glance for pod openstack/glance-7b9c2-default-external-api-0" (17-Feb-2026 16:00:15.550) (total time: 1160ms): Feb 17 16:00:16.711493 master-0 kubenswrapper[26425]: 
Trace[1332728766]: [1.160511489s] [1.160511489s] END Feb 17 16:00:31.087300 master-0 kubenswrapper[26425]: I0217 16:00:31.087189 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-host-discover-7vrrr"] Feb 17 16:00:31.102347 master-0 kubenswrapper[26425]: I0217 16:00:31.102253 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-host-discover-7vrrr"] Feb 17 16:00:32.415663 master-0 kubenswrapper[26425]: I0217 16:00:32.413973 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6afe145e-02b2-47d2-9b6c-b828c271aa68" path="/var/lib/kubelet/pods/6afe145e-02b2-47d2-9b6c-b828c271aa68/volumes" Feb 17 16:00:33.088106 master-0 kubenswrapper[26425]: I0217 16:00:33.088019 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5x59m"] Feb 17 16:00:33.107187 master-0 kubenswrapper[26425]: I0217 16:00:33.107041 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5x59m"] Feb 17 16:00:34.421654 master-0 kubenswrapper[26425]: I0217 16:00:34.421507 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6af31fb-9e97-4939-9320-2ed232a3a039" path="/var/lib/kubelet/pods/c6af31fb-9e97-4939-9320-2ed232a3a039/volumes" Feb 17 16:00:35.451656 master-0 kubenswrapper[26425]: I0217 16:00:35.451594 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 16:00:35.452155 master-0 kubenswrapper[26425]: E0217 16:00:35.451991 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:00:35.452155 master-0 kubenswrapper[26425]: E0217 
16:00:35.452013 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:00:35.452155 master-0 kubenswrapper[26425]: E0217 16:00:35.452064 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 16:02:37.452046611 +0000 UTC m=+2819.343770439 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:00:55.090001 master-0 kubenswrapper[26425]: I0217 16:00:55.089881 26425 scope.go:117] "RemoveContainer" containerID="1c78601402238fce171a7e1f66830051a044eb87c43bdb26c3a5847d62615724" Feb 17 16:00:55.158386 master-0 kubenswrapper[26425]: I0217 16:00:55.157727 26425 scope.go:117] "RemoveContainer" containerID="fb2581f4bbd1b0dec1d9b05d2d73cf6a6f4673b29c16df7a7ea16e4a276ae4f7" Feb 17 16:00:55.220632 master-0 kubenswrapper[26425]: I0217 16:00:55.220579 26425 scope.go:117] "RemoveContainer" containerID="3c59779e2c3acceff9a6741b9ce7f2f36e0bae77e413da5b192e5056ce1e9f29" Feb 17 16:01:00.570795 master-0 kubenswrapper[26425]: I0217 16:01:00.570722 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29522401-79wwl"] Feb 17 16:01:00.571663 master-0 kubenswrapper[26425]: E0217 16:01:00.571298 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1aca6d5-7b68-41da-a4bf-d0edec3765c2" containerName="collect-profiles" Feb 17 16:01:00.571663 master-0 kubenswrapper[26425]: I0217 16:01:00.571313 26425 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="b1aca6d5-7b68-41da-a4bf-d0edec3765c2" containerName="collect-profiles" Feb 17 16:01:00.571663 master-0 kubenswrapper[26425]: I0217 16:01:00.571651 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1aca6d5-7b68-41da-a4bf-d0edec3765c2" containerName="collect-profiles" Feb 17 16:01:00.572478 master-0 kubenswrapper[26425]: I0217 16:01:00.572429 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:01:00.596590 master-0 kubenswrapper[26425]: I0217 16:01:00.592083 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522401-79wwl"] Feb 17 16:01:00.627930 master-0 kubenswrapper[26425]: I0217 16:01:00.627745 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-config-data\") pod \"keystone-cron-29522401-79wwl\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:01:00.627930 master-0 kubenswrapper[26425]: I0217 16:01:00.627839 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-combined-ca-bundle\") pod \"keystone-cron-29522401-79wwl\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:01:00.628294 master-0 kubenswrapper[26425]: I0217 16:01:00.628124 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cj28\" (UniqueName: \"kubernetes.io/projected/fa39e658-3d86-474f-a1d0-5d6cb60a0097-kube-api-access-4cj28\") pod \"keystone-cron-29522401-79wwl\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:01:00.628516 master-0 
kubenswrapper[26425]: I0217 16:01:00.628418 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-fernet-keys\") pod \"keystone-cron-29522401-79wwl\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:01:00.733623 master-0 kubenswrapper[26425]: I0217 16:01:00.731175 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-fernet-keys\") pod \"keystone-cron-29522401-79wwl\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:01:00.733623 master-0 kubenswrapper[26425]: I0217 16:01:00.731282 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-config-data\") pod \"keystone-cron-29522401-79wwl\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:01:00.733623 master-0 kubenswrapper[26425]: I0217 16:01:00.731311 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-combined-ca-bundle\") pod \"keystone-cron-29522401-79wwl\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:01:00.733623 master-0 kubenswrapper[26425]: I0217 16:01:00.731453 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cj28\" (UniqueName: \"kubernetes.io/projected/fa39e658-3d86-474f-a1d0-5d6cb60a0097-kube-api-access-4cj28\") pod \"keystone-cron-29522401-79wwl\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " pod="openstack/keystone-cron-29522401-79wwl" Feb 
17 16:01:00.741896 master-0 kubenswrapper[26425]: I0217 16:01:00.735659 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-fernet-keys\") pod \"keystone-cron-29522401-79wwl\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:01:00.741896 master-0 kubenswrapper[26425]: I0217 16:01:00.738582 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-config-data\") pod \"keystone-cron-29522401-79wwl\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:01:00.746571 master-0 kubenswrapper[26425]: I0217 16:01:00.745879 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-combined-ca-bundle\") pod \"keystone-cron-29522401-79wwl\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:01:00.759118 master-0 kubenswrapper[26425]: I0217 16:01:00.758789 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cj28\" (UniqueName: \"kubernetes.io/projected/fa39e658-3d86-474f-a1d0-5d6cb60a0097-kube-api-access-4cj28\") pod \"keystone-cron-29522401-79wwl\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:01:00.907042 master-0 kubenswrapper[26425]: I0217 16:01:00.906970 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:01:01.414645 master-0 kubenswrapper[26425]: I0217 16:01:01.414585 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522401-79wwl"] Feb 17 16:01:02.123498 master-0 kubenswrapper[26425]: I0217 16:01:02.120361 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522401-79wwl" event={"ID":"fa39e658-3d86-474f-a1d0-5d6cb60a0097","Type":"ContainerStarted","Data":"2ba55cef936c8ea41c8e75092d3081eaaa927a434a46b541a4d88f883344671c"} Feb 17 16:01:02.123498 master-0 kubenswrapper[26425]: I0217 16:01:02.120444 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522401-79wwl" event={"ID":"fa39e658-3d86-474f-a1d0-5d6cb60a0097","Type":"ContainerStarted","Data":"925f68cf0fa0ed7b6c2ffe6835cd8effd024b0c12f4c878aab02f47e3fae1aa3"} Feb 17 16:01:02.344252 master-0 kubenswrapper[26425]: I0217 16:01:02.336387 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29522401-79wwl" podStartSLOduration=2.336360487 podStartE2EDuration="2.336360487s" podCreationTimestamp="2026-02-17 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:01:02.334387868 +0000 UTC m=+2724.226111726" watchObservedRunningTime="2026-02-17 16:01:02.336360487 +0000 UTC m=+2724.228084325" Feb 17 16:01:04.150653 master-0 kubenswrapper[26425]: I0217 16:01:04.150567 26425 generic.go:334] "Generic (PLEG): container finished" podID="fa39e658-3d86-474f-a1d0-5d6cb60a0097" containerID="2ba55cef936c8ea41c8e75092d3081eaaa927a434a46b541a4d88f883344671c" exitCode=0 Feb 17 16:01:04.151743 master-0 kubenswrapper[26425]: I0217 16:01:04.151651 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522401-79wwl" 
event={"ID":"fa39e658-3d86-474f-a1d0-5d6cb60a0097","Type":"ContainerDied","Data":"2ba55cef936c8ea41c8e75092d3081eaaa927a434a46b541a4d88f883344671c"} Feb 17 16:01:05.638581 master-0 kubenswrapper[26425]: I0217 16:01:05.638524 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:01:05.723492 master-0 kubenswrapper[26425]: I0217 16:01:05.721439 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cj28\" (UniqueName: \"kubernetes.io/projected/fa39e658-3d86-474f-a1d0-5d6cb60a0097-kube-api-access-4cj28\") pod \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " Feb 17 16:01:05.723492 master-0 kubenswrapper[26425]: I0217 16:01:05.721708 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-fernet-keys\") pod \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " Feb 17 16:01:05.723492 master-0 kubenswrapper[26425]: I0217 16:01:05.721783 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-combined-ca-bundle\") pod \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " Feb 17 16:01:05.723492 master-0 kubenswrapper[26425]: I0217 16:01:05.722023 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-config-data\") pod \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\" (UID: \"fa39e658-3d86-474f-a1d0-5d6cb60a0097\") " Feb 17 16:01:05.733986 master-0 kubenswrapper[26425]: I0217 16:01:05.731692 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fa39e658-3d86-474f-a1d0-5d6cb60a0097-kube-api-access-4cj28" (OuterVolumeSpecName: "kube-api-access-4cj28") pod "fa39e658-3d86-474f-a1d0-5d6cb60a0097" (UID: "fa39e658-3d86-474f-a1d0-5d6cb60a0097"). InnerVolumeSpecName "kube-api-access-4cj28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:05.747494 master-0 kubenswrapper[26425]: I0217 16:01:05.737662 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fa39e658-3d86-474f-a1d0-5d6cb60a0097" (UID: "fa39e658-3d86-474f-a1d0-5d6cb60a0097"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:01:05.791480 master-0 kubenswrapper[26425]: I0217 16:01:05.791228 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa39e658-3d86-474f-a1d0-5d6cb60a0097" (UID: "fa39e658-3d86-474f-a1d0-5d6cb60a0097"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:01:05.828172 master-0 kubenswrapper[26425]: I0217 16:01:05.826148 26425 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 17 16:01:05.828172 master-0 kubenswrapper[26425]: I0217 16:01:05.826200 26425 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 17 16:01:05.828172 master-0 kubenswrapper[26425]: I0217 16:01:05.826212 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cj28\" (UniqueName: \"kubernetes.io/projected/fa39e658-3d86-474f-a1d0-5d6cb60a0097-kube-api-access-4cj28\") on node \"master-0\" DevicePath \"\"" Feb 17 16:01:05.891562 master-0 kubenswrapper[26425]: I0217 16:01:05.889596 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-config-data" (OuterVolumeSpecName: "config-data") pod "fa39e658-3d86-474f-a1d0-5d6cb60a0097" (UID: "fa39e658-3d86-474f-a1d0-5d6cb60a0097"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:01:05.928858 master-0 kubenswrapper[26425]: I0217 16:01:05.928779 26425 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa39e658-3d86-474f-a1d0-5d6cb60a0097-config-data\") on node \"master-0\" DevicePath \"\"" Feb 17 16:01:06.190295 master-0 kubenswrapper[26425]: I0217 16:01:06.190221 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522401-79wwl" event={"ID":"fa39e658-3d86-474f-a1d0-5d6cb60a0097","Type":"ContainerDied","Data":"925f68cf0fa0ed7b6c2ffe6835cd8effd024b0c12f4c878aab02f47e3fae1aa3"} Feb 17 16:01:06.190295 master-0 kubenswrapper[26425]: I0217 16:01:06.190272 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="925f68cf0fa0ed7b6c2ffe6835cd8effd024b0c12f4c878aab02f47e3fae1aa3" Feb 17 16:01:06.190782 master-0 kubenswrapper[26425]: I0217 16:01:06.190356 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522401-79wwl" Feb 17 16:02:37.457077 master-0 kubenswrapper[26425]: I0217 16:02:37.456976 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 16:02:37.459763 master-0 kubenswrapper[26425]: E0217 16:02:37.459709 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:02:37.459763 master-0 kubenswrapper[26425]: E0217 16:02:37.459750 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:02:37.459911 master-0 kubenswrapper[26425]: E0217 16:02:37.459793 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 16:04:39.459776194 +0000 UTC m=+2941.351500012 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:04:39.473044 master-0 kubenswrapper[26425]: I0217 16:04:39.472977 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 16:04:39.488171 master-0 kubenswrapper[26425]: E0217 16:04:39.488114 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:04:39.488171 master-0 kubenswrapper[26425]: E0217 16:04:39.488166 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:04:39.488370 master-0 kubenswrapper[26425]: E0217 16:04:39.488230 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 16:06:41.48820917 +0000 UTC m=+3063.379933018 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:06:41.502244 master-0 kubenswrapper[26425]: I0217 16:06:41.502110 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 16:06:41.504075 master-0 kubenswrapper[26425]: E0217 16:06:41.503180 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:06:41.504075 master-0 kubenswrapper[26425]: E0217 16:06:41.503231 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:06:41.504075 master-0 kubenswrapper[26425]: E0217 16:06:41.503314 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 16:08:43.503280438 +0000 UTC m=+3185.395004286 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:08:43.537779 master-0 kubenswrapper[26425]: I0217 16:08:43.537654 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 16:08:43.538900 master-0 kubenswrapper[26425]: E0217 16:08:43.537869 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:08:43.538900 master-0 kubenswrapper[26425]: E0217 16:08:43.537906 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:08:43.538900 master-0 kubenswrapper[26425]: E0217 16:08:43.537964 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 16:10:45.537946352 +0000 UTC m=+3307.429670170 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:10:45.560803 master-0 kubenswrapper[26425]: I0217 16:10:45.560735 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 16:10:45.561293 master-0 kubenswrapper[26425]: E0217 16:10:45.560890 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:10:45.561293 master-0 kubenswrapper[26425]: E0217 16:10:45.560930 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:10:45.561293 master-0 kubenswrapper[26425]: E0217 16:10:45.560989 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 16:12:47.560966631 +0000 UTC m=+3429.452690509 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:12:47.655768 master-0 kubenswrapper[26425]: I0217 16:12:47.655532 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 16:12:47.655768 master-0 kubenswrapper[26425]: E0217 16:12:47.655731 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:12:47.655768 master-0 kubenswrapper[26425]: E0217 16:12:47.655774 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:12:47.656758 master-0 kubenswrapper[26425]: E0217 16:12:47.655831 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 16:14:49.655812641 +0000 UTC m=+3551.547536459 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:14:49.681476 master-0 kubenswrapper[26425]: I0217 16:14:49.681373 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 16:14:49.682568 master-0 kubenswrapper[26425]: E0217 16:14:49.682527 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:14:49.682701 master-0 kubenswrapper[26425]: E0217 16:14:49.682683 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:14:49.682867 master-0 kubenswrapper[26425]: E0217 16:14:49.682849 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 16:16:51.682828055 +0000 UTC m=+3673.574551893 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:15:00.180697 master-0 kubenswrapper[26425]: I0217 16:15:00.179588 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2"] Feb 17 16:15:00.180697 master-0 kubenswrapper[26425]: E0217 16:15:00.180395 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa39e658-3d86-474f-a1d0-5d6cb60a0097" containerName="keystone-cron" Feb 17 16:15:00.180697 master-0 kubenswrapper[26425]: I0217 16:15:00.180421 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa39e658-3d86-474f-a1d0-5d6cb60a0097" containerName="keystone-cron" Feb 17 16:15:00.181547 master-0 kubenswrapper[26425]: I0217 16:15:00.180923 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa39e658-3d86-474f-a1d0-5d6cb60a0097" containerName="keystone-cron" Feb 17 16:15:00.182363 master-0 kubenswrapper[26425]: I0217 16:15:00.182322 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" Feb 17 16:15:00.185840 master-0 kubenswrapper[26425]: I0217 16:15:00.185794 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:15:00.186029 master-0 kubenswrapper[26425]: I0217 16:15:00.186000 26425 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-fqc4f" Feb 17 16:15:00.242142 master-0 kubenswrapper[26425]: I0217 16:15:00.241491 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2"] Feb 17 16:15:00.283077 master-0 kubenswrapper[26425]: I0217 16:15:00.281732 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43ed693-18c0-4bc8-a41c-b8fa72325330-config-volume\") pod \"collect-profiles-29522415-dwsg2\" (UID: \"b43ed693-18c0-4bc8-a41c-b8fa72325330\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" Feb 17 16:15:00.283077 master-0 kubenswrapper[26425]: I0217 16:15:00.281808 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g5dv\" (UniqueName: \"kubernetes.io/projected/b43ed693-18c0-4bc8-a41c-b8fa72325330-kube-api-access-2g5dv\") pod \"collect-profiles-29522415-dwsg2\" (UID: \"b43ed693-18c0-4bc8-a41c-b8fa72325330\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" Feb 17 16:15:00.283077 master-0 kubenswrapper[26425]: I0217 16:15:00.281973 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43ed693-18c0-4bc8-a41c-b8fa72325330-secret-volume\") pod \"collect-profiles-29522415-dwsg2\" (UID: 
\"b43ed693-18c0-4bc8-a41c-b8fa72325330\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" Feb 17 16:15:00.383567 master-0 kubenswrapper[26425]: I0217 16:15:00.383502 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43ed693-18c0-4bc8-a41c-b8fa72325330-config-volume\") pod \"collect-profiles-29522415-dwsg2\" (UID: \"b43ed693-18c0-4bc8-a41c-b8fa72325330\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" Feb 17 16:15:00.384001 master-0 kubenswrapper[26425]: I0217 16:15:00.383759 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g5dv\" (UniqueName: \"kubernetes.io/projected/b43ed693-18c0-4bc8-a41c-b8fa72325330-kube-api-access-2g5dv\") pod \"collect-profiles-29522415-dwsg2\" (UID: \"b43ed693-18c0-4bc8-a41c-b8fa72325330\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" Feb 17 16:15:00.384925 master-0 kubenswrapper[26425]: I0217 16:15:00.384224 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43ed693-18c0-4bc8-a41c-b8fa72325330-secret-volume\") pod \"collect-profiles-29522415-dwsg2\" (UID: \"b43ed693-18c0-4bc8-a41c-b8fa72325330\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" Feb 17 16:15:00.384925 master-0 kubenswrapper[26425]: I0217 16:15:00.384635 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43ed693-18c0-4bc8-a41c-b8fa72325330-config-volume\") pod \"collect-profiles-29522415-dwsg2\" (UID: \"b43ed693-18c0-4bc8-a41c-b8fa72325330\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" Feb 17 16:15:00.395037 master-0 kubenswrapper[26425]: I0217 16:15:00.391186 26425 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43ed693-18c0-4bc8-a41c-b8fa72325330-secret-volume\") pod \"collect-profiles-29522415-dwsg2\" (UID: \"b43ed693-18c0-4bc8-a41c-b8fa72325330\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" Feb 17 16:15:00.400365 master-0 kubenswrapper[26425]: I0217 16:15:00.400319 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g5dv\" (UniqueName: \"kubernetes.io/projected/b43ed693-18c0-4bc8-a41c-b8fa72325330-kube-api-access-2g5dv\") pod \"collect-profiles-29522415-dwsg2\" (UID: \"b43ed693-18c0-4bc8-a41c-b8fa72325330\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" Feb 17 16:15:00.545239 master-0 kubenswrapper[26425]: I0217 16:15:00.545124 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" Feb 17 16:15:01.044196 master-0 kubenswrapper[26425]: I0217 16:15:01.044100 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2"] Feb 17 16:15:01.051317 master-0 kubenswrapper[26425]: W0217 16:15:01.051231 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb43ed693_18c0_4bc8_a41c_b8fa72325330.slice/crio-115c2db016a1ea48096ecd990c80f7f63eeddf74874d403b4eef18c2de25c34f WatchSource:0}: Error finding container 115c2db016a1ea48096ecd990c80f7f63eeddf74874d403b4eef18c2de25c34f: Status 404 returned error can't find the container with id 115c2db016a1ea48096ecd990c80f7f63eeddf74874d403b4eef18c2de25c34f Feb 17 16:15:01.468833 master-0 kubenswrapper[26425]: I0217 16:15:01.468683 26425 generic.go:334] "Generic (PLEG): container finished" podID="b43ed693-18c0-4bc8-a41c-b8fa72325330" containerID="c871660942f9af150a8102e48d96445e9f25c1da03e92aacf885fefc37a2315e" exitCode=0 Feb 17 
16:15:01.469627 master-0 kubenswrapper[26425]: I0217 16:15:01.468847 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" event={"ID":"b43ed693-18c0-4bc8-a41c-b8fa72325330","Type":"ContainerDied","Data":"c871660942f9af150a8102e48d96445e9f25c1da03e92aacf885fefc37a2315e"} Feb 17 16:15:01.469627 master-0 kubenswrapper[26425]: I0217 16:15:01.468928 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" event={"ID":"b43ed693-18c0-4bc8-a41c-b8fa72325330","Type":"ContainerStarted","Data":"115c2db016a1ea48096ecd990c80f7f63eeddf74874d403b4eef18c2de25c34f"} Feb 17 16:15:02.926592 master-0 kubenswrapper[26425]: I0217 16:15:02.926483 26425 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" Feb 17 16:15:02.969760 master-0 kubenswrapper[26425]: I0217 16:15:02.962442 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43ed693-18c0-4bc8-a41c-b8fa72325330-secret-volume\") pod \"b43ed693-18c0-4bc8-a41c-b8fa72325330\" (UID: \"b43ed693-18c0-4bc8-a41c-b8fa72325330\") " Feb 17 16:15:02.969760 master-0 kubenswrapper[26425]: I0217 16:15:02.962565 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43ed693-18c0-4bc8-a41c-b8fa72325330-config-volume\") pod \"b43ed693-18c0-4bc8-a41c-b8fa72325330\" (UID: \"b43ed693-18c0-4bc8-a41c-b8fa72325330\") " Feb 17 16:15:02.969760 master-0 kubenswrapper[26425]: I0217 16:15:02.962668 26425 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g5dv\" (UniqueName: \"kubernetes.io/projected/b43ed693-18c0-4bc8-a41c-b8fa72325330-kube-api-access-2g5dv\") pod 
\"b43ed693-18c0-4bc8-a41c-b8fa72325330\" (UID: \"b43ed693-18c0-4bc8-a41c-b8fa72325330\") " Feb 17 16:15:02.969760 master-0 kubenswrapper[26425]: I0217 16:15:02.966629 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43ed693-18c0-4bc8-a41c-b8fa72325330-kube-api-access-2g5dv" (OuterVolumeSpecName: "kube-api-access-2g5dv") pod "b43ed693-18c0-4bc8-a41c-b8fa72325330" (UID: "b43ed693-18c0-4bc8-a41c-b8fa72325330"). InnerVolumeSpecName "kube-api-access-2g5dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:02.969760 master-0 kubenswrapper[26425]: I0217 16:15:02.967076 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b43ed693-18c0-4bc8-a41c-b8fa72325330-config-volume" (OuterVolumeSpecName: "config-volume") pod "b43ed693-18c0-4bc8-a41c-b8fa72325330" (UID: "b43ed693-18c0-4bc8-a41c-b8fa72325330"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:02.969760 master-0 kubenswrapper[26425]: I0217 16:15:02.968543 26425 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43ed693-18c0-4bc8-a41c-b8fa72325330-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b43ed693-18c0-4bc8-a41c-b8fa72325330" (UID: "b43ed693-18c0-4bc8-a41c-b8fa72325330"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:03.066083 master-0 kubenswrapper[26425]: I0217 16:15:03.066014 26425 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g5dv\" (UniqueName: \"kubernetes.io/projected/b43ed693-18c0-4bc8-a41c-b8fa72325330-kube-api-access-2g5dv\") on node \"master-0\" DevicePath \"\"" Feb 17 16:15:03.066083 master-0 kubenswrapper[26425]: I0217 16:15:03.066062 26425 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43ed693-18c0-4bc8-a41c-b8fa72325330-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 17 16:15:03.066083 master-0 kubenswrapper[26425]: I0217 16:15:03.066073 26425 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43ed693-18c0-4bc8-a41c-b8fa72325330-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 17 16:15:03.497591 master-0 kubenswrapper[26425]: I0217 16:15:03.497548 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" event={"ID":"b43ed693-18c0-4bc8-a41c-b8fa72325330","Type":"ContainerDied","Data":"115c2db016a1ea48096ecd990c80f7f63eeddf74874d403b4eef18c2de25c34f"} Feb 17 16:15:03.497841 master-0 kubenswrapper[26425]: I0217 16:15:03.497828 26425 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="115c2db016a1ea48096ecd990c80f7f63eeddf74874d403b4eef18c2de25c34f" Feb 17 16:15:03.497921 master-0 kubenswrapper[26425]: I0217 16:15:03.497621 26425 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2" Feb 17 16:15:04.036351 master-0 kubenswrapper[26425]: I0217 16:15:04.036282 26425 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"] Feb 17 16:15:04.045375 master-0 kubenswrapper[26425]: I0217 16:15:04.045201 26425 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs"] Feb 17 16:15:04.409392 master-0 kubenswrapper[26425]: I0217 16:15:04.409298 26425 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a230c99f-570a-4822-ad0c-8f8052fc667f" path="/var/lib/kubelet/pods/a230c99f-570a-4822-ad0c-8f8052fc667f/volumes" Feb 17 16:15:55.764109 master-0 kubenswrapper[26425]: I0217 16:15:55.763929 26425 scope.go:117] "RemoveContainer" containerID="531d85836ed5dab3d5cfeea1a836ccd1b3b6e1d0b13e903732c2ea5c862593f9" Feb 17 16:16:51.775676 master-0 kubenswrapper[26425]: I0217 16:16:51.754296 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 16:16:51.775676 master-0 kubenswrapper[26425]: E0217 16:16:51.754762 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:16:51.775676 master-0 kubenswrapper[26425]: E0217 16:16:51.754783 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:16:51.775676 master-0 kubenswrapper[26425]: E0217 16:16:51.754828 26425 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 16:18:53.754813145 +0000 UTC m=+3795.646536963 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:18:53.771563 master-0 kubenswrapper[26425]: I0217 16:18:53.771339 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 16:18:53.772676 master-0 kubenswrapper[26425]: E0217 16:18:53.771595 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:18:53.772676 master-0 kubenswrapper[26425]: E0217 16:18:53.771643 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:18:53.772676 master-0 kubenswrapper[26425]: E0217 16:18:53.771721 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 16:20:55.771698022 +0000 UTC m=+3917.663421850 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:20:55.818527 master-0 kubenswrapper[26425]: I0217 16:20:55.818436 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 16:20:55.819162 master-0 kubenswrapper[26425]: E0217 16:20:55.818626 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:20:55.819162 master-0 kubenswrapper[26425]: E0217 16:20:55.818659 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:20:55.819162 master-0 kubenswrapper[26425]: E0217 16:20:55.818706 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 16:22:57.818689464 +0000 UTC m=+4039.710413282 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:22:57.843257 master-0 kubenswrapper[26425]: I0217 16:22:57.843152 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 16:22:57.844413 master-0 kubenswrapper[26425]: E0217 16:22:57.843402 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:22:57.844413 master-0 kubenswrapper[26425]: E0217 16:22:57.843488 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:22:57.844413 master-0 kubenswrapper[26425]: E0217 16:22:57.843561 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 16:24:59.843540563 +0000 UTC m=+4161.735264391 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:24:59.937566 master-0 kubenswrapper[26425]: I0217 16:24:59.936209 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d3daf534-9a77-49c6-964f-d402c5d5a2ac\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 17 16:24:59.937566 master-0 kubenswrapper[26425]: E0217 16:24:59.936542 26425 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:24:59.937566 master-0 kubenswrapper[26425]: E0217 16:24:59.936607 26425 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:24:59.937566 master-0 kubenswrapper[26425]: E0217 16:24:59.936698 26425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access podName:d3daf534-9a77-49c6-964f-d402c5d5a2ac nodeName:}" failed. No retries permitted until 2026-02-17 16:27:01.936671402 +0000 UTC m=+4283.828395250 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d3daf534-9a77-49c6-964f-d402c5d5a2ac-kube-api-access") pod "installer-3-master-0" (UID: "d3daf534-9a77-49c6-964f-d402c5d5a2ac") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 17 16:26:00.368631 master-0 kubenswrapper[26425]: I0217 16:26:00.368517 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vn6lm/must-gather-nlw64"] Feb 17 16:26:00.369907 master-0 kubenswrapper[26425]: E0217 16:26:00.369886 26425 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43ed693-18c0-4bc8-a41c-b8fa72325330" containerName="collect-profiles" Feb 17 16:26:00.369987 master-0 kubenswrapper[26425]: I0217 16:26:00.369976 26425 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43ed693-18c0-4bc8-a41c-b8fa72325330" containerName="collect-profiles" Feb 17 16:26:00.370272 master-0 kubenswrapper[26425]: I0217 16:26:00.370257 26425 memory_manager.go:354] "RemoveStaleState removing state" podUID="b43ed693-18c0-4bc8-a41c-b8fa72325330" containerName="collect-profiles" Feb 17 16:26:00.371612 master-0 kubenswrapper[26425]: I0217 16:26:00.371592 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vn6lm/must-gather-nlw64" Feb 17 16:26:00.385302 master-0 kubenswrapper[26425]: I0217 16:26:00.377491 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vn6lm"/"kube-root-ca.crt" Feb 17 16:26:00.385302 master-0 kubenswrapper[26425]: I0217 16:26:00.377733 26425 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vn6lm"/"openshift-service-ca.crt" Feb 17 16:26:00.465486 master-0 kubenswrapper[26425]: I0217 16:26:00.454006 26425 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vn6lm/must-gather-bdp57"] Feb 17 16:26:00.465486 master-0 kubenswrapper[26425]: I0217 16:26:00.456379 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vn6lm/must-gather-nlw64"] Feb 17 16:26:00.465486 master-0 kubenswrapper[26425]: I0217 16:26:00.456422 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vn6lm/must-gather-bdp57"] Feb 17 16:26:00.465486 master-0 kubenswrapper[26425]: I0217 16:26:00.456581 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vn6lm/must-gather-bdp57" Feb 17 16:26:00.534730 master-0 kubenswrapper[26425]: I0217 16:26:00.531852 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkqsn\" (UniqueName: \"kubernetes.io/projected/7caee532-3fde-4c03-a5b5-7f8e5c360766-kube-api-access-zkqsn\") pod \"must-gather-nlw64\" (UID: \"7caee532-3fde-4c03-a5b5-7f8e5c360766\") " pod="openshift-must-gather-vn6lm/must-gather-nlw64" Feb 17 16:26:00.534730 master-0 kubenswrapper[26425]: I0217 16:26:00.532484 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7caee532-3fde-4c03-a5b5-7f8e5c360766-must-gather-output\") pod \"must-gather-nlw64\" (UID: \"7caee532-3fde-4c03-a5b5-7f8e5c360766\") " pod="openshift-must-gather-vn6lm/must-gather-nlw64" Feb 17 16:26:00.634541 master-0 kubenswrapper[26425]: I0217 16:26:00.634127 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkqsn\" (UniqueName: \"kubernetes.io/projected/7caee532-3fde-4c03-a5b5-7f8e5c360766-kube-api-access-zkqsn\") pod \"must-gather-nlw64\" (UID: \"7caee532-3fde-4c03-a5b5-7f8e5c360766\") " pod="openshift-must-gather-vn6lm/must-gather-nlw64" Feb 17 16:26:00.634541 master-0 kubenswrapper[26425]: I0217 16:26:00.634392 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7caee532-3fde-4c03-a5b5-7f8e5c360766-must-gather-output\") pod \"must-gather-nlw64\" (UID: \"7caee532-3fde-4c03-a5b5-7f8e5c360766\") " pod="openshift-must-gather-vn6lm/must-gather-nlw64" Feb 17 16:26:00.634752 master-0 kubenswrapper[26425]: I0217 16:26:00.634654 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb2jl\" (UniqueName: 
\"kubernetes.io/projected/97969a31-4c51-40a8-9b54-93757d0ed610-kube-api-access-nb2jl\") pod \"must-gather-bdp57\" (UID: \"97969a31-4c51-40a8-9b54-93757d0ed610\") " pod="openshift-must-gather-vn6lm/must-gather-bdp57" Feb 17 16:26:00.634794 master-0 kubenswrapper[26425]: I0217 16:26:00.634766 26425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/97969a31-4c51-40a8-9b54-93757d0ed610-must-gather-output\") pod \"must-gather-bdp57\" (UID: \"97969a31-4c51-40a8-9b54-93757d0ed610\") " pod="openshift-must-gather-vn6lm/must-gather-bdp57" Feb 17 16:26:00.636127 master-0 kubenswrapper[26425]: I0217 16:26:00.635049 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7caee532-3fde-4c03-a5b5-7f8e5c360766-must-gather-output\") pod \"must-gather-nlw64\" (UID: \"7caee532-3fde-4c03-a5b5-7f8e5c360766\") " pod="openshift-must-gather-vn6lm/must-gather-nlw64" Feb 17 16:26:00.661423 master-0 kubenswrapper[26425]: I0217 16:26:00.661368 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkqsn\" (UniqueName: \"kubernetes.io/projected/7caee532-3fde-4c03-a5b5-7f8e5c360766-kube-api-access-zkqsn\") pod \"must-gather-nlw64\" (UID: \"7caee532-3fde-4c03-a5b5-7f8e5c360766\") " pod="openshift-must-gather-vn6lm/must-gather-nlw64" Feb 17 16:26:00.736860 master-0 kubenswrapper[26425]: I0217 16:26:00.736770 26425 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb2jl\" (UniqueName: \"kubernetes.io/projected/97969a31-4c51-40a8-9b54-93757d0ed610-kube-api-access-nb2jl\") pod \"must-gather-bdp57\" (UID: \"97969a31-4c51-40a8-9b54-93757d0ed610\") " pod="openshift-must-gather-vn6lm/must-gather-bdp57" Feb 17 16:26:00.737093 master-0 kubenswrapper[26425]: I0217 16:26:00.736895 26425 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/97969a31-4c51-40a8-9b54-93757d0ed610-must-gather-output\") pod \"must-gather-bdp57\" (UID: \"97969a31-4c51-40a8-9b54-93757d0ed610\") " pod="openshift-must-gather-vn6lm/must-gather-bdp57" Feb 17 16:26:00.737426 master-0 kubenswrapper[26425]: I0217 16:26:00.737384 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/97969a31-4c51-40a8-9b54-93757d0ed610-must-gather-output\") pod \"must-gather-bdp57\" (UID: \"97969a31-4c51-40a8-9b54-93757d0ed610\") " pod="openshift-must-gather-vn6lm/must-gather-bdp57" Feb 17 16:26:00.748917 master-0 kubenswrapper[26425]: I0217 16:26:00.748287 26425 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vn6lm/must-gather-nlw64" Feb 17 16:26:00.752064 master-0 kubenswrapper[26425]: I0217 16:26:00.752022 26425 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb2jl\" (UniqueName: \"kubernetes.io/projected/97969a31-4c51-40a8-9b54-93757d0ed610-kube-api-access-nb2jl\") pod \"must-gather-bdp57\" (UID: \"97969a31-4c51-40a8-9b54-93757d0ed610\") " pod="openshift-must-gather-vn6lm/must-gather-bdp57" Feb 17 16:26:00.842597 master-0 kubenswrapper[26425]: I0217 16:26:00.842519 26425 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vn6lm/must-gather-bdp57" Feb 17 16:26:01.318338 master-0 kubenswrapper[26425]: I0217 16:26:01.318291 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vn6lm/must-gather-nlw64"] Feb 17 16:26:01.318726 master-0 kubenswrapper[26425]: W0217 16:26:01.318684 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7caee532_3fde_4c03_a5b5_7f8e5c360766.slice/crio-efa1c43e71e8b23670d6f1dfaafbe1aa3d63316ff9af087a6f847437fee32186 WatchSource:0}: Error finding container efa1c43e71e8b23670d6f1dfaafbe1aa3d63316ff9af087a6f847437fee32186: Status 404 returned error can't find the container with id efa1c43e71e8b23670d6f1dfaafbe1aa3d63316ff9af087a6f847437fee32186 Feb 17 16:26:01.321789 master-0 kubenswrapper[26425]: I0217 16:26:01.321255 26425 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:26:01.352901 master-0 kubenswrapper[26425]: I0217 16:26:01.352840 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vn6lm/must-gather-nlw64" event={"ID":"7caee532-3fde-4c03-a5b5-7f8e5c360766","Type":"ContainerStarted","Data":"efa1c43e71e8b23670d6f1dfaafbe1aa3d63316ff9af087a6f847437fee32186"} Feb 17 16:26:01.427560 master-0 kubenswrapper[26425]: I0217 16:26:01.427484 26425 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vn6lm/must-gather-bdp57"] Feb 17 16:26:01.428899 master-0 kubenswrapper[26425]: W0217 16:26:01.428852 26425 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97969a31_4c51_40a8_9b54_93757d0ed610.slice/crio-a76908a122764dd241ad08b34237f884823c085e238181a92a529e675adbf4d0 WatchSource:0}: Error finding container a76908a122764dd241ad08b34237f884823c085e238181a92a529e675adbf4d0: Status 404 returned error can't find the container 
with id a76908a122764dd241ad08b34237f884823c085e238181a92a529e675adbf4d0 Feb 17 16:26:02.366425 master-0 kubenswrapper[26425]: I0217 16:26:02.366370 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vn6lm/must-gather-bdp57" event={"ID":"97969a31-4c51-40a8-9b54-93757d0ed610","Type":"ContainerStarted","Data":"a76908a122764dd241ad08b34237f884823c085e238181a92a529e675adbf4d0"} Feb 17 16:26:03.384427 master-0 kubenswrapper[26425]: I0217 16:26:03.384332 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vn6lm/must-gather-nlw64" event={"ID":"7caee532-3fde-4c03-a5b5-7f8e5c360766","Type":"ContainerStarted","Data":"9e44845428e1a5050329e03f150d0648324d3c22597435c9f5e5d459eb73fbed"} Feb 17 16:26:03.384427 master-0 kubenswrapper[26425]: I0217 16:26:03.384423 26425 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vn6lm/must-gather-nlw64" event={"ID":"7caee532-3fde-4c03-a5b5-7f8e5c360766","Type":"ContainerStarted","Data":"81619a285a1fadd751426d95d625960e0c253ebe50b6c14954c38fc012cb9567"} Feb 17 16:26:03.424599 master-0 kubenswrapper[26425]: I0217 16:26:03.420366 26425 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vn6lm/must-gather-nlw64" podStartSLOduration=2.3969226 podStartE2EDuration="3.420342635s" podCreationTimestamp="2026-02-17 16:26:00 +0000 UTC" firstStartedPulling="2026-02-17 16:26:01.32121442 +0000 UTC m=+4223.212938238" lastFinishedPulling="2026-02-17 16:26:02.344634455 +0000 UTC m=+4224.236358273" observedRunningTime="2026-02-17 16:26:03.406079393 +0000 UTC m=+4225.297803221" watchObservedRunningTime="2026-02-17 16:26:03.420342635 +0000 UTC m=+4225.312066443" Feb 17 16:26:04.632137 master-0 kubenswrapper[26425]: I0217 16:26:04.631549 26425 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-version_cluster-version-operator-649c4f5445-7kdb7_626c4f7a-59ee-45da-9198-05dd2c42ac42/cluster-version-operator/0.log" Feb 17 16:26:05.272034 master-0 kubenswrapper[26425]: I0217 16:26:05.271957 26425 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-649c4f5445-7kdb7_626c4f7a-59ee-45da-9198-05dd2c42ac42/cluster-version-operator/1.log"
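The entries above share a common shape: a journald prefix (timestamp, hostname, `kubenswrapper[pid]`) followed by a klog header (severity letter `I`/`W`/`E`, date `MMDD`, time, pid, `file.go:line]`) and the message. A minimal sketch of pulling those fields apart, e.g. to count the recurring `"kube-root-ca.crt" not registered` errors, might look like the following; the field names and helper are illustrative, not part of any kubelet tooling:

```python
import re

# Two kubelet journal entries in the format seen throughout this log;
# the message text is copied verbatim from the entries above.
SAMPLE = [
    'Feb 17 16:14:49.682568 master-0 kubenswrapper[26425]: E0217 '
    '16:14:49.682527 26425 projected.go:288] Couldn\'t get configMap '
    'openshift-kube-apiserver/kube-root-ca.crt: object '
    '"openshift-kube-apiserver"/"kube-root-ca.crt" not registered',
    'Feb 17 16:15:00.545239 master-0 kubenswrapper[26425]: I0217 '
    '16:15:00.545124 26425 util.go:30] "No sandbox for pod can be found. '
    'Need to start a new one"',
]

# klog prefix: severity letter (I/W/E), MMDD, HH:MM:SS.micros, pid, file:line]
KLOG = re.compile(
    r'([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) +(\d+) ([\w.]+:\d+)\] (.*)')

def parse_klog(line):
    """Return a dict of klog fields from a journal line, or None if absent."""
    m = KLOG.search(line)
    if m is None:
        return None
    sev, mmdd, hhmmss, pid, loc, msg = m.groups()
    return {"severity": sev, "date": mmdd, "time": hhmmss,
            "pid": pid, "source": loc, "message": msg}

# Filter for error-level records mentioning the unregistered ConfigMap.
errors = [e for e in map(parse_klog, SAMPLE)
          if e and e["severity"] == "E" and "not registered" in e["message"]]
for e in errors:
    print(e["source"], "->", e["message"][:60])
```

Run against the full journal, a filter like this makes the retry cadence visible: each `nestedpendingoperations.go:348` entry reports `durationBeforeRetry 2m2s`, which matches the roughly two-minute spacing of the repeated `MountVolume.SetUp failed` entries for `installer-3-master-0`.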